What are the negative implications of listing URLs in a sitemap that are then blocked in the robots.txt?
-
While running a crawl of a client's site, I can see several URLs listed in the sitemap that are then blocked by the robots.txt file.
Other than perhaps using up crawl budget, are there any other negative implications?
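This kind of conflict is easy to detect before submitting a sitemap. A minimal sketch using only the Python standard library; the robots.txt rules and URLs here are invented examples, not the client's actual site:

```python
# Flag sitemap URLs that robots.txt disallows, using the stdlib parser.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Disallow: /search
"""

SITEMAP_URLS = [
    "https://example.com/products/widget",
    "https://example.com/private/report",
    "https://example.com/search?q=widgets",
]

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Any URL Googlebot cannot fetch will trigger a sitemap warning.
blocked = [u for u in SITEMAP_URLS if not parser.can_fetch("*", u)]
print(blocked)
```

Running a check like this against the real sitemap and robots.txt surfaces every URL that would generate a warning.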
-
I highly doubt it would affect rankings due to low-quality issues, but it will show up as sitemap warnings in your GWT console. That issue is technically classified as a 'Warning' rather than an 'Error'. The right thing to do in that scenario is to take the robots.txt block off and use a 'noindex' tag on the pages instead. That way they can stay in the sitemap but they won't show up in the index. Otherwise, you should remove them from the sitemap if you don't want the warnings in GWT.
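A minimal sketch of that swap (the paths here are placeholders): first delete the `Disallow` rule covering those pages from robots.txt so Googlebot can crawl them, then add a robots meta tag to each page:

```html
<!-- In the <head> of each page you want crawled but NOT indexed -->
<meta name="robots" content="noindex">
```

For non-HTML resources, the same directive can be sent as an `X-Robots-Tag: noindex` HTTP response header. The key point is that noindex only works if the page is crawlable; a robots.txt block stops Googlebot from ever seeing the tag.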
-
I personally do not think there is any SEO penalty in doing it. However, it will skew the metric in GWT that shows how many pages have been submitted versus how many have been indexed. I find that metric useful, and it becomes meaningless when a lot of the submitted pages are blocked by robots.txt.
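If you take the other route and keep the robots.txt block, the blocked URLs can be filtered out of the sitemap automatically before submission. A sketch under assumed inputs (the sitemap XML and robots rules below are invented for illustration):

```python
# Drop robots.txt-blocked URLs from a sitemap so the submitted/indexed
# metric in GWT stays meaningful.
import xml.etree.ElementTree as ET
from urllib.robotparser import RobotFileParser

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
SITEMAP = f"""<urlset xmlns="{NS}">
  <url><loc>https://example.com/products/widget</loc></url>
  <url><loc>https://example.com/private/report</loc></url>
</urlset>"""

parser = RobotFileParser()
parser.parse(["User-agent: *", "Disallow: /private/"])

root = ET.fromstring(SITEMAP)
# Remove every <url> entry whose <loc> Googlebot is not allowed to fetch.
for url_el in list(root.findall(f"{{{NS}}}url")):
    loc = url_el.find(f"{{{NS}}}loc").text
    if not parser.can_fetch("*", loc):
        root.remove(url_el)

remaining = [el.text for el in root.iter(f"{{{NS}}}loc")]
print(remaining)
```

After filtering, only crawlable URLs remain in the sitemap, so GWT would report no "blocked by robots.txt" warnings for it.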