What are the negative implications of listing URLs in a sitemap that are then blocked in the robots.txt?
-
In running a crawl of a client's site, I can see several URLs listed in the sitemap that are then blocked by the robots.txt file.
Other than perhaps using up crawl budget, are there any other negative implications?
-
I highly doubt it would affect rankings through low-quality issues, but it will surface sitemap warnings in your GWT console. That issue is technically classified as a 'Warning' rather than an 'Error'. The right thing to do in that scenario is to take the robots.txt block off and just use a 'noindex' tag on those pages instead. That way they can stay in the sitemap, but they won't show up in the index. Otherwise, remove them from the sitemap if you don't want the warnings in GWT.
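For illustration, a minimal sketch of that swap, assuming the blocked pages live under a hypothetical /members/ path (the actual paths will differ). The key point is that Googlebot can only see a noindex tag on pages it is allowed to crawl, which is why the Disallow rule has to come off first.

Before, in robots.txt (blocks crawling, so any noindex tag is never seen):
User-agent: *
Disallow: /members/

After: remove that Disallow rule, and add this tag to the head of each page instead:
<meta name="robots" content="noindex">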
-
I personally don't think there is any SEO penalty in doing it, although I do think it will skew the metric in GWT that shows how many pages have been submitted versus how many have been indexed. I find that metric useful, and it stops being useful if a lot of the submitted pages are blocked by robots.txt.
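If you want to reproduce the original crawl finding yourself, here is a minimal Python sketch that cross-checks a sitemap against robots.txt; the site root is a placeholder, and it assumes the standard /sitemap.xml location. One caveat: urllib.robotparser follows the original robots.txt spec and does not understand Google's * and $ wildcards, so treat the results as approximate when wildcard rules are involved.

import urllib.request
import xml.etree.ElementTree as ET
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder site root

# Parse the site's robots.txt with the standard-library parser
rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()

# Fetch the sitemap and extract every <loc> entry
with urllib.request.urlopen(SITE + "/sitemap.xml") as resp:
    tree = ET.parse(resp)

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
for loc in tree.findall(".//sm:loc", ns):
    url = loc.text.strip()
    if not rp.can_fetch("Googlebot", url):
        print("In sitemap but blocked by robots.txt:", url)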
Related Questions
-
Can I safely block my product listing from search? Does it even make sense?
Hi, I have an ecommerce website with more than 50k URLs, and only 10% or so are getting crawled regularly by Google.
Product listing pages represent roughly 80% of these 50k pages. Trying to improve this, I was thinking of removing all (most?) of my product listings from search (via robots.txt), keeping only the product pages themselves and the product categories. My organic situation since Jan 2019:
Users: 2,300,000 (of which 9% visit product listing pages)
Page views: 8,000,000 (of which 5% are product listing pages). Am I about to unleash armageddon (or more like hara-kiri) on my website by doing so, or will I actually get Google to crawl much more relevant resources (product pages, product categories, blog content and so on)? Thanks,
G
Technical SEO | GhillC
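For what it's worth, a hedged sketch of the robots.txt change being weighed above, assuming (hypothetically) that listing pages sit under a /listing/ path. The real rules depend entirely on the site's URL patterns, and any sitemap entries for the blocked URLs should also be removed to avoid the warnings discussed at the top of this thread.

User-agent: *
# Hypothetical path for product listing pages
Disallow: /listing/
# Product and category pages live elsewhere (e.g. /product/...), match
# no rule, and so remain crawlable.
-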
Sitemaps, 404s and URL structure
Hi All! I recently acquired a client and noticed over 1,300 404s in Search Console, all starting around late October this year. What's strange is that I can access the 404ing pages by copying and pasting the URLs, and via inbound links from other sites. I suspect the issue might have something to do with sitemaps. The site has five sitemaps, generated by the Yoast plugin. Two sitemaps seem to be working (pages being indexed); three seem to be broken (pages have warnings and errors, and nothing shows up as indexed). The pages listed in the three broken sitemaps seem to be the same pages giving 404 errors. I'm wondering if automatic URL structure might be the culprit here. For example, one sitemap that works is called newsletter-sitemap.xml, and all the URLs it lists follow the structure http://example.com/newsletter/post-title. Whereas one sitemap that doesn't work is called culture-event-sitemap.xml, and the URLs underneath follow the structure http://example.com/post-title. Could it be that these URLs are not being crawled / found because they don't follow the structure http://example.com/culture-event/post-title? If not, any other ideas? Thank you for reading this long post and helping out a relatively new SEO!
Technical SEO | DanielFeldman
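As a quick way to confirm which URLs in a suspect sitemap actually return 404s (complementing the robots.txt cross-check sketched earlier in this thread), a minimal Python sketch using the culture-event-sitemap.xml name from the question; the hostname is a placeholder.

import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "http://example.com/culture-event-sitemap.xml"  # placeholder host
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP) as resp:
    tree = ET.parse(resp)

# Report the HTTP status of every URL the sitemap lists
for loc in tree.findall(".//sm:loc", ns):
    url = loc.text.strip()
    try:
        status = urllib.request.urlopen(url).status
    except urllib.error.HTTPError as err:
        status = err.code  # 4xx/5xx responses raise HTTPError in urllib
    print(status, url)
-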
Robots.txt on pages with a 301 redirect
We currently have a series of help pages that we would like to disallow in our robots.txt. The thing is that these help pages are located on our old website, which now has a 301 redirect to the current site. What is the proper way to go about this? 1- Add the pages we want to disallow to the robots.txt of the new website? 2- Break the redirect momentarily and add the pages to the robots.txt of the old one? Thanks
Technical SEO | Kilgray
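One detail worth keeping in mind for the question above: robots.txt is fetched per hostname, so rules meant for old-site URLs belong in the file served by the old hostname. A sketch, assuming the help pages live under a hypothetical /help/ path on the old domain:

# Served as http://old-site.example/robots.txt (hostname is a placeholder)
User-agent: *
Disallow: /help/

The catch is that if the blanket 301 also redirects /robots.txt itself, crawlers will follow the redirect and read the new site's file instead, so these rules may never take effect without an exception for robots.txt.
-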
URL Structure
I'm going through the process of redesigning our website, and the URL structure was brought up. We currently have our URLs structured as domain.com/keyword. It seems that some people think setting your URLs up to look like: domain.com/directory/keyword makes more sense from a user's perspective, and from a search engine's perspective. With our directories labeled as services, solutions, clients - I see no value in adding directories as it dilutes the keyword and brings the keyword further away from the domain. Are there situations where adding a directory before the page in the URL makes sense? If anyone has data showing the difference between the two that'd be great! Thanks, Brian
Technical SEO | PrasoonGoel
-
Changed all web page URLs to new, updated ones - keywords still pick up the old URLs
A month ago we updated our website, and in doing so we created new URLs for each page. Under "On-Page", the keywords we track for rankings are still reporting information on the old URLs of our website. Slowly, some new URLs are popping up. I'm wondering if there's a way I can manually make the keywords report information from the new URLs.
Technical SEO | Champions
-
How to find original URLs after hosting company added canonical URLs, URL rewrites and duplicate content
We recently changed hosting companies for our ecommerce website. The hosting company added some functionality such that duplicate content and/or mirrored pages now appear in the search engines. To fix this problem, the hosting company created both canonical URLs and URL rewrites. Now we have page A (the original page with all the link juice) and page B (the new page with no link juice or SEO value). Both pages have the same content, with different URLs. I understand that a canonical URL is the way to tell the search engines which page is preferred in cases of duplicate content and mirrored pages. I also understand that a canonical URL tells the search engine that page B is a copy of page A, and that page A is the preferred page to index. The problem we now face is that the hosting company declared page A a copy of page B, rather than the other way around. But page A is the original page with the SEO value and link juice, while page B is the new page with no value. As a result, the search engines are now prioritizing the newly created page over the original one. I believe the solution is to reverse this and declare page B (the new page) a copy of page A (the original page). I would then simply need to set the original URL as the canonical URL for the duplicate pages. The problem is, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that carry this SEO value. I figure that if I can find the backlinks to the original page, I can find out the original web address of the original pages. My question is: how can I search for backlinks on the web in such a way that I can figure out the URL all of these backlinks point to, in order to make that URL the canonical URL for all the new, duplicate pages?
Technical SEO | CABLES
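For reference, the canonical hint described in the question is a single tag in the head of each duplicate page; assuming hypothetical URLs, pointing page B back at page A would look like this:

<!-- In the <head> of page B (the duplicate), naming page A (the original) as canonical -->
<link rel="canonical" href="https://www.example.com/page-a">
-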
Shorter URLs
Hi, is there real value in having the keywords in the URL structure? We could use the URL mybrand.com/software/tablets/ipad/supertrader.html, or instead have the CMS create the shorter version mybrand.com/supertrader.html and just optimize this page for the keyword 'supertrader ipad software'.
Technical SEO | FXDD
-
Can I Disallow Faceted Nav URLs - Robots.txt
I have been disallowing /*?, so I know that works without affecting crawling. I am wondering if I can also disallow the faceted nav URLs, so: Disallow: /category.html/*? Disallow: /category2.html/*? Disallow: /category3.html/*? The goal is to prevent price-faceted URLs like /category.html?price=1%2C1000 and /category.html?price=1%2C1000&product_material=88 from being cached. Thanks!
Technical SEO | tylerfraser
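One hedged observation on the rules above: under Google's wildcard matching, the existing Disallow: /*? rule already matches any URL containing a query string, so the per-category rules would be redundant rather than harmful.

User-agent: *
Disallow: /*?
# Under Google's wildcard semantics, this one rule already matches
# /category.html?price=1%2C1000 and
# /category.html?price=1%2C1000&product_material=88.
# Note: as written, Disallow: /category.html/*? would not match
# /category.html?price=1%2C1000 anyway (there is no slash after .html
# in the URL), but the broader /*? rule covers it. The robots.txt
# tester in Search Console can confirm this against the live file.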