Should I use meta noindex and robots.txt disallow?
-
Hi, we have an alternate "list view" version of every one of our search results pages.
The list view has its own URL, indicated by a URL parameter.
I'm concerned about wasting our crawl budget on all these list view pages, which effectively double the number of pages that need crawling.
When they were first launched, I had the noindex meta tag placed on all list view pages, but I'm concerned that they are still being crawled.
Should I therefore go ahead and also apply a robots.txt disallow on that parameter to ensure that no crawling occurs? Or will Googlebot/Bingbot stop crawling those pages over time anyway? I assume that noindex still means "crawl"...
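For reference, this is roughly what the list view pages carry now (the ?view=list parameter and the URLs below are just stand-ins for our real ones):
<meta name="robots" content="noindex">
So https://www.example.com/search?q=widgets&view=list has that tag, while the default view at https://www.example.com/search?q=widgets does not.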
Thanks
-
Hi,
Thanks, I will do some testing to confirm that this behaves how I would like it to
-
If all of the pages are 100% not indexed, then I would block them in robots.txt. Google's John Mueller confirmed to me that Googlebot will continue to crawl every link to check whether a nofollow or noindex has changed status.
As a result, we blocked our pages with robots.txt and saw a great increase in index/crawl rates on the pages we want Google to pay attention to. It also cuts down on wasted server resources.
However, if any of those pages are currently indexed and you block them in robots.txt, Googlebot will never be able to crawl them to see the noindex, which means they could stay indexed permanently.
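As a rough sketch (assuming your list view is flagged by a ?view=list style parameter - substitute whatever yours actually is), the robots.txt rule would look something like this:
User-agent: *
Disallow: /*view=list
Google and Bing both support the * wildcard in robots.txt, but double-check that the pattern only matches the list view URLs and not your normal search results pages.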
I hope that answers all of your questions!
-
When you say:
nofollow will tell the crawlers to not crawl the page
I believe you mean to say that this will tell the crawlers not to crawl the links on the page; the page itself is still "crawled", is it not?
But yes, you are right to say that once the robots.txt disallow is in place, the meta tag will not be seen and is thus moot (at which point I may as well take it off).
It would be nice to be able to say "don't crawl this and don't put it in the index"... but is there a way?
-
noindex only tells the search crawlers to not include the page in the index but still allows for them to crawl the page. nofollow will tell the crawlers to not crawl the page.
robots.txt will accomplish this as well, but using both would, I think, be overkill.
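For reference, a minimal sketch of the meta tag syntax being discussed (the directives can also be combined in a single tag):
<meta name="robots" content="noindex"> - keep the page out of the index (it can still be crawled)
<meta name="robots" content="nofollow"> - applies to the links on the page
<meta name="robots" content="noindex, nofollow">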
Related Questions
-
What is better for Meta description?
Hi everybody, I noticed that a lot of websites prefer their meta description to be the first words of the content inside. I, on the other hand, thought that Google would prefer the meta description to be a peek at what is going to be inside. Can anyone explain which is better? Thanks 🙂
Intermediate & Advanced SEO | roeesa
-
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence of what is better to use for pages with thin content that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages, which are pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling focuses on just the important pages that are indexed, which may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments" etc., but the above question is the important one here.
Intermediate & Advanced SEO | khi5
-
Can URLs blocked with robots.txt hurt your site?
We have about 20 testing environments blocked by robots.txt, and these environments contain duplicates of our indexed content. They appear in Google's index as blocked by robots.txt. Can they still count against us or hurt us? I know the best practice to permanently remove these would be to use the noindex tag, but I'm wondering, if we leave them the way they are, whether they can still hurt us.
Intermediate & Advanced SEO | nicole.healthline
-
Robots.txt error message in Google Webmaster from a later date than the page was cached, how is that?
I have error messages in Google Webmaster that state that Googlebot encountered errors while attempting to access the robots.txt. The last date that this was reported was on December 25, 2012 (Merry Christmas), but the last cache date was November 16, 2012 (http://webcache.googleusercontent.com/search?q=cache%3Awww.etundra.com/robots.txt&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a). How could I get this error if the page hasn't been cached since November 16, 2012?
Intermediate & Advanced SEO | eTundra
-
Noindex, Nofollow to previous domain
Hi, my programmer recently made a horrible mistake by adding noindex, nofollow to our website without me noticing for two days. At the same time he did it, we bought a new domain and redirected the old domain to the new domain. The old domain is http://www.websitebuildersworld.com and the new one is http://www.websiteplanet.com. Unfortunately, I didn't notice the noindex, nofollow while it was on the old domain, and I redirected it to websiteplanet.com before I fixed it. I fixed the problem around 10 hours ago on the new domain (www.websiteplanet.com), but the old domain hasn't been indexed again (yet), so for example if you search for WebsiteBuildersWorld in Google you will not reach the homepage, as Google dropped it because of the noindex, nofollow. My question is: do you think it will be fixed and Google will return the websitebuildersworld homepage to its search results and then redirect it to websiteplanet? Or, because I redirected websitebuildersworld.com to websiteplanet.com before letting Google crawl websitebuildersworld.com without the noindex, nofollow, will it not get indexed again? I hope I explained the problem well enough. Looking forward to your valuable replies. Thanks.
Intermediate & Advanced SEO | Ouzan
-
Can I use rel=canonical and then remove it?
Hi all! I run a ticketing site and I am considering using rel=canonical temporarily. In Europe, when someone is looking for tickets for a soccer game, they search for them differently depending on which city the game is played in. I.e.: "liverpool arsenal tickets" - game played in the 1st leg in 2012; "arsenal liverpool tickets" - game played in the 2nd leg in 2013. We have two different events, with two different unique texts, but sometimes Google chooses the 2013 one over the closest one, especially for queries without dates or years. I don't want to remove the second game from our site - exceptionally, some people browse our website and buy tickets months in advance. So I am considering placing a rel=canonical on the game played in 2013, pointing to the game played in a few weeks. After that, I would remove it. Would that make any sense? Thanks!
Intermediate & Advanced SEO | jorgediaz
-
Can I use a "no index, follow" command in a robot.txt file for a certain parameter on a domain?
I have a site that produces thousands of pages via file uploads. These pages are then linked to by users so others can download what they have uploaded. Naturally, the client has blocked the parameter which precedes these pages in an attempt to keep them from being indexed. What they did not consider was that these pages are attracting hundreds of thousands of links that are not passing any authority to the main domain, because they're being blocked in robots.txt. Can I allow Google to follow, but NOT index, these pages via a robots.txt file, or would this have to be done on a page-by-page basis?
Intermediate & Advanced SEO | PapaRelevance
-
Does using robots.txt to block pages decrease search traffic?
I know you can use robots.txt to tell search engines not to spend their resources crawling certain pages. So, if you have a section of your website that is good content but is never updated, and you want the search engines to index new content faster, would it work to block the good, unchanged content with robots.txt? Would this content lose any search traffic if it were blocked by robots.txt? Does anyone have any available case studies?
Intermediate & Advanced SEO | nicole.healthline