Noindex search pages?
-
Is it best to noindex search results pages, exclude them using robots.txt, or both?
-
I think you're possibly trying to solve a problem that you don't have!
As long as you've got a good information architecture and are submitting a dynamically updated sitemap, I don't think you need to worry about this. If you've got a blog, then sharing those posts on Google+ can be a good way to get them indexed quickly.
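For reference, a dynamically updated sitemap is just an XML sitemap that's regenerated whenever content changes. A minimal sketch - the URL and date here are hypothetical:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/blog/latest-post/</loc>
        <lastmod>2013-06-01</lastmod>
      </url>
    </urlset>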
-
Our search results are not appearing in Google's index and we are not having any issues with getting our content discovered, so I really don't mind disallowing search pages and noindexing them. I was just wondering what advantage there is to disallowing and what I would lose if I only noindex. Isn't it better to allow many avenues of content discovery for the bots?
-
Don't worry. I'm not saying that in your case it'll be a "spider trap". Where I have seen it cause problems was on a site search results page that included a "related searches" section and had a bunch of other technical issues.
Are your search results appearing in Google's index?
If you have a valid reason for allowing spiders to crawl this content, then yes, you'll want to just noindex the pages. Personally, I would challenge why you want to do this - is there a bigger problem with getting search engines to discover new content on your site?
-
Thanks for the response, Doug.
The truth is that it's unlikely that the spiders will find the search results, but if they do, why should I consider it a "spider trap"? Even though I don't want the search results pages indexed, I do want the spiders crawling this content. That's why I'm wondering: is it better to just noindex and not disallow in robots.txt?
-
Using the noindex directive will (should) prevent search engines from including the content in their search results - which is good, but it still means that the search engines are crawling this content. I've seen one (unlikely) instance where trying to crawl search pages created a bit of a spider trap, wasting "crawl budget".
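For clarity, the noindex directive usually goes in a robots meta tag in the page's head (it can also be sent as an X-Robots-Tag HTTP header). A minimal example:

    <meta name="robots" content="noindex, follow">

The "follow" value tells spiders they may still follow the links on the page even though the page itself should stay out of the index.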
So the simplest approach is usually to use the robots.txt to disallow access to the search pages.
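A minimal robots.txt sketch, assuming your search results live under a /search/ path (adjust the pattern to match your own URL structure):

    User-agent: *
    Disallow: /search/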
If you've got search results in the index already, then you'll want to think about continuing to let Google crawl the pages for a while and using the noindex to help get them de-indexed.
Once this has been done, then you can disallow the site search results in your robots.txt.
Another thing to consider is how the search spiders are finding your search results in the first place...
-
I think it's better to use robots.txt. With that, you won't have a problem if someone links to your page.
For extra safety, you can also add the noindex meta tag discussed above.
But, as always, it's up to the spider whether it respects robots.txt, links, or meta tags. If your page is private, make it truly private and put it behind a login system. If you don't, some "bad" spiders can still read and cache your content.
-
Noindex and blocking via robots.txt do pretty much the same thing, but you should only do this if you don't want pages to be indexed; for more sensitive areas of the site I would block robots too.
If it's to avoid duplicate content, don't forget you can use the rel=canonical tag - see the example below.
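A minimal sketch of a canonical tag, placed in the head of the duplicate page and pointing at the preferred URL (the domain and path here are hypothetical):

    <link rel="canonical" href="http://www.example.com/preferred-page/">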