Noindex search pages?
-
Is it best to noindex search results pages, exclude them using robots.txt, or both?
-
I think you're possibly trying to solve a problem that you don't have!
As long as you've got a good information architecture and are submitting a dynamically updated sitemap, I don't think you need to worry about this. If you've got a blog, then sharing those posts on Google+ can be a good way to get them indexed quickly.
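As a rough illustration (the URL and date here are just placeholders), a dynamically generated XML sitemap only needs to list your canonical URLs and pick up new content as it's published:
```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/blog/new-post/</loc>
    <lastmod>2015-06-01</lastmod>
  </url>
</urlset>
```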
-
Our search results are not appearing in Google's index and we are not having any issues with getting our content discovered, so I really don't mind disallowing search pages and noindexing them. I was just wondering what advantage there is to disallowing and what I would lose if I only noindex. Isn't it better to allow many avenues of content discovery for the bots?
-
Don't worry, I'm not saying that in your case it'll be a "spider trap". Where I have seen it cause problems was on a site whose search results pages included a "related searches" block, combined with a bunch of other technical issues.
Are your search results appearing in Google's index?
If you have a valid reason for allowing spiders to crawl this content, then yes, you'll want to just noindex them. Personally, I would challenge why you want to do this - is there a bigger problem with getting search engines to discover new content on your site?
-
Thanks for the response, Doug.
The truth is that it's unlikely the spiders will find the search results, but if they do, why should I consider it a "spider trap"? Even though I don't want the search results pages indexed, I do want the spiders crawling this content. That's why I'm wondering whether it's better to just noindex and not disallow in robots.txt.
-
Using the noindex directive will (or should) prevent search engines from including the content in their search results - which is good, but it still means that the search engines are crawling this content. I've seen one (unlikely) instance where trying to crawl search pages created a bit of a spider trap, wasting "crawl budget".
So the simplest approach is usually to use the robots.txt to disallow access to the search pages.
If you've got search results in the index already, then you'll want to think about continuing to let Google crawl the pages for a while and using the noindex to help get them de-indexed.
Once this has been done, then you can disallow the site search results in your robots.txt.
Another thing to consider is how the search spiders are finding your search results in the first place...
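To make that concrete, here's a rough sketch assuming your site search lives under a /search/ path (adjust the path to your own URL structure). While you're de-indexing, each search results page carries a noindex tag and stays crawlable; once the pages have dropped out of the index, you add the disallow rule:
```
<!-- Step 1: on each search results page, while it is still crawlable -->
<meta name="robots" content="noindex, follow">

# Step 2: in robots.txt, once the pages have dropped out of the index
User-agent: *
Disallow: /search/
```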
-
I think it's better to use robots.txt. With that, you don't have a problem if someone links to your page.
For extra safety you can also add a meta robots tag for this.
But, as always, it's up to the spider whether it respects robots.txt, links, or meta tags. If your page is meant to be private, make it truly private and put it behind an authentication system. If you don't, some "bad" spiders can still read and cache your content.
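On that last point: robots.txt and meta tags are only requests, so genuinely private pages need real access control. As a minimal sketch for Apache (the realm name and password file path are placeholders), HTTP Basic authentication in an .htaccess file would look like this:
```
AuthType Basic
AuthName "Private area"
AuthUserFile /path/to/.htpasswd
Require valid-user
```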
-
Noindex and blocking in robots.txt accomplish much the same thing, but you should only do either if you don't want the pages indexed. For more secure areas of the site I would block robots too.
If it's to avoid duplicate content, don't forget you can use the rel=canonical tag.
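For example, a duplicate or filtered product URL can point the engines at the preferred version with a canonical link element in its head (the example.com URL is just a placeholder):
```
<link rel="canonical" href="https://www.example.com/category/product/">
```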
-
Related Questions
-
Which is the best option for these pages?
Hi Guys, We have product pages on our site which have duplicate content, and the search volume for people searching for these products is very, very small. Also, if we add unique content, we could face keyword cannibalisation issues with category/sub-category pages. Now, based on proper SEO best practice, we should add rel canonical tags from these product pages to the next relevant page.
Pros:
- Can rank for product-oriented keywords, but search volume is very small.
- Any link equity to these pages passed due to the rel canonical tag would be very small, as these pages barely get any links.
Cons:
- Time and effort involved in adding rel canonical tags.
- Even if we do add rel canonical tags, if Google doesn't deem them relevant then they might be ignored, causing duplicate content issues.
- Time and effort involved in making all the content unique - not really worth it; again, very minimal searchers. Plus, if we do make it unique, then we face keyword cannibalisation issues.
What do you think would be the optimal solution to this? I'm thinking just implementing a: Across all these product based pages. Keen to hear thoughts? Cheers.
Intermediate & Advanced SEO | | seowork2140 -
Will Reducing the Number of Low Page Authority Pages Increase Domain Authority?
Our commercial real estate site (www.nyc-officespace-leader.com) contains about 800 URLs. Since 2012 the domain authority has dropped from 35 to about 20, and ranking and traffic have dropped significantly since then. The site has about 791 URLs. Many are set to noindex. A large percentage of these pages have a Moz page authority of only "1". It is puzzling that some pages with similar content to those "1" pages rank much better, in some cases "15". If we remove or consolidate the poorly ranked pages, will the overall page authority and ranking of the site improve? Would taking the following steps help?
1. Remove or consolidate poorly ranking unnecessary URLs?
2. Update content on poorly ranking URLs that are important?
3. Create internal text links (as opposed to links from menus) to critical pages?
A Moz crawl of our site's URLs is visible at the link below. I am wondering if the structure of the site is just not optimized for ranking and what can be done to improve it. Thanks. https://www.dropbox.com/s/oqchfqveelm1q11/CRAWL www.nyc-officespace-leader.com (1).csv?dl=0
Thanks, Alan
Intermediate & Advanced SEO | | Kingalan1
-
How do we decide which pages to index/de-index? Help for a 250k page site
At Siftery (siftery.com) we have about 250k pages, most of them reflected in our sitemap. After submitting a sitemap we started seeing an increase in the number of pages Google indexed, though in the past few weeks progress has slowed to a crawl at about 80k pages, and in fact has been coming down very marginally. Due to the nature of the site, a lot of the pages likely look very similar to search engines. We've also broken down our sitemap into an index, so we know that most of the indexation problems are coming from a particular type of page (company profiles). Given these facts, what do you recommend we do? Should we de-index all of the pages that are not being picked up by the Google index (and are therefore likely seen as low quality)? There seems to be a school of thought that de-indexing "thin" pages improves the ranking potential of the indexed pages. We have plans for enriching and differentiating the pages that are being picked up as thin (Moz itself flags them as "duplicate" pages even though they're not). Thanks for sharing your thoughts and experiences!
Intermediate & Advanced SEO | | ggiaco-siftery0 -
ECommerce search results to noindex?
Hi, To avoid duplicate content and the possibility of thousands of additional pages on an ecommerce website, would it be a reasonable solution to set the search results pages to noindex, and would this benefit the site? Thanks, Lantec
Intermediate & Advanced SEO | | Lantec0 -
Indexing of internal search results: canonicalization or noindex?
Hi Mozzers, First time poster here, enjoying the site and the tools very much. I'm doing SEO for a fairly big ecommerce brand and an issue regarding internal search results has come up. www.example.com/electronics/iphone/5s/ gives an overview of the model-specific listings. For certain models there are also color listings, but these are not incorporated in the URL structure. Here's what Rand has to say in Inbound Marketing & SEO: Insights From The Moz Blog:
"Search filters are used to narrow an internal search—it could be price, color, features, etc. Filters are very common on e-commerce sites that sell a wide variety of products. Search filter URLs look a lot like search sorts, in many cases:
www.example.com/search.php?category=laptop
www.example.com/search.php?category=laptop?price=1000
The solution here is similar to the preceding one—don't index the filters. As long as Google has a clear path to products, indexing every variant usually causes more harm than good."
I believe using a noindex tag is meant here. Let's say you want to point users to an overview of listings for black 5s iphones. The URL is an internal search filter which looks as follows: www.example.com/electronics/apple/iphone/5s?search=black, which you wish to link with the anchor text "black iphone 5s". Correct me if I'm wrong, but if you noindex the black 5s search filters, you lose the equity passed through the link, whereas if you canonicalize /electronics/apple/iphone/5s you would still leverage the link juice and it would help you rank for "black iphone 5s". Doesn't it then make more sense to use canonicalization?
Intermediate & Advanced SEO | | ClassicDriver
-
Tips for improving this page
I have made a content placeholder for a keyword that will gain significant search volume in the future. Until then I am trying to optimize the page so it ranks when the game launches and the keyword gains volume. http://hiddentriforce.com/a-link-between-worlds/walkthrough/ Is there anything I can do to improve the optimization for the phrase 'a link between worlds walkthrough'? A lot of my competitors are already setting up similar placeholder pages and doing the same thing. I have two fairly large gaming sites that will place a banner for my walkthrough on their site. I did not pay for the links; I do free writing and other services in exchange for this. I have been sharing the link socially. It has almost 200 likes and a handful of shares, tweets, and G+ votes.
Intermediate & Advanced SEO | | Atomicx0 -
Still Going Down In Search
After signing up to SEOmoz as a Pro user and sorting out all the things that were flagged up with our website (http://www.whosjack.org), we jumped very slightly in search only to continue going down again. We are a news-based site, we have no duplicate content, and we have good writers and good organic links etc. I am currently very close to having to call it a day. Can anyone suggest anything at all from looking at the site, or suggest a good SEO firm that I could talk to who might be able to work out the issue, as I am totally at a loss as to what to do now? Any help or suggestions greatly appreciated.
Intermediate & Advanced SEO | | luwhosjack0 -
What to do with WordPress generated pages?
I'm an SEOmoz newbie and have a very specific question about the auto-generated WordPress pages. SEOmoz caught and labeled the auto-generated WP pages with crawl warnings like: Long URL - 302 - Title Element Too Long - Missing Meta Description Tag - Too Many On-Page Links. So I have learned the lesson and have now made those pages "nofollow" / "noindex." HOWEVER, WHAT DO I DO WITH THE ONES THAT HAVE ALREADY BEEN INDEXED? Do I... 1. Just leave them as is and hope they don't hurt me from an SEO perspective? 2. Redirect them all to a relevant page? I'm sure many people have had this issue. What do you think? Thanks Dominic
Intermediate & Advanced SEO | | amorbis0