Block all search results (dynamic) in robots.txt?
-
I know that Google does not want to index "search result" pages for a lot of reasons (dup content, dynamic URLs, blah blah). I recently optimized the entire IA of my sites to have search-friendly URLs, which includes search result pages. So, my search result pages changed from:
- /search?12345&productblue=true&id789
to
- /product/search/blue_widgets/womens/large
As a result, Google started indexing these pages thinking they were static (no opposition from me :)), but I started getting WMT messages saying they are finding a "high number of URLs being indexed" on these sites. Should I just block them altogether, or let it work itself out?
-
You can block the URLs that contain "/product/search/". It can easily be done by adding the following to your robots.txt:
User-agent: *
Disallow: /product/search/
Hope this helps...
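Before deploying a rule like this, it can be sanity-checked with Python's standard-library robots.txt parser. A minimal sketch, using the example URLs from the question (the bot name is just illustrative):

```python
import urllib.robotparser

# The proposed robots.txt from the answer above.
robots_txt = """\
User-agent: *
Disallow: /product/search/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Disallow is a path-prefix match, so everything under
# /product/search/ is blocked for crawlers...
print(rp.can_fetch("Googlebot", "/product/search/blue_widgets/womens/large"))  # False
# ...while ordinary product pages outside that prefix stay crawlable.
print(rp.can_fetch("Googlebot", "/product/blue-widgets"))  # True
```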
-
As you said: The increasing number of pages indexed will dilute the link juice of the entire site.
Can you give more examples? Or just a tip on where to search for this kind of information?
Thank you.
-
I would agree with BK Search. You want to minimize what Google has to crawl (I know this sounds backwards) so that Google focuses on the pages that you want to rank.
Long term, why would you waste Googlebot's time on pages that don't matter as much? What if you had an update on a more important page and Googlebot is too busy crawling this near-infinite set of search pages?
At this point, I would use the noindex meta tag rather than robots.txt, so that Google will keep crawling and remove all the URLs from the index. Then you can add the path to robots.txt later so crawling stops. Otherwise you may end up with a lot of junk in the index (robots.txt blocks crawling, not indexing, so URLs that are already indexed can linger).
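For reference, the tag being suggested is the standard robots meta tag, placed in the head of each search result template. The `noindex, follow` value is one common choice here (an assumption on my part, not something the answer specifies), since it lets link equity keep flowing through the page while it drops out of the index:

```html
<!-- In the <head> of each /product/search/ page, while it is still crawlable -->
<meta name="robots" content="noindex, follow">
```

Once the URLs have dropped out of the index, the `Disallow: /product/search/` rule can be added to robots.txt to stop the crawling as well.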
-
I might be a little different than some of these answers but I would recommend that you exclude them from getting indexed.
The reasons I would do that are that:
You know it is largely duplicate content and leads to the same pages as your categories.
Google has stated that they would prefer to not have it indexed.
The increasing number of pages indexed will dilute the link juice of the entire site.
There is also the possibility that people searching from the URL bar of their browser will increase the number of pages indexed by a large margin.
A competitor could create thousands of links to these pages and create a huge footprint of indexed search pages.
And finally, I like having product pages ranking highly if at all possible.
I would do this with both the robots.txt file and the GWMT exclusion on the /product/search/ directory.
Good Luck!
-
Hi! We're going through some of the older unanswered questions and seeing if people still have questions or if they've gone ahead and implemented something and have any lessons to share with us. Can you give an update, or mark your question as answered?
Thanks!
-
As a follow-up with further info: it's been about 5 months since the change. I do get some traffic from these indexed pages (not a ton, but enough that I would like not to block them if there is no negative impact). The search engines' reaction seems to be confusion: they index the content, but also recognize that something may not be right. So I am wondering if anyone else has done something similar or is trying this.
Admittedly, this is what I wanted the new URL structure to do, as an experiment. Just looking for anyone else who has done/is doing similar.
Related Questions
-
Disallowed "Search" results with robots.txt and Sessions dropped
Hi
Intermediate & Advanced SEO | | Frankie-BTDublin
I've started working on our website and I've found millions of "Search" URLs which I don't think should be getting crawled & indexed (e.g. .../search/?q=brown&prefn1=brand&prefv1=C.P. COMPANY|AERIN|NIKE|Vintage Playing Cards|BIALETTI|EMMA PAKE|QUILTS OF DENMARK|JOHN ATKINSON|STANCE|ISABEL MARANT ÉTOILE|AMIRI|CLOON KEEN|SAMSONITE|MCQ|DANSE LENTE|GAYNOR|EZCARAY|ARGOSY|BIANCA|CRAFTHOUSE|ETON). I tried to disallow them in the robots.txt file, but our Sessions dropped about 10% and our Average Position on Search Console dropped 4-5 positions over 1 week. It looks like over 50 million URLs have been blocked, all of them look like the example above, and none of them are getting any traffic to the site. I've allowed them again, and we're starting to recover. We've been fixing problems with getting the site crawled properly (Sitemaps weren't added correctly, products blocked from spiders on Categories pages, canonical pages being blocked from crawlers in robots.txt) and I'm thinking Google were doing us a favour and using these pages to crawl the product pages as it was the best/only way of accessing them. Should I be blocking these "Search" URLs, or is there a better way of going about it? I can't see any value from these pages except Google using them to crawl the site.
-
Bulk reverse image search?
Hi, i have a couple fashion clients who have very active blogs and post lots of fashion content and images. Like 50+ images weekly. I want to check if these images have been used by other sources in bulk, are there any good reverse image search tools which can do this? Or any recommended ways to efficiently do this for a large number of images? Cheers
Intermediate & Advanced SEO | | snj_cerkez
-
Should comments and feeds be disallowed in robots.txt?
Hi
My robots file is currently set up as listed below. From an SEO point of view, is it good to disallow feeds, RSS and comments? I feel allowing comments would be a good thing because it's new content that may rank in the search engines, as the comments left on my blog often refer to questions or companies folks are searching for more information on. And the comments are added regularly. What's your take? I'm also concerned about the /page being blocked. Not sure how that benefits my blog from an SEO point of view as well. Look forward to your feedback. Thanks. Eddy
User-agent: Googlebot
Crawl-delay: 10
Allow: /*
User-agent: *
Crawl-delay: 10
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
Disallow: /rss/
Disallow: /comments/feed/
Disallow: /page/
Disallow: /date/
Disallow: /comments/
# Allow Everything
Allow: /*
Intermediate & Advanced SEO | | workathomecareers
-
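One thing worth noting about the file in that question: because Googlebot has its own user-agent group, it ignores the generic `*` group entirely, so none of those Disallow lines apply to Google at all. A quick sketch with Python's stdlib parser (the "SomeOtherBot" name is made up for illustration) shows the group-matching behavior:

```python
import urllib.robotparser

# The robots.txt from the question above, reproduced verbatim.
robots_txt = """\
User-agent: Googlebot
Crawl-delay: 10
Allow: /*

User-agent: *
Crawl-delay: 10
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
Disallow: /rss/
Disallow: /comments/feed/
Disallow: /page/
Disallow: /date/
Disallow: /comments/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot matches its own group, which contains no Disallow lines,
# so every path is crawlable for it -- the "*" group is ignored.
print(rp.can_fetch("Googlebot", "/comments/"))     # True
# Other crawlers fall back to the "*" group and are blocked.
print(rp.can_fetch("SomeOtherBot", "/comments/"))  # False
print(rp.can_fetch("SomeOtherBot", "/page/2/"))    # False
```

So whether the comments and /page/ blocks are a good idea or not, as written they only affect non-Google crawlers.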
Massive URL blockage by robots.txt
Hello people, In May there was a dramatic increase in URLs blocked by robots.txt, even though we don't have that many URLs or crawl errors. You can view the attachment to see how it went up. The thing is, the company hasn't touched the file since 2012. What might be causing the problem? Can this result in any penalties? Can indexation be lowered because of this?
Intermediate & Advanced SEO | | moneywise_test
-
Do I need to disallow the dynamic pages in robots.txt?
Do I need to disallow the dynamic pages that show when people use our site's search box? Some of these pages are ranking well in SERPs. Thanks! 🙂
Intermediate & Advanced SEO | | esiow2013
-
Can Someone Provide an Example of a Site that Indexes Search Results Successfully?
So, I know indexing search results is a big no-no, but I recently started working with a site that sees 50% of its traffic from search result pages. The user engagement on these pages is very high, and these pages rank well too. Unfortunately, they've been hit by Panda. They already moved the section of the site with search results to a subdomain, and saw temporary success. There must be a way to preserve their traffic from these search result pages and get out from under Panda.
Intermediate & Advanced SEO | | nicole.healthline
-
Undocumented anchor-text API result
Regarding the anchor-text API, there is no definition for *imr on the wiki: http://apiwiki.seomoz.org/w/page/13991127/Anchor Text API
ie. http://lsapi.seomoz.com/linkscape/anchor-text/google.com?Scope=phrase_to_page&Cols=2048&Sort=domains_linking_page&Expires=1329770786.46868 returns "[{"apuimr":5.422834471373288e-12},{"apuimr":4.785130890652429e-13},{"apuimr":2.922901387480201e-09}]" What is *imr?
Intermediate & Advanced SEO | | sycorr
-
How do Google Site Search pages rank
We have started using Google Site Search (via an XML feed from Google) to power our search engines. So we have a whole load of pages we could link to of the format /search?q=keyword, and we are considering doing away with our more traditional category listing pages (e.g. /biology - not powered by GSS) which account for much of our current natural search landing pages. My question is would the GoogleBot treat these search pages any differently? My fear is it would somehow see them as duplicate search results and downgrade their links. However, since we are coding the XML from GSS into our own HTML format, it may not even be able to tell.
Intermediate & Advanced SEO | | EdwardUpton61