Search Results Pages Blocked in Robots.txt?
-
Hi,
I am reviewing our robots.txt file and wondered whether search results pages should be blocked from crawling. We currently have this in the file:
Disallow: /searchterm*
Is this a good thing for SEO?
-
Hi Dirk,
Sorry I missed this one - thanks for your input!
-
It's probably a good thing - I would keep them blocked.
Check https://www.mattcutts.com/blog/search-results-in-search-results/ - to quote: "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines."
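In practical terms that maps onto a rule like the one you already have - a minimal sketch, assuming all of your search result URLs begin with /searchterm:

User-agent: *
Disallow: /searchterm*

This asks every compliant crawler to skip any URL whose path starts with /searchterm (the trailing * is actually redundant, since robots.txt rules are prefix matches, but it does no harm).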
Dirk
Related Questions
-
Duplicate page content on numerical blog pages?
Hello everyone, I'm still relatively new to SEO and am trying my best to learn. However, I have this persistent issue. My site is on WordPress, and all of my blog pages (e.g. page one, page two, etc.) are coming up as duplicate content. Here are some example URLs of what I mean: http://3mil.co.uk/insights-web-design-blog/page/3/ http://3mil.co.uk/insights-web-design-blog/page/4/ Does anyone have any ideas? I have already noindexed categories and tags, so it is not them. Any help would be appreciated. Thanks.
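For illustration, the standard remedy for paginated archives of this kind at the time was rel="prev"/"next" link elements in the head of each paginated page - a sketch of what page/3/ above might carry, assuming default WordPress pagination:

<link rel="prev" href="http://3mil.co.uk/insights-web-design-blog/page/2/">
<link rel="next" href="http://3mil.co.uk/insights-web-design-blog/page/4/">

These signal to search engines that the pages form a sequence rather than duplicates; most WordPress SEO plugins can generate them automatically.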
Intermediate & Advanced SEO | 3mil
-
Making Filtered Search Results Pages Crawlable on an eCommerce Site
Hi Moz Community! Most of the category & sub-category pages on one of our client's ecommerce sites are actually filtered internal search results pages. They can configure their CMS so that these filtered cat/sub-cat pages have unique meta titles & meta descriptions, but currently they can't apply custom H1s, URLs or breadcrumbs to filtered pages. We're debating whether 2 out of 5 areas for keyword optimization are enough for Google to crawl these pages and rank them for the keywords they are being optimized for, or if we really need three or more areas covered (i.e. custom H1s, URLs and/or breadcrumbs) to make them truly crawlable… what do you think? Thank you for your time & support, community!
Intermediate & Advanced SEO | accpar
-
When you add 10,000 pages that are not really intended to rank in the SERPs, should you "follow,noindex" them or disallow the whole directory through robots.txt? What is your opinion?
I just want a second opinion 🙂 The customer doesn't want to lose any internal link value by evaporating it across a large number of internal links. What would you do?
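For concreteness, the two options being weighed look like this - a sketch, where /utility-pages/ is a hypothetical directory standing in for the 10,000 pages:

Option 1 - a meta robots tag on each page (pages are still crawled, links on them still pass value, but the pages stay out of the index):
<meta name="robots" content="noindex, follow">

Option 2 - robots.txt (crawling is blocked outright, so any link value flowing into these pages dead-ends there):
User-agent: *
Disallow: /utility-pages/

That is exactly the trade-off raised above: option 1 preserves the flow of internal link value, while option 2 saves crawl budget at the cost of whatever equity points into the blocked directory.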
Intermediate & Advanced SEO | Zanox
-
How to Disallow Tag Pages With Robots.txt
Hi, I have a site I'm dealing with that has tag pages, for instance: http://www.domain.com/news/?tag=choice How can I exclude these tag pages (about 20+) from being crawled and indexed by the search engines with robots.txt? Also, they're sometimes created dynamically, so I want something which automatically excludes tag pages from being crawled and indexed. Any suggestions? Cheers, Mark
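Since the major engines support the * wildcard in robots.txt, a pattern-based rule will also catch tag pages that are created dynamically - a sketch, assuming the tag always appears as a ?tag= or &tag= query parameter as in the example URL:

User-agent: *
Disallow: /*?tag=
Disallow: /*&tag=

The second line covers cases where tag is not the first parameter in the query string. Bear in mind that robots.txt stops crawling but does not by itself remove URLs that are already indexed.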
Intermediate & Advanced SEO | monster99
-
How to optimise for search results which are affected by Query Deserves Freshness?
I am looking to rank a client's site for certain keywords which have a huge exact local search volume, in the 200,000 region. Many of these keywords are celebrity names like Victoria Beckham, Pippa Middleton, etc. Nine times out of ten these people are in the news, and the first page is taken up by news article results. My client is a large media publishing company, so their site is very relevant. Does anyone know how to optimise for getting on the first page with these types of queries? Thanks, Barry
Intermediate & Advanced SEO | HaymarketMediaGroupLtd
-
Block search engines from URLs created by internal search engine?
Hey guys, I've got a question for you all that I've been pondering for a few days now. I'm currently doing an SEO technical audit for a large-scale directory. One major issue they are having is that their internal search system (Directory Search) creates a new URL every time a search query is entered by the user. This creates huge amounts of duplication on the website. I'm wondering if it would be best to block search engines from crawling these URLs entirely with robots.txt. What do you guys think? Bearing in mind there are probably thousands of these pages already in the Google index. Thanks, Kim
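The "already in the index" detail matters: once a URL is disallowed in robots.txt, crawlers can no longer see a noindex on it, so the indexed copies can linger for a long time. A common two-step sequence - a sketch, with /search/ standing in for whatever pattern the directory's search URLs actually follow:

Step 1 - leave the URLs crawlable for now, but add this to every internal search results page:
<meta name="robots" content="noindex">

Step 2 - once they have dropped out of the index, block crawling going forward:
User-agent: *
Disallow: /search/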
Intermediate & Advanced SEO | Voonie
-
Too many on page links - product pages
Some of the pages on my client's website have too many on-page links because they include lists of all their products. Is there anything I should/could do about this?
Intermediate & Advanced SEO | AlightAnalytics
-
Temporarily Delist Search Results
We have a client that we run campaign sites for. They have asked us to turn off our PPC and SEO in the short term so they can run some tests. PPC is no problem - a straightforward action - but it is not as straightforward to just "turn off" SEO. Our campaign site is on page 1, position 4, three places below our client's own site. They have asked us to effectively disappear from the landscape for a period of 1-2 months. Has anyone encountered this before - the ability to delist a good SERP result for a period of time? Details: a very small site with only 17 pages indexed in Google, but the home page has a good SERP result. My issues are: How do we approach this in the most effective manner? Once the delisting process is activated and the site/page disappears, will we get back to where we were when we reverse the process? Has anyone encountered this before? I realise this is a ridiculous question and goes against SEO logic - get to page 1 results only to remove them - but hey, clients are always presenting new challenges for us to address... Thanks
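For anyone weighing up the mechanics: the usual reversible lever is a noindex directive rather than robots.txt, since robots.txt only hides a page from crawling and can leave it indexed. A sketch of the two equivalent forms:

On each page to be delisted:
<meta name="robots" content="noindex">

Or, server-side, as an HTTP response header:
X-Robots-Tag: noindex

Removing the directive later lets the pages be recrawled and reindexed, though there is no guarantee they will return to exactly the same positions.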
Intermediate & Advanced SEO | Jellyfish-Agency