Best practices for robots.txt -- allow one page but not the others?
-
So, we have a page like domain.com/searchhere, but its search results are being crawled (and shouldn't be). The result URLs look like domain.com/searchhere?query1. If I block /searchhere?, will that also block crawlers from the single page /searchhere? I still want that page to be indexed.
What is the recommended best practice for this?
-
SEOmoz used to use Google Search for the site. I am confident Google has a solid method for keeping their own results clean.
It appears SEOmoz recently changed their search widget. If you examine the URL you shared, notice that none of the search results actually appear in the HTML of the page. For example, load the view-source URL and perform a find (CTRL+F) for "testing", which is the subject of the search. There are no results. Since the results are not in the page's HTML, they would not get indexed.
-
If Google is viewing the search result pages as soft 404s, then yes, adding the noindex tag should resolve the problem.
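For reference, a minimal sketch of that tag, placed in the <head> of each search result page (the "follow" value is optional and just a common pairing):

<!-- on /searchhere?query1 and the other search result pages -->
<!-- keeps the page out of the index while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow">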
-
And, because Google can currently crawl these search result pages, a number of soft 404 pages are popping up. Would adding a noindex tag to these pages fix the issue?
-
Thanks for the links and help.
How does SEOmoz keep search results from being indexed? They don't block search results with robots.txt, and it doesn't appear that they add the noindex tag to the search result pages. (e.g. view-source:http://www.seomoz.org/pages/search_results#stq=testing&stp=1)
-
Yeah, but Ryan's answer is the best one if you can go that route.
-
Hi Michelle,
The concept of crawl efficiency is highly misunderstood. Are all of your site's pages being indexed? Are new content and changes indexed in a timely manner? If so, that would indicate your site is being crawled efficiently.
Regarding the link you shared, you are on the right track but need to dig a bit deeper. On the page you shared, find the discussion related to robots.txt. There is a link which will lead you to the following page:
https://developers.google.com/webmasters/control-crawl-index/docs/faq#h01
There you will find a more detailed explanation along with several examples of when not to use robots.txt.
robots.txt: Use it if crawling of your content is causing issues on your server. For example, you may want to disallow crawling of infinite calendar scripts. You should not use the robots.txt to block private content (use server-side authentication instead), or handle canonicalization (see our Help Center). If you must be certain that a URL is not indexed, use the robots meta tag or X-Robots-Tag HTTP header instead.
SEOmoz offers a great guide on this topic as well: http://www.seomoz.org/learn-seo/robotstxt
If you desire to go beyond the basic Google and SEOmoz explanation and learn more about this topic, my favorite article related to robots.txt, written by Lindsay, can be found here: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
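As a footnote to the Google quote above, the X-Robots-Tag alternative is just an HTTP response header, handy when you cannot edit a page's HTML. A rough sketch of a response for a search result page:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Robots-Tag: noindex, follow

It has the same effect as the robots meta tag, and either approach requires the page to remain crawlable (i.e. not blocked in robots.txt) so the directive can actually be seen.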
-
-
Hi Ryan,
Wouldn't that cause issues with crawl efficiency?
Also, Google's Webmaster Guidelines say, "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines."
-
Thank you. Are you sure about that?
-
What about using the canonical URL tag? You could put the canonical code in each /searchhere? result page, pointing back to /searchhere - something like the sketch below.
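A rough sketch of such a tag, assuming /searchhere is the intended canonical target:

<!-- in the <head> of each /searchhere?query result page -->
<link rel="canonical" href="http://domain.com/searchhere" />

Keep in mind that rel="canonical" is a hint for consolidating near-duplicate URLs rather than a guaranteed way to keep pages out of the index, which is why the noindex approach is usually preferred for search result pages.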
-
The best practice would be to add the noindex tag to the search result pages but not the /searchhere page.
Generally speaking, the best robots.txt file is a blank one. The file should only be used as a last resort with respect to blocking content.
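For reference, an empty disallow rule has the same effect as that blank file - everything stays crawlable:

# allow all crawlers to access everything
User-agent: *
Disallow: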
-
What you outlined sounds to me like it should work. Disallowing /searchhere? shouldn't disallow the top-level search page at /searchhere, but should disallow all the search result pages with queries after the ?.
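Spelled out, the rule under discussion would look something like this (a sketch, assuming the search page sits at the root of the domain):

# block query-string search results such as /searchhere?query1
User-agent: *
Disallow: /searchhere?

Because robots.txt rules are prefix matches, /searchhere itself does not match and stays crawlable, while anything starting with /searchhere? is blocked. It's worth double-checking the rule with the robots.txt testing tool in Google Webmaster Tools before relying on it.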
Related Questions
-
I need help on how best to do a complicated site migration. Replacing certain pages with all new content and tools, and keeping the same URLs. The rest just need to disappear safely. Somehow.
I'm completely rebranding a website but keeping the same domain. All content will be replaced and it will use a different theme and mostly new plugins. I've been building the new site as a different site in Dev mode on WPEngine. This means it currently has a made-up domain that needs to replace the current site. I know I need to somehow redirect the content from the old version of the site, but I'm never going to use that content again. (I could transfer it to be a Dev site for the current domain and automatically replace it with the click of a button - just as another option.) What's the best way to replace blahblah.com with a completely new blahblah.com if I'm not using any of the old content? There are only about 4 URLs, such as blahblah.com/contact, that will remain the same - with all content replaced. There are about 100 URLs that will no longer be in use or have any part of them ever used again. Can this be done safely?
Intermediate & Advanced SEO | | brickbatmove1 -
Why did one of my top pages drop?
Hello there. Our website, virtualsheetmusic.com, is pretty popular in the sheet music realm, and we used to rank on the first page for the keyword "violin sheet music" until a few weeks ago with our violin-dedicated page: http://www.virtualsheetmusic.com/downloads/Indici/Violin.html But a couple of weeks ago we dropped beyond the 5th page on Google (I can't even find us!) and I have no idea why. Most of our top ranking pages are still there, though. This never happened before, after 17 years on the web. Do you have any idea why that could have happened?
Intermediate & Advanced SEO | | fablau0 -
Landing pages, are my pages competing?
If I have identified a keyword that generates income, and when it is searched in Google my homepage comes up ranked second, should I still create a landing page based on that keyword, or will it compete with my homepage and cause it to rank lower?
Intermediate & Advanced SEO | | The_Great_Projects0 -
Big discrepancies between pages in Google's index and pages in sitemap
Hi, I'm noticing a huge difference in the number of pages in Google's index (using a 'site:' search) versus the number of pages indexed by Google in Webmaster Tools. (i.e. 20,600 in the 'site:' search vs 5,100 submitted via the dynamic sitemap.) Anyone know possible causes for this and how I can fix it? It's an ecommerce site but I can't see any issues with duplicate content - they employ a very good canonical tag strategy. Could it be that Google has decided to ignore the canonical tag? Any help appreciated, Karen
Intermediate & Advanced SEO | | Digirank0 -
Best way to move a page without 301
I have a page that currently ranks high for its term. That page is going away for the main website users, meaning all internal site links pointing to it will point to a new page instead. Normally you would just do a 301 redirect to the new URL; however, the old URL still needs to remain as a landing page, since we send paid media traffic to it. My question is: what is the best way to deal with that? One thought was to set up a canonical tag; however, my understanding is that the pages need to be identical or very close to the same, and the landing page will be light on content and different from the new main page. Not topically different, but not identical copy or design, etc.
Intermediate & Advanced SEO | | IrvCo_Interactive0 -
Ranking slipped to page 6 from page 1 over the weekend?
My site has been on page one for 2 phrases consistently from May onwards this year. The site has fewer than 100 backlinks and the link profile looks fairly even. On Friday we were on page 1 - we even had a position 1 - but now we are on page 6. Do you think this is Penguin or some strange Google blip? We have no Webmaster Tools messages at all. Thanks for any help!
Intermediate & Advanced SEO | | onlinechester0 -
What is the best practice to optimize page content with strong tags?
For example, if I have a sub-page dedicated to the keyword "Houston Leather Furniture", is it best practice to bold ONLY the exact-match keyword? Or should ONLY the individual words from the keyword (so 'Houston', 'Leather' and 'Furniture') be bolded? Is there a rule for how many times it should be done before it's over-optimization? I appreciate any information, as I want to follow the BEST possible practice when it comes to this topic. Thanks!
Intermediate & Advanced SEO | | MonsterWeb280 -
Best SEO format for a blog page on an ecommerce website, incl. Source Ordered Content
Does anyone know of a page template or code I might want to base a blog on as part of an ecommerce website? I am interested in keeping the look (includes) of the website and paying attention to Source Ordered Content, helping crawlers index the great new blogs we have to share. I could just knock up a page with a template from the site, but I would like to investigate SOC at this stage as it may benefit us in the long run. Any ideas?
Intermediate & Advanced SEO | | robertrRSwalters0