Application & understanding of robots.txt
-
Hello Moz World!
I have been reading up on robots.txt files, and I understand the basics. I am looking for a deeper understanding of when to deploy particular tags, and when a page should be disallowed because it will affect SEO. I have been working with a software company that has a News & Events page which I don't think should be indexed. It changes every week, and is only relevant to potential customers who want to book a demo or attend an event, not so much to search engines. My initial thinking was that I should use a noindex,follow tag on that page, so the page would not be indexed but all of its links would still be crawled.
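For reference, the tag being described here is not a robots.txt directive at all, but a meta tag in the page's `<head>`. A minimal sketch (assuming the News & Events page is an ordinary HTML page):

```html
<!-- In the <head> of the News & Events page:
     ask search engines not to index this page, but still follow its links -->
<meta name="robots" content="noindex, follow">
```

Note that "follow" is already the default, so `content="noindex"` alone behaves the same way.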
I decided to look at some of our competitors' robots.txt files: Smartbear (https://smartbear.com/robots.txt), b2wsoftware (http://www.b2wsoftware.com/robots.txt) and labtech (http://www.labtechsoftware.com/robots.txt).
I am still confused about what type of tags I should use, and how to gauge which set of tags is best for certain pages. I figure a static page is pretty much always good to index and follow, as long as it's public, and I should always include a sitemap file. But what about a dynamic page? What about pages that are out of date? Will this help with soft 404s?
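On the sitemap point: the sitemap is usually an XML file referenced from robots.txt itself via a `Sitemap:` line. A minimal example (the domain and path below are placeholders):

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

An empty `Disallow:` means nothing is blocked; the `Sitemap:` line just tells crawlers where to find the sitemap.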
This is a long one, but I appreciate all of the expert insight. Thanks ahead of time for all of the awesome responses.
Best Regards,
Will H.
-
Yup — also don't forget that robots.txt is just a "recommendation" for robots; they are not obliged to obey it.
Basically, Google does whatever it wants to.
Also, keep in mind that blocking a folder in robots.txt only stops its contents from being crawled, not indexed. If any link points to a blocked page, even from outside your domain, the page can still be indexed. Its content won't be shown in the search results, but it will show up with a notice stating that the content is blocked by the site's robots.txt. Best of luck!
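To illustrate the crawling side of this, Python's built-in `urllib.robotparser` evaluates robots.txt rules the way a well-behaved crawler would (the `/news-events/` path and example.com URLs below are made up for illustration). Note this only answers "may I crawl this URL?" — it says nothing about whether the URL ends up indexed:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# Parse the rules directly instead of fetching a live robots.txt,
# so the example is self-contained.
rp.parse("""
User-agent: *
Disallow: /news-events/
""".splitlines())

# Blocked path: a compliant crawler will not fetch it...
print(rp.can_fetch("*", "https://example.com/news-events/webinar"))  # False
# ...but an unblocked path is fine.
print(rp.can_fetch("*", "https://example.com/pricing"))              # True
```

Even when `can_fetch` returns False, the URL can still appear in search results if other sites link to it — which is exactly why robots.txt is the wrong tool for de-indexing.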
-
Great Advice Yossi & Chris. Thanks for taking the time to reply. I will have to dig into the Google Guidelines for additional information, but both of your points are valid. I think I was looking at robots.txt the wrong way. Thanks Again Guys!
-
I completely agree with Yossi here; no need to go blocking that page at all.
I can't really add any further value to the points he has covered, but one other part of your question suggested that perhaps you're looking at this the wrong way (and it's very common, don't worry!). Rather than keeping your site as-is and just obscuring the bad parts of it from search engines, the thought process should really revolve around creating a great website instead.
If you're ever considering blocking a page from search engines, the first question should always be: "why am I blocking this page — could I just fix the issue instead?"
For example, you asked if this might help with soft 404s. Rather than trying to find a way to hide these soft 404s, spend that time fixing them instead!
-
Hi Will
There are a couple of concerns in your question that I don't understand.
Why do you want to block the News & Events page? If it has unique content and, on top of that, is updated regularly, you have no reason to block access to it. If it is "relevant to potential customers who want to book a demo", that's great — I would definitely keep it indexed and followed.
Google explicitly states that you should not block access to a page if you simply want to de-index or remove it. If the page should not be indexed publicly, you should remove it or password-protect it (a Google suggestion).
About tags: I assume you are talking about meta tags, correct?
There is no need to use any kind of meta tag to signal to search engines that they should index or follow a page; you only use one when you want to prevent them from taking certain actions.
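In other words (a sketch using the standard robots meta directives):

```html
<!-- Unnecessary: this is already the default behavior,
     so the tag can simply be omitted -->
<meta name="robots" content="index, follow">

<!-- Only add a robots meta tag when restricting, for example: -->
<meta name="robots" content="noindex">
```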
Also, there is no difference between a static and a dynamic page when it comes to tag usage; there are no rules for that. A page can perfectly well stay static for years and still get indexed and ranked very well (although, well, we all know that updating the site is a ranking signal).
If you believe a certain page should be tagged "noindex", it should not be just because the page hasn't been updated within the last month or year. For example: contact-us, about-us and terms-of-use pages are super static pages that in many cases probably won't be changed for years.
Best,
Yossi