XML Sitemap instruction in robots.txt = Worth doing?
-
Hi fellow SEOs,
Just a quick one: I was reading a few guides on Bing Webmaster Tools and found that you can use the robots.txt file to point crawlers/bots to your XML sitemap (they don't look for it by default).
I was just wondering whether it would be worth creating a robots.txt file purely for the purpose of pointing bots to the XML sitemap.
I've submitted the sitemap manually to Google and Bing Webmaster Tools, but I was thinking more of the other bots (i.e. Mozbot, the SEOmoz bot).
Any thoughts would be appreciated!
Regards,
Ash
-
Thanks for the answer and link, John!
Regards,
Ash
-
I think it's worth it, as it should only take a few minutes to set up, and it's good to have a robots.txt file even if it's allowing everything. Put a text file named "robots.txt" in your root directory with:
User-agent: *
Disallow:
Sitemap: http://www.yourdomain.com/non-standard-location/sitemap.xml
Read more about robots.txt here: http://www.seomoz.org/learn-seo/robotstxt.
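If you don't already have the sitemap file itself, here's a minimal sketch of what the file referenced by that Sitemap: line could contain (placeholder URLs; most CMSs and sitemap plugins can generate this for you):
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal XML sitemap: one <url> entry per indexable page -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yourdomain.com/</loc>
  </url>
  <url>
    <loc>http://www.yourdomain.com/some-page/</loc>
  </url>
</urlset>
You can also list more than one Sitemap: line in robots.txt, and the major engines (Google, Bing, Yahoo) all recognize the directive.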
-
It is not going to make any real difference. Time is better spent fixing the website's crawling and indexing issues.
-
Related Questions
-
Should I include URLs that are 301'd or only include 200 status URLs in my sitemap.xml?
I'm not sure if I should be including old URLs (content) that are being redirected (301) to new URLs (content) in my sitemap.xml. Does anyone know if it is best to include or leave out 301'd URLs in an XML sitemap?
Intermediate & Advanced SEO | Jonathan.Smith
-
Robots.txt Syntax
I have been having a hard time finding any decent information regarding robots.txt syntax written in the last few years, and I just want to verify some things as a review for myself. I have many occasions where I need to block particular directories in the URL, parameters, and parameter values. I just wanted to make sure that I am doing this in the most efficient ways possible and thought you guys could help.
So let's say I want to block a particular directory called "this", and this would be an example URL: www.domain.com/folder1/folder2/this/file.html or www.domain.com/folder1/this/folder2/file.html. In order for me to block any URL that contains this folder anywhere in the URL, I would use:
User-agent: *
Disallow: /this/
Now let's say I have a parameter "that" I want to block, and sometimes it is the first parameter and sometimes it isn't when it shows up in the URL. Would it look like this?
User-agent: *
Disallow: ?that=
Disallow: &that=
What about if there is only one value I want to block for "that", and the value is "NotThisGuy":
User-agent: *
Disallow: ?that=NotThisGuy
Disallow: &that=NotThisGuy
My big questions here are what the most efficient ways are to block a particular parameter and to block a particular parameter value. Is there a more efficient way to deal with ? and & when the parameter and value are either first or later? Secondly, is there a list somewhere that will tell me all of the syntax and meaning that can be used for a robots.txt file? Thanks!
Intermediate & Advanced SEO | DRSearchEngOpt
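On the parameter-blocking part of the question above: a hedged sketch, assuming the crawlers you care about support the * wildcard (Googlebot and Bingbot do; the original robots.txt standard does not define wildcards, and patterns are matched from the start of the URL path, which is why a bare "Disallow: ?that=" may never match anything):
User-agent: *
# Block the "this" directory anywhere in the path
Disallow: /this/
# Block "that" whether it is the first parameter or a later one
Disallow: /*?that=
Disallow: /*&that=
# Or, to block only the value "NotThisGuy" (use instead of the two rules above)
Disallow: /*?that=NotThisGuy
Disallow: /*&that=NotThisGuy
-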
Google showing a high volume of URLs blocked by robots.txt in the index - should we be concerned?
If we search site:domain.com vs. site:www.domain.com, we see 130,000 vs. 15,000 results. When reviewing the site:domain.com results, we're finding that the majority of the URLs showing are blocked by robots.txt. They are subdomains that we use as production environments (and contain similar content to the rest of our site). We also see the message "In order to show you the most relevant results, we have omitted some entries very similar to the 541 already displayed." SEER Interactive mentions that this is one way to gauge a Panda penalty: http://www.seerinteractive.com/blog/100-panda-recovery-what-we-learned-to-identify-issues-get-your-traffic-back We were hit by Panda some time back, so is this an issue we should address? Should we unblock the subdomains and add noindex, follow?
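A hedged note on that last idea: a robots.txt block stops compliant crawlers from fetching those pages at all, so an on-page noindex on them would never be seen. If the goal is to get the subdomain URLs out of the index, the usual pattern is to let them be crawled and serve a noindex instead, e.g.:
<!-- On every page of the blocked subdomains -->
<meta name="robots" content="noindex, follow">
The equivalent X-Robots-Tag: noindex HTTP response header works too if editing the templates isn't practical.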
Intermediate & Advanced SEO | nicole.healthline
-
How to Disallow Tag Pages With Robots.txt
Hi, I have a site I'm dealing with that has tag pages, for instance: http://www.domain.com/news/?tag=choice. How can I exclude these tag pages (about 20+) from being crawled and indexed by the search engines with robots.txt? Also, they're sometimes created dynamically, so I want something which automatically excludes tag pages from being crawled and indexed. Any suggestions? Cheers, Mark
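A hedged sketch of one robots.txt approach, assuming the tag pages always carry the ?tag= parameter (as in the example URL) and the crawlers you care about honor * wildcards:
User-agent: *
# Block any URL whose query string begins with tag=, in any directory
Disallow: /*?tag=
# Or limit it to the news section only
# Disallow: /news/?tag=
Because it's pattern-based, it also covers tag pages created dynamically later; just note that robots.txt stops crawling but won't necessarily remove URLs that are already indexed.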
Intermediate & Advanced SEO | monster99
-
Mobile Sitemap Best Practices w/ Responsive Design
I'm looking for insight into mobile sitemap best practices when building sites with responsive design. If a mobile site has the same URLs as the desktop site, the mobile sitemap would be very similar to the regular sitemap. Is a mobile sitemap necessary for sites that utilize responsive design? If so, is there a way to have a mobile sitemap that simply references the regular sitemap, or is a new sitemap required that has every URL tagged with the mobile tag?
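For reference, a hedged sketch of the mobile sitemap format as I understand Google's documentation (it is aimed at feature-phone content, which is part of why a separate mobile sitemap is generally unnecessary when a responsive design serves the same URLs); the domain is a placeholder:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
  <url>
    <loc>http://m.example.com/page.html</loc>
    <!-- Empty tag that marks the URL as mobile content -->
    <mobile:mobile/>
  </url>
</urlset>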
Intermediate & Advanced SEO | AdamDorfman
-
Sitemaps: Alternate hreflang
Hi, some time ago I read that there is a limit of 50,000 URLs per sitemap file (so beyond that you need to create a sitemap index and separate files with 50,000 URLs each) [Source]. Now we are about to implement the hreflang link alternates in the sitemap [Source], and we don't know if we have to count each alternate as a different URL. We have 21 different, well-positioned domains (same name, different ccTLDs, slightly different content [currencies, taxes, some labels, etc.] depending on the target country), so the number of links per URL would be high. A) Shall we count each link alternate as a separate URL, or just the original ones? For example, if we have to count the link alternates, that would leave us with 2,380 pages per sitemap, each with one original URL and 20 alternate links (always keeping in mind the 50 MB maximum file size). B) Currently we have one sitemap per domain. Given this, shall we generate one per domain using the matching domain as the original URL? Or would it be the same if we uploaded the same sitemap to every domain? Thanks
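For reference, a hedged sketch of a single <url> entry with hreflang alternates in a sitemap (placeholder domains; per Google's documentation each entry lists the full set of alternates, including a self-referencing one):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/page.html</loc>
    <!-- One alternate per localized version, including the page itself -->
    <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/page.html"/>
    <xhtml:link rel="alternate" hreflang="de" href="http://www.example.de/page.html"/>
    <xhtml:link rel="alternate" hreflang="es" href="http://www.example.es/page.html"/>
  </url>
</urlset>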
Intermediate & Advanced SEO | marianoSoler98
-
Is it bad to host an XML sitemap in a different subdomain?
Example: sitemap.example.com/sitemap.xml for pages on www.example.com.
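A hedged sketch, based on the cross-submission mechanism described at sitemaps.org: a sitemap hosted on a different subdomain can be accepted for www.example.com if the robots.txt on www.example.com points to it (or if both hosts are verified in the engines' webmaster tools):
# robots.txt served at http://www.example.com/robots.txt
User-agent: *
Disallow:
Sitemap: http://sitemap.example.com/sitemap.xml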
Intermediate & Advanced SEO | SEOTGT
-
Why specify robots instead of googlebot for a Panda-affected site?
Daniweb is the poster child for sites that have recovered from Panda. I know one strategy she mentioned was de-indexing all of her tagged content, for example: http://www.daniweb.com/tags/database. Why do you think more Panda-affected sites aren't specifying 'googlebot' rather than 'robots', to keep capturing traffic from Bing & Yahoo?
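To make the distinction concrete, a hedged sketch of the two meta tags (the thinking being that a crawler-specific tag only binds that crawler, so the de-indexed pages could stay visible to Bing and Yahoo):
<!-- De-indexes the page in every engine that honors robots meta tags -->
<meta name="robots" content="noindex, follow">
<!-- De-indexes the page in Google only; other engines ignore this name -->
<meta name="googlebot" content="noindex, follow">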
Intermediate & Advanced SEO | nicole.healthline