What's wrong with this robots.txt
-
Hi, I'm really struggling with the robots.txt file.
This is it:
User-agent: *
Disallow: /product/ #old sitemap
Disallow: /media/name.xml
When testing on w3c.org everything looks good and the test passes, but when uploading it to the server, Google Webmaster Tools gives 3 errors. I checked it with my colleague and we both don't know what's wrong.
Can someone take a look at this and give me a solution?
Thanx in advance!
Leonie
-
I think that's a great idea. .NET is not my thing.
All the best!
Tom
-
Ah thanks, it's an Azure platform, so no SFTP, SSH, or .htaccess. But I'll give the Stack link to the technical guys; then they'll have to translate it to our environment (.NET).
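I'm guessing they'll end up with something along these lines in web.config (just a sketch on my part, not tested, and it assumes the IIS URL Rewrite module is available on our Azure setup; the path is our old file):
<!-- Sketch only: return 410 Gone for the old sitemap file on IIS/Azure.
     Assumes the URL Rewrite module; adjust the path to the real file. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Old sitemap is gone" stopProcessing="true">
          <match url="^media/name\.xml$" />
          <action type="CustomResponse" statusCode="410" statusReason="Gone"
                  statusDescription="This file has been permanently removed" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>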
-
Believe me, it took me plenty of time to figure out how to do this, but if you're handy with SFTP or SSH you can change the .htaccess file so the old URL returns a 410.
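Something along these lines should do it (just a sketch; the file path is only an example, swap in the real URL of the old file):
# Tell Apache the old file is gone (410) via .htaccess
Redirect gone /media/name.xml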
And for the ultimate in ease, if you're using WordPress there is actually a plugin for 410s, so clearly it isn't something anyone found easy to do by hand.
https://wordpress.org/plugins/wp-410/
Sincerely,
Thomas
-
Hi Leonie,
That's very kind of you. I'm very happy that you got it working correctly.
All the best,
Thomas
-
Hi,
I got it working with a proper sitemap. Special thanks to Thomas for the great effort in his answers!
-
Hi, Thanx for your reply. I'm not sure I understand what you mean by "please note you are disallowing more than just media".
The thing is, the XML file is an old file, but it's still somewhere in the Google index. I tried to remove it with WMT, but it keeps coming back. It's not on the server anymore; the "media" directory doesn't exist anymore either, it's from an old website.
Because the file still shows up in WMT, I thought I'd try it with the robots.txt.
The new robots.txt isn't tested yet; I'm waiting for deployment.
Oh, call me stupid, but how do I make a 410?
Grtz, Leonie
-
By the way, here is my own (outdated) robots.txt as an example. What look like errors are really just warnings telling me that putting a sitemap inside a robots.txt file is not part of the official standard, even though Google and Bing support it; I still feel it is helpful, so I do it. I've also added extra video sitemaps from an external host, which is what's triggering those warnings. The red color on the disallows is not an error, it is just letting you know those paths are being blocked. Hopefully this will be of help.
A bigger screenshot is right here as well; please have a look and let me know what errors you are getting:
http://i.imgur.com/Xg7EXwO.png
http status: 200
Syntax check robots.txt on http://www.blueprintmarketing.com/robots.txt (359 bytes)
| Line | Severity | Code |
| 6 | Warning | The official standard does not include Sitemap support even though major crawlers (Google and Bing) support it. It is still nonstandard. |
| 7 | Warning | The official standard does not include Sitemap support even though major crawlers (Google and Bing) support it. It is still nonstandard. |
| 8 | Warning | The official standard does not include Sitemap support even though major crawlers (Google and Bing) support it. It is still nonstandard. |
| 9 | Warning | The official standard does not include Sitemap support even though major crawlers (Google and Bing) support it. It is still nonstandard. |
| 10 | Warning | The official standard does not include Sitemap support even though major crawlers (Google and Bing) support it. It is still nonstandard. |
Warnings Detected: 5
Errors Detected: 0
robots.txt source code for http://
| Line | Code |
| 1 | User-agent: * |
| 2 | Disallow: /wp-content/plugins/ |
| 3 | Disallow: /wp-admin/ |
| 4 | Disallow: /wp-includes/ |
| 5 | |
| 6 | Sitemap: http://www.blueprintmarketing.com/sitemap_index.xml |
| 7 | Sitemap: http://app.wistia.com/sitemaps/11323.xml |
| 8 | Sitemap: http://app.wistia.com/sitemaps/4339.xml |
| 9 | Sitemap: http://app.wistia.com/sitemaps/14213.xml |
| 10 | Sitemap: http://app.wistia.com/sitemaps/23283.xml |
-
Hi Leonie,
I believe you should create a robots.txt file with a user-agent rule that disallows the /media/ folder and the .xml file. Better yet, make the unwanted XML file return a 410 and it will be dead to Google. I think I have come up with a solution below; please try pasting that in and let me know if it does not work.
Another tool for building robots.txt files and comparing them to your existing file, from the same company believe it or not, is right here:
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
Please note you are disallowing more than just /media/. The rule for the XML sitemap should look more like the one below, but for that file, why not just set it to a 410? That kills the link in Google's eyes, and then you will not have to disallow it at all.
User-agent: *
Disallow: /product/
Disallow: /media/
Disallow: /bcc.xml
Sitemap: http://example.com/sitemap_index.xml
Putting your new sitemap where I have placed one in the rules above will help you tell Google where your new sitemap resides, along with of course submitting it to Google Webmaster Tools and fetching it as Googlebot.
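Once the 410 is in place, a quick way to double-check what the server is actually returning is to look at the response headers (the URL here is just an example, use the real one):
# look for "HTTP/1.1 410 Gone" in the output
curl -I http://example.com/media/name.xml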
I would also like to look at the architecture of your site if you're still getting errors with what you showed me. If you are not comfortable sharing the URL on Q&A, you can send me a private message and I promise I will respond.
I hope this is of help,
Thomas
-
Hi Dean, happy to be of help!
-
Thanx for the URL: it gives a warning on
Disallow: /product/
and
Disallow: /media/bcc.xml
I wonder why?
-
Thomas,
That's an awesome tool, thank you for sharing.
-
If you want to find out anything that could possibly be wrong with it, this tool is, in my opinion, the holy grail for diagnosing robots.txt issues; just expect a lot more info from it than a simple response.
http://tools.seochat.com/tools/robots-txt-validator/
Sincerely,
Thomas
-
If I test the blocked URLs, they are blocked, so it looks like the file is doing what it's supposed to do. But it's still strange that I got these errors.
@Dean Andrews, thanx, I will test it without the empty lines, though I have to wait for another deployment.
-
Okay, I got these errors in Webmaster Tools; very strange it is.
-
Sounds more like a bug in the tool you're using, as I tested the syntax just now in Google Webmaster Tools and it's not causing any issues there.
-
Hi, lines containing only a comment are discarded completely and therefore do not indicate a record boundary. However, you may need to remove the line break (not 100% sure, but worth testing):
User-agent: *
Disallow: /product/
Disallow: /media/bcc.xml
-
Hi, sorry, I forgot to mention those. The errors are:
syntax error @ User-agent: *
no user agent @ Disallow: /product/
no user agent @ Disallow: /media/name.xml
Thanx, Leonie
-
Hi Leonie, what are the 3 errors? It seems that the robots.txt file syntax is correct.