Google robots.txt test - not picking up syntax errors?
-
I just ran a robots.txt file through the "Google robots.txt Tester", as there was some unusual syntax in the file that didn't make any sense to me, e.g.:
- /url/?*
- /url/?
- /url/*
...and so on. Why use ?* in one rule and just ? in another, and what is the ? for at all? - etc.
Yet "Google robots.txt Tester" did not highlight the issues...
I then fed the same robots.txt file through http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php and that tool actually picked up my concerns.
Can anybody explain why Google didn't - or perhaps it isn't supposed to pick up such errors?
Thanks, Luke
-
Many thanks Beau - much appreciated.
-
Hey Luke,
It appears there is a plausible case for each of the three examples. Let's cover each:
- /url/?* can be read as blocking a URL with a trailing slash followed by a query string (see examples here).
- /url/? covers the above and, in addition, would plausibly block product pages that generate query strings - similar to this example from H&M. In essence, it only allows the clean product page to be seen.
- /url/* - well, that's just anything and everything after the trailing slash.
I guess the question you should ask yourself is "Is this the best approach for the issue?"
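Under Google's documented matching rules ('*' matches any run of characters, '$' anchors the end of the path, and a pattern otherwise matches as a prefix), the behaviour of these three patterns can be sketched with a small matcher. This is a rough approximation for illustration, not Google's actual implementation, and the sample URLs are hypothetical:

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Approximate Google's robots.txt path matching: '*' matches any
    sequence of characters, a trailing '$' anchors the end, and the
    pattern otherwise matches as a prefix of the URL path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex + ("$" if anchored else ""), path) is not None

# /url/?* and /url/? behave the same way under prefix matching: both
# block any path that begins with "/url/?" (the trailing * adds nothing).
print(robots_match("/url/?*", "/url/?color=red"))  # True
print(robots_match("/url/?", "/url/?color=red"))   # True

# /url/* blocks everything under /url/, query string or not.
print(robots_match("/url/*", "/url/some-page"))    # True
```

If this sketch is right, the redundant forms explain why Google's tester raised no errors: the syntax is valid, just not minimal.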
Related Questions
-
IO Error - what does this mean?
I did a quick check on https://validator.w3.org and got this error: IO Error - java.security.cert.CertificateException: Certificates do not conform to algorithm constraints. What does this mean?
Intermediate & Advanced SEO | BeckyKey -
Robots.txt wildcards - the devs had a disagreement - which is correct?
Hi - the lead website developer was assuming that this wildcard: Disallow: /shirts/?* would block URLs including a ? within this directory, and in all subdirectories of this directory that include a "?". The second developer suggested that this wildcard would only block URLs featuring a ? that comes immediately after /shirts/ - for example: /shirts?minprice=10&maxprice=20 - but argued that this robots.txt directive would not block URLs featuring a ? in subdirectories, e.g. /shirts/blue?mprice=100&maxp=20. So which of the developers is correct? Beyond that, I assumed that the ? should feature a * on each side of it - for example *?* - to work as intended above. Am I correct in assuming that?
Intermediate & Advanced SEO | McTaggart -
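The disagreement above can be checked mechanically against Google's documented prefix-plus-wildcard matching. A minimal sketch, assuming the matcher below is a fair approximation of those rules (the URLs are the ones from the question):

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Approximate Google's robots.txt matching: '*' matches any run of
    characters and the pattern matches as a prefix of the URL path."""
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex, path) is not None

# Disallow: /shirts/?*  -- the '?' must come immediately after /shirts/,
# so the query string in a subdirectory slips through.
print(robots_match("/shirts/?*", "/shirts/?minprice=10&maxprice=20"))  # True
print(robots_match("/shirts/?*", "/shirts/blue?mprice=100&maxp=20"))   # False

# Disallow: /shirts/*?  -- a '?' anywhere after /shirts/ matches,
# including in subdirectories; a trailing '*' after the '?' adds nothing
# in a prefix-matching scheme.
print(robots_match("/shirts/*?", "/shirts/blue?mprice=100&maxp=20"))   # True
```

Under those assumptions the second developer's reading of /shirts/?* is correct, and /shirts/*? is the form that reaches query strings in subdirectories.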
Robots.txt File Not Appearing, but seems to be working?
Hi Mozzers, I am conducting a site audit for a client, and I am confused by what they are doing with their robots.txt file. GWT shows that there is a file and that it is blocking about 12K URLs (image attached). It also shows that the file was downloaded successfully 10 hours ago. However, when I go to the robots.txt file link, the page is blank. Would they be doing something advanced to block URLs while hiding the file from users? It appears to correctly be blocking log-ins, but I would like to know for sure that it is working correctly. Any advice on this would be most appreciated. Thanks! Jared
Intermediate & Advanced SEO | J-Banz -
Google Indexing Feedburner Links???
I just noticed that for lots of the articles on my website, there are two results in Google's index. For instance: http://www.thewebhostinghero.com/articles/tools-for-creating-wordpress-plugins.html and http://www.thewebhostinghero.com/articles/tools-for-creating-wordpress-plugins.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+thewebhostinghero+(TheWebHostingHero.com) Now my Feedburner feed is set to "noindex" and it's always been that way. The canonical tag on the webpage is set to: rel='canonical' href='http://www.thewebhostinghero.com/articles/tools-for-creating-wordpress-plugins.html' /> The robots tag is set to: name="robots" content="index,follow,noodp" /> I found out that there are scraper sites linking to my content using the Feedburner link. So should the robots tag be set to "noindex" when the requested URL is different from the canonical URL? If so, is there an easy way to do this in WordPress?
Intermediate & Advanced SEO | sbrault74 -
We are ignored by Google - what should we do?
Hi, We believe that our website - https://en.greatfire.org - is being all but ignored by Google Search. The following two examples illustrate our case. 1. Searching for “China listening in on Skype - Microsoft assumes you approve”. This is the title of a blog post that we wrote which received some 50,000 visits. On Yahoo and Bing search, we rank first for this search. On Google, however, we rank 7th. Each of the six pages ranking higher than us are quoting and linking to our story. 2. Searching for “Online Censorship In China”. This is the title of our front page. Yahoo and Bing both rank us third for this search. On Google, however, we are not even among the first 300 results. Two of the pages among the first 10 results link to us. Our website has an average of around 1000 visits per day. We are quoted in and linked from virtually all Western mainstream media (see https://en.greatfire.org/press). Yet to this day we are receiving almost no traffic from Google Search. Our mission is to bring transparency to online censorship in China. If people could find us in Google, it would greatly help to spread awareness of the extent of Internet restrictions here. If you could indicate to us what the cause of our poor rankings could be, we would be very grateful. Thank you for your time and consideration.
Intermediate & Advanced SEO | GreatFire.org -
Robots.txt: Can you put a /* wildcard in the middle of a URL?
We have noticed that Google is indexing the language/country directory versions of directories we have disallowed in our robots.txt. For example: Disallow: /images/ is blocked just fine. However, once you add our /en/uk/ directory in front of it, there are dozens of pages indexed. The question is: can I put a wildcard in the middle of the string, e.g. /en/*/images/, or do I need to list out every single country for every language in the robots file? Anyone know of any workarounds?
Intermediate & Advanced SEO | IHSwebsite -
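Google's robots.txt documentation does allow '*' in the middle of a pattern, so a rule like /en/*/images/ should cover every language/country directory without listing each one. A rough check of that behaviour, using a small approximation of the documented matching rules (the paths are hypothetical):

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Approximate Google's robots.txt matching: '*' matches any run of
    characters and the pattern matches as a prefix of the URL path."""
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex, path) is not None

# Disallow: /images/ only matches paths that *start* with /images/ ...
print(robots_match("/images/", "/en/uk/images/photo.jpg"))       # False

# ... while a mid-pattern wildcard reaches the localized copies too.
print(robots_match("/en/*/images/", "/en/uk/images/photo.jpg"))  # True
print(robots_match("/en/*/images/", "/en/fr/images/photo.jpg"))  # True
```

Worth confirming any such rule with Google's own robots.txt Tester before shipping it, since other crawlers may not support wildcards at all.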
Google +1 and Yslow
After adding Google's +1 script and call to our site (loading asynchronously), we noticed Yslow is giving us a D for not having expire headers for the following scripts: https://apis.google.com/js/plusone.js https://www.google-analytics.com/ga.js https://lh4.googleusercontent.com... Is there a workaround for this issue, so expire headers are added to the plusone and GA scripts? Or are we being too nit-picky about this issue?
Intermediate & Advanced SEO | GKLA -
Remove www. in google webmaster
Hi. My baseball blog (mopupduty.com) shows up as www.mopupduty.com in Google Webmaster Tools. This is an issue for me, as my WordPress sitemap plug-in will only serve the sitemap at http://mopupduty.com/sitemap.xml, not the www. version. Is there any way to change the www. in Webmaster Tools without deleting my existing index? The website currently has sitelinks in search results, and I'm not too keen on giving them up via deletion. Thanks
Intermediate & Advanced SEO | mkoster