Google robots.txt test - not picking up syntax errors?
-
I just ran a robots.txt file through the Google robots.txt Tester, as there was some unusual syntax in the file that didn't make any sense to me, e.g.:
/url/?*
/url/?
/url/*
and so on. The mix of ? and * in these rules baffled me - what is the * even for?
Yet the Google robots.txt Tester did not highlight the issues...
I then fed the same robots.txt file through http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php and that tool did pick up my concerns.
Can anybody explain why Google didn't - or perhaps it isn't supposed to pick up such errors?
Thanks, Luke
-
Many thanks Beau - much appreciated.
-
Hey Luke,
It appears there is a plausible case for each of the three examples. Let's cover each:
- For /url/?* , this matches a URL with a trailing slash followed by a query string, see examples here.
- With /url/? , this covers the same cases as the above and, in addition, would plausibly block product pages that generate query strings, similar to this example from H&M. In essence, it allows only the bare product page to be seen.
- /url/* , well, that's just anything and everything after the trailing slash.
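One way to see why a validator has little to flag here is to sketch the documented wildcard semantics yourself: * matches any run of characters, a trailing $ anchors the end, and a rule is otherwise a prefix match against the path. The snippet below is an illustrative sketch of those rules, not Google's actual implementation, and the example paths are hypothetical:

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    """Match a URL path against a robots.txt rule pattern using
    Google-style semantics: '*' matches any run of characters,
    a trailing '$' anchors the end of the path, and the rule is
    otherwise a prefix match."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn each '*' into '.*'
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.match("^" + body + ("$" if anchored else ""), path) is not None

# The three rules from the question, tried against some hypothetical paths
for rule in ("/url/?*", "/url/?", "/url/*"):
    for path in ("/url/?colour=red", "/url/page", "/url/"):
        print(f"{rule!r:12} vs {path!r:20} -> {rule_matches(rule, path)}")
```

Running this shows that /url/?* and /url/? match exactly the same paths (a trailing * is redundant, since rules are prefix matches anyway). So the syntax is unusual and arguably sloppy, but not invalid, which would explain why the tester stays quiet.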
I guess the question you should ask yourself is "Is this the best approach for the issue?"