Is my robots.txt file working?
-
Greetings from medieval York, UK.
Every time you enter my name & Liz, this page is returned in Google:
http://www.davidclick.com/web_page/al_liz.htm
But I have the following robots.txt file, which has been in place for a few weeks:
User-agent: *
Disallow: /york_wedding_photographer_advice_pre_wedding_photoshoot.htm
Disallow: /york_wedding_photographer_advice.htm
Disallow: /york_wedding_photographer_advice_copyright_free_wedding_photography.htm
Disallow: /web_page/prices.htm
Disallow: /web_page/about_me.htm
Disallow: /web_page/thumbnails4.htm
Disallow: /web_page/thumbnails.html
Disallow: /web_page/al_liz.htm
Disallow: /web_page/york_wedding_photographer_advice.htm
Allow: /
So my question is please...
"Why is this page appearing in the SERPS when its blocked in the robots txt file e.g.: Disallow: /web_page/al_liz.htm"
ANy insights welcome
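For reference, the rules can be sanity-checked with Python's built-in urllib.robotparser — a minimal sketch against the live file (this only tests whether the rules parse and match; it doesn't tell you what Google has indexed):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.davidclick.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# False means a compliant crawler would not fetch this page
print(rp.can_fetch("Googlebot", "http://www.davidclick.com/web_page/al_liz.htm"))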
-
Glad we could help
Fredrik
PS Don't forget to mark as answered
-
Brill answers guys, thanks
-
Hi Nightwing,
Fredrik gives some good pointers, and here is a little trick to try: Fetch as Google from GWMT.
- On the Webmaster Tools Home page, click the site you want.
- On the Dashboard, under Health, click Fetch as Google.
- In the text box, type the path to the page you want to check.
- In the dropdown list, select the type of fetch you want. To see what Google's web crawler Googlebot sees, select Web. To see what the mobile crawler Googlebot-Mobile sees, select cHTML (used mainly for Japanese websites) or Mobile XHTML/WML.
- Click Fetch.
This will likely give you a quick re-index and you will know whassup...
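If you want a rough approximation without waiting on GWMT, you can also fetch the page with a Googlebot User-Agent string and look at the status and headers — a sketch only, since real Googlebot is verified by IP address and some servers treat impostors differently:

import urllib.request

# spoof the Googlebot UA string (an approximation only, not real Googlebot)
req = urllib.request.Request(
    "http://www.davidclick.com/web_page/al_liz.htm",
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get("X-Robots-Tag"))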
Best,
Robert
-
Hi David
How long have you had the robots.txt file? Blocking the page in robots.txt stops Google from crawling it, but it would not automatically remove the page if it's already indexed. That would take some time.
You could try using the removal tool:
https://www.google.com/webmasters/tools/removals
If it's urgent, you could check the headers and do a 301 redirect when the visitor comes from Google. But I think it should sort itself out before too long.
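If you want the page actually dropped from the index rather than just blocked, one common approach — and it is the opposite of what you'd expect — is to let Googlebot crawl the URL but serve a noindex signal, since Googlebot never sees a noindex on a page it is disallowed from fetching. A minimal sketch, assuming Apache with mod_headers enabled (and the Disallow for this URL temporarily removed from robots.txt):

# in /web_page/.htaccess — send a noindex header for this one page
<Files "al_liz.htm">
    Header set X-Robots-Tag "noindex"
</Files>

Once the page has dropped out of the index, you can reinstate the Disallow if you still want to keep crawlers away.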
Fredrik
Related Questions
-
Google Search Console says "Sitemap is blocked by robots"?
Google Search Console is telling me "Sitemap contains URLs which are blocked by robots.txt." I don't understand why my sitemap is being blocked. My robots.txt looks like this:
User-Agent: *
Disallow:
Sitemap: http://www.website.com/sitemap_index.xml
It's a WordPress site with Yoast SEO installed. Is anyone else having this issue with Google Search Console? Does anyone know how I can fix this issue?
Technical SEO | Extima-Christian
-
Robots.txt vs. meta noindex, follow
Hi guys, I wonder what your opinion is concerning exclusion via the robots.txt file. Do you advise keeping this in use? For example:
User-agent: *
Disallow: /sale/*
Disallow: /cart/*
Disallow: /search/
Disallow: /account/
Disallow: /wishlist/*
Or do you prefer using the meta tag 'noindex, follow' instead? I keep hearing different suggestions; I'm just curious what your opinion / suggestion is.
Regards,
Tom Vledder
Technical SEO | AdenaSEO
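For reference, the meta-tag alternative puts a line like this in the <head> of each excluded page — note that the page must remain crawlable (not blocked in robots.txt) for Google to see the tag:

<meta name="robots" content="noindex, follow">
-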
When to file a Reconsideration Request
Hi all, I don't have any manual penalties from Google but did get an unnatural links message from them back in 2012. We have removed some of the spammy links over the last 2 years, but we're now making a further effort and will use the disavow tool once we've done this. Will this be enough once I submit the file, or should I / can I submit a Reconsideration Request as well? Do I have to have a manual penalty item in my webmaster account to be able to submit a request? Thanks everyone!
Technical SEO | KerryK
-
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt.
4. Then directory-style remove all old SERP URLs in the GWT URL Removal Tool.
This would be an example of an old URL:
www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
This would be an example of a new URL:
www.site.com/search?q=bmw&category=cars&color=blue
I have two specific questions: Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above? What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google says themselves that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site". And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business.
Best regards,
TalkInThePark
Technical SEO | TalkInThePark
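A minimal Apache mod_rewrite sketch of step 1 of that plan, built from the example URLs above (only the word-to-q mapping is taken from the examples; everything else is a hypothetical illustration):

RewriteEngine On
# 301 old CGI search URLs to the new search endpoint, carrying the keyword across
RewriteCond %{QUERY_STRING} (?:^|&)word=([^&]+)
RewriteRule ^/?cgi-bin/weirdapplicationname\.cgi$ /search?q=%1 [R=301,L]

The new /search template would then carry the meta robots noindex tag from step 2 until Google has processed the redirects.
-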
Is there a reason to set a crawl-delay in the robots.txt?
I've recently encountered a site that has a crawl-delay command set in its robots.txt file. I've never seen a need for this to be set, since you can set that in Google Webmaster Tools for Googlebot. They have this command set for all crawlers, which seems odd to me. What are some reasons that someone would want to set it like that? I can't find any good information on it when researching.
Technical SEO | MichaelWeisbaum
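For reference, the directive looks like this — note that Googlebot ignores Crawl-delay entirely (Google's rate is set in Webmaster Tools, as the question says), while crawlers such as Bingbot treat the value as, roughly, the number of seconds to wait between successive requests:

User-agent: *
Crawl-delay: 10
-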
File from godaddy.com
Hi, One of our clients has received a file from godaddy.com, where his site is hosted. Here is the message from the client: "I submitted my site for Search Engine Visibility, but they got some issue on the site that needs to be fixed. I tried myself but could not fix it." The site in question is http://allkindofessays.com/. Is there any problem with the site? Contents of the file:
[binary Safari .webarchive/plist data omitted — it appears to contain only a cached tracking-pixel GIF response referencing tags.bluekai.com and segment-pixel.invitemedia.com, not page content]
Technical SEO | seoug_2005
-
Website Structure - What works best for passing the Link Juice?
As most people know, it's usual for the main menu to come after the top-right small links in the HTML. My questions are: What is more important for Google, or how does Google tell which link is more important than the other to pass juice on? If the top-right links come before the main menu in the HTML, would they get more link juice than the main menu? Should I focus on working on a better HTML structure while still keeping the same look (reverse the HTML code but keep the same look through CSS)? Any suggestions?
Technical SEO | mosaicpro
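For what it's worth, a minimal sketch of that last idea — main menu first in the HTML source, utility links moved to the top-right purely with CSS (all class names and links here are hypothetical):

<!doctype html>
<html>
<head>
<style>
  /* pin the utility links top-right even though they come later in the source */
  body { position: relative; }
  .utility-links { position: absolute; top: 0; right: 0; margin: 0; list-style: none; }
</style>
</head>
<body>
  <!-- main menu first in source order, so crawlers encounter these links first -->
  <nav class="main-nav">
    <a href="/">Home</a>
    <a href="/portfolio">Portfolio</a>
  </nav>
  <!-- small utility links later in source, positioned top-right visually -->
  <ul class="utility-links">
    <li><a href="/login">Log in</a></li>
    <li><a href="/contact">Contact</a></li>
  </ul>
</body>
</html>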