Is my robots.txt file working?
-
Greetings from medieval York, UK.
Every time you enter my name & Liz, this page is returned in Google:
http://www.davidclick.com/web_page/al_liz.htm
But I have the following robots.txt file, which has been in place a few weeks:
User-agent: *
Disallow: /york_wedding_photographer_advice_pre_wedding_photoshoot.htm
Disallow: /york_wedding_photographer_advice.htm
Disallow: /york_wedding_photographer_advice_copyright_free_wedding_photography.htm
Disallow: /web_page/prices.htm
Disallow: /web_page/about_me.htm
Disallow: /web_page/thumbnails4.htm
Disallow: /web_page/thumbnails.html
Disallow: /web_page/al_liz.htm
Disallow: /web_page/york_wedding_photographer_advice.htm
Allow: /
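As a quick sanity check (a sketch of my own, not from the thread), Python's built-in urllib.robotparser will confirm whether that file actually blocks the page for Googlebot:

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt and parse it.
rp = RobotFileParser("http://www.davidclick.com/robots.txt")
rp.read()

# Prints False if the Disallow rule is parsed as intended,
# True if the file is not actually blocking the URL.
print(rp.can_fetch("Googlebot", "http://www.davidclick.com/web_page/al_liz.htm"))

Note that False here only means crawling is blocked, which, as the answers below explain, is not the same as the page being dropped from the index.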
So my question is please...
"Why is this page appearing in the SERPS when its blocked in the robots txt file e.g.: Disallow: /web_page/al_liz.htm"
Any insights welcome.
-
Glad we could help
Fredrik
PS Don't forget to mark as answered
-
Brill answers guys thanks
-
Nightwing
Fredrik gives some good pointers, and here is a little trick to try: Fetch as Google from Google Webmaster Tools (GWMT).
- On the Webmaster Tools Home page, click the site you want.
- On the Dashboard, under Health, click Fetch as Google.
- In the text box, type the path to the page you want to check.
- In the dropdown list, select the type of fetch you want. To see what our web crawler Googlebot sees, select Web. To see what our mobile crawler Googlebot-Mobile sees, select cHTML (this is used mainly for Japanese web sites) or Mobile XHTML/WML.
- Click Fetch.
This will likely give you a quick re-index and you will know whassup...
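If you want a rough command-line approximation of that (my own sketch, not a substitute for the GWMT fetch), request the page with Googlebot's user-agent string and compare the response with what a normal browser gets:

import urllib.request

# Fetch the page identifying as Googlebot.
req = urllib.request.Request(
    "http://www.davidclick.com/web_page/al_liz.htm",
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)     # HTTP status the server returns to Googlebot
    print(resp.read(300))  # first bytes of the body

Bear in mind this only shows what your server sends when it sees Googlebot's user-agent; it will not trigger a recrawl the way Fetch as Google can.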
Best,
Robert
-
Hi David
How long have you had the robots.txt file? Preventing Google from crawling the page would not automatically remove it from the index if it's already there. That would take some time.
You could try using the removal tool:
https://www.google.com/webmasters/tools/removals
If it's urgent, you could check the header and do a 301 redirect if the user comes from Google. But I think it should sort itself out before too long.
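Fredrik doesn't spell out an implementation, so here is a minimal sketch of that Referer check, assuming a Python/Flask server; the route and destination URL are illustrative:

from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/web_page/al_liz.htm")
def al_liz():
    # If the visitor arrived from a Google results page, 301 them away.
    referrer = request.referrer or ""
    if "google." in referrer:
        return redirect("http://www.davidclick.com/", code=301)
    # Everyone else gets the normal page.
    return "normal page content"

Treat this as a stopgap at best: referrer-conditional redirects can look like cloaking, so the removal tool above is the safer route.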
Fredrik
Related Questions
-
Blocking pages in robots.txt that are under a redirected subdomain
Hi Everyone, I have a lot of Marketo landing pages that I don't want to show in the SERPs. Adding the noindex meta tag to each page would be too much; I have thousands of pages. Blocking them in robots.txt could have been an option, BUT the subdomain homepage is redirected to my main domain (with a 302), so I may confuse search engines (should they follow the redirect or should they block?). marketo.mydomain.com is redirected to www.mydomain.com. disallow: / (I think this will be confusing with the redirect.) I don't have folders; all pages are under the subdomain, so I can't block folders in robots.txt either. Has anyone had this scenario, or any suggestions? I appreciate your thoughts here. Thank you, Rachel
-
Subdomains and robots.txt files...
This is going to seem like a stupid question, and perhaps it is, but I am pulling out what little hair I have left. I have a sub-level domain on which a website sits. The main domain has a robots.txt file that disallows all robots. It has been two weeks; I submitted the sitemap through Webmaster Tools and still Google has not indexed the subdomain website. My question is: could the robots.txt file on the main domain be affecting the crawlability of the website on the subdomain? I wouldn't have thought so, but I can find nothing else. Thanks in advance.
-
Will Redirecting Unnatural Links to a New Domain Work?
Hi All, If a website www.mywebsite.com has received 100 unnatural links and its rankings have dropped because of that, I want to follow the strategy below to protect my website from these 100 unnatural links. Please let me know if it will work:
- Buy a new domain www.newdomain.com with a new whois record (not related to www.mywebsite.com)
- 301 redirect all 100 unnatural links (those pointing to www.mywebsite.com) to www.newdomain.com (the new domain)
- 302 redirect from www.newdomain.com (the new domain) to www.mywebsite.com (the old domain)
After applying the above strategy, will Google still consider those 100 unnatural links for the old domain?
-
Google ignores Meta name="Robots"
Ciao from 24 degrees C Wetherby UK. On this page http://www.perspex.co.uk/products/palopaque-cladding/ this line was added to block indexing: But it has not worked; when you google "Palopaque PVC Wall Cladding" the page appears in the SERPs. I'm going to upload a robots.txt file in a second attempt to block indexing, but my question is please: Why is it being indexed? Grazie, David
-
Blocked by robots
my client's GWT has a number of notices for "blocked by meta-robots" - these are all either blog posts, categories, or tags. His former SEO told him this: "We've activated the following settings:
- Use noindex for Categories
- Use noindex for Archives
- Use noindex for Tag Archives
to reduce keyword stuffing & duplicate post tags. Disabling all 3 noindex settings above may remove google blocks but also will send too many similar tags, post archives/category."
Is this guy correct? What would be the problem with indexing these? Am I correct in thinking they should be indexed? Thanks
-
Empty Meta Robots Directive - Harmful?
Hi, We had a coding update, and a side-effect was that our meta robots directive was emptied; in other words, it is now an empty tag on all of the site. I've since noticed that Google's cache date on all of the pages (at least, the ones I tested) is no later than 17 December '12, the Monday after the directive was emptied en masse. So, A: does anyone have solid evidence of an empty directive causing problems? Past experience, a Matt Cutts or Fishkin quote, etc. And then B: it seems fairly well correlated, but does my entire site's homogeneous cached date point to this tag removal? Or is it fairly normal to have a uniform cache date across a large site (we're a large ecommerce site)? Our site: http://www.zando.co.za/ I'm having the directive reinstated as soon as dev permits. And then, for extra credit, is there a way with Google's API, or perhaps some other tool, to run an arbitrary list and retrieve cached dates? I'd want to do this for diagnosis purposes, and preferably in a way that's OK with Google. I'd avoid cURLing for the cached URL and scraping out the dates with Bash, or any such thing. Cheers,
-
Restricted by robots.txt: does this cause problems?
I have restricted around 1,500 links, which are links to retailers' websites and affiliate links, according to Webmaster Tools. Is this the right approach, as I thought it would affect the link juice? Or should I take the nofollow out of the links restricted by the robots.txt file?
-
Any value in external links to image files?
Let's say you have www.example.com. On this website, you have www.example.com/example-image.jpg. When someone links externally to this image, like below:
<a href="www.example.com/example-image.jpg"><img src="www.example.com/example-image.jpg"></a>
the external site would be using the image hosted on your site, but the image is also linked back to the same image file on your site. Does this have any value even though the link is back to the image file and not the website? Also, how much value do you guys feel image links have in relation to text links, in terms of passing link juice and adding to a natural link profile? Thanks!