Why are these results being shown as blocked by robots.txt?
-
If you perform this search, you'll see that all of the m. results are blocked by robots.txt: http://goo.gl/PRrlI. But when I reviewed the robots.txt file (http://goo.gl/Hly28), I didn't see anything that blocks crawlers from these pages.
Any ideas why these are showing as blocked?
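One quick way to sanity-check this outside of the search results is to test the live file programmatically. Below is a minimal sketch using Python's standard urllib.robotparser; the m.healthline.com URLs are assumptions based on the pages mentioned later in this thread, so substitute your own.

    from urllib.robotparser import RobotFileParser

    # robots.txt is per-host, so the m. subdomain is governed by its own file,
    # not by the one on the main www domain.
    robots_url = "http://m.healthline.com/robots.txt"  # assumed URL
    test_url = "http://m.healthline.com/treatments"    # page cited later in the thread

    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse the live file

    # Check the rules as Googlebot sees them, not just User-agent: *
    print(rp.can_fetch("Googlebot", test_url))

If this prints False while the file you reviewed looks clean, the file Google is actually fetching differs from the one you looked at.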
-
Hi,
Your robots.txt file is very large; it practically has its own universe.
Are you 100% sure all of the entries are legitimate and clean?
The first thing I would do is check Webmaster Tools for the mobile subdomain. If you haven't set that up yet, verifying the m. subdomain is a good place to start.
Once you're in Webmaster Tools, you can debug this in no time.
Cheers.
-
But even when I search from my mobile device, I get the same results (that m. is blocked).
-
I can't submit it for testing because I haven't claimed m. in GWT.
-
If you haven't already done so, I recommend testing your robots.txt file against one of your mobile pages (such as m.healthline.com/treatments) in Google Webmaster Tools. You can do this by logging into GWT, then clicking Health > Blocked URLs.
If you have already tested it in GWT, can you let us know what the results said?
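One thing worth keeping in mind is that robots.txt applies per host: m.healthline.com is governed by the file at m.healthline.com/robots.txt, not by the main domain's file. As a purely hypothetical illustration, a file like this on the m. host would produce exactly this symptom even though the www robots.txt looks clean:

    # Hypothetical m.healthline.com/robots.txt
    User-agent: *
    Disallow: /

So it's worth confirming which host's file you reviewed.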
-
Another good article from the community
-
So after a little bit of research (I've never come across this before, as all the sites we build are responsive), I found this:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=72462
It seems Google won't index a website that they think is a mobile website within the main SERPs, and vice versa...
Hope that helps, because it had me puzzled.
Regards
John
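For what it's worth, when the mobile site lives on a separate m. subdomain, the usual way to tell Google that the two versions belong together is a pair of link annotations. A hedged sketch, assuming m.healthline.com/treatments mirrors www.healthline.com/treatments:

    <!-- On the desktop page (www.healthline.com/treatments) -->
    <link rel="alternate"
          media="only screen and (max-width: 640px)"
          href="http://m.healthline.com/treatments">

    <!-- On the mobile page (m.healthline.com/treatments) -->
    <link rel="canonical" href="http://www.healthline.com/treatments">

That pairing lets Google index the desktop URL in the main SERPs while still serving the mobile URL to mobile searchers.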
-
Which directory are you storing your mobile website files in?
-
Oh, sorry, on further investigation I see it's just your mobile site that's being blocked...
-
Related Questions
-
Application & understanding of robots.txt
Hello Moz World! I have been reading up on robots.txt files, and I understand the basics. I am looking for a deeper understanding of when to deploy particular tags, and when a page should be disallowed because it will affect SEO.

I have been working with a software company that has a News & Events page which I don't think should be indexed. It changes every week and is only relevant to potential customers who want to book a demo or attend an event, not so much to search engines. My initial thinking was that I should use a noindex/follow tag on that page, so the page would not be indexed but all the links on it would still be crawled.

I decided to look at some of our competitors' robots.txt files: Smartbear (https://smartbear.com/robots.txt), b2wsoftware (http://www.b2wsoftware.com/robots.txt) and labtech (http://www.labtechsoftware.com/robots.txt). I am still confused about what type of tags I should use, and how to gauge which set of tags is best for certain pages. I figure a static page is pretty much always good to index and follow, as long as it's public, and I should always include a sitemap file. But what about a dynamic page? What about pages that are out of date? Will this help with soft 404s?

This is a long one, but I appreciate all of the expert insight. Thanks ahead of time for all of the awesome responses. Best Regards, Will H.
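The noindex/follow tag mentioned above is a one-liner in the page head; a minimal sketch:

    <!-- In the <head> of the News & Events page -->
    <meta name="robots" content="noindex, follow">

One caveat worth knowing: for a crawler to see this tag, the page must not be disallowed in robots.txt, since a blocked page is never fetched and the tag is never read.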
Intermediate & Advanced SEO | MarketingChimp10
-
Meta Noindex and Robots - Optimizing Crawl Budget
Hi, Some time ago, a few thousand pages got into Google's index - they were "product pop up" pages, exact duplicates of the actual product page but as a "quick view". So I deleted them via GWT and also put a meta noindex on these pop-up overlays to stop them being indexed and causing duplicate content issues. They are no longer within the index as far as I can see - I do a site:www.mydomain.com/ajax search and nothing appears. So can I block these off now with robots.txt to optimize my crawl budget? Thanks
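If you do block them, the rule itself is short. A hypothetical sketch, assuming all the pop-up overlays live under /ajax:

    User-agent: *
    Disallow: /ajax

One caveat: once robots.txt blocks those URLs, Googlebot can no longer fetch them and therefore can no longer see the meta noindex tags - which matters less here only because the pages are already out of the index.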
Intermediate & Advanced SEO | bjs2010
-
Robots.txt error
Intermediate & Advanced SEO | Rubix
I currently have this in my robots.txt file:

    User-agent: *
    Disallow: /authenticated/
    Disallow: /css/
    Disallow: /images/
    Disallow: /js/
    Disallow: /PayPal/
    Disallow: /Reporting/
    Disallow: /RegistrationComplete.aspx
    WebMatrix 2.0

In Webmaster Tools > Health Check > Blocked URLs, I copy and paste the code above, click Test, and everything looks OK. But when I log out and log back in, I see the code below under Blocked URLs:

    User-agent: *
    Disallow: /
    WebMatrix 2.0

Currently, Google doesn't index my domain and I don't understand why this is happening. Any ideas? Thanks, Seda
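One way to rule out a server-side issue like this is to fetch the live file directly and compare it with what you uploaded; a minimal sketch in Python, with a placeholder domain since the question doesn't name the real one:

    import urllib.request

    # Placeholder domain; substitute the real site.
    url = "http://www.example.com/robots.txt"
    with urllib.request.urlopen(url) as resp:
        print(resp.status)             # expect 200
        print(resp.read().decode())    # should match the file you uploaded

If the output shows "Disallow: /" while your local file doesn't, the server (or the WebMatrix tooling) is serving a different robots.txt than the one you edited.
-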
Anchor Tag around Table / Block
Our homepage (here) has four large promotional sections taking up most of the real estate. Each promo section has an image and styled text. We want each promo section to link to the appropriate page, so we created the promo sections as tables and wrapped each in an anchor. That works fine for users, but I tried viewing our site in a text-only browser (Lynx) and couldn't follow those links! My fear is that GoogleBot can't follow them either and doesn't know what anchor text to pull. So, my question: what's the best way to make this entire block clickable, but still have it crawlable by robots? Or is our current implementation OK? For reference, each simplified promo block was a table cell with styled text such as "All Diamonds Extra 20% Off" and "Jessica Simpson Extra 20% Off" (linking to http://jessicasimpson.jewelry.com/shop/), wrapped in an anchor.
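In HTML5, wrapping block-level content in an anchor is valid, and the key for crawlability is a real href and real text content rather than a link target assembled by JavaScript. A minimal sketch, with hypothetical class names and image paths:

    <!-- HTML5 permits <a> around block-level content -->
    <a href="http://jessicasimpson.jewelry.com/shop/">
      <div class="promo">
        <img src="/images/promo-jessica-simpson.jpg"
             alt="Jessica Simpson jewelry - extra 20% off">
        <span class="promo-text">Jessica Simpson Extra 20% Off</span>
      </div>
    </a>

A text-only browser like Lynx can follow this because the href is plain markup, and the span text gives crawlers anchor text to work with.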
Intermediate & Advanced SEO | Richline_Digital
-
C Block IP Links Strategy
Hi guys, I run a web design company and have around 50 sites that we have designed. Most don't have links back to us, so I was considering adding a footer link on each that points to a blog page within that site; each post on each site would have unique content about the project and about us as a design company. As you can see, most of my IP addresses are in the same C blocks. Any advice here please? Thanks in advance. Example IP list:
abc.32.230.1
def.20.252.37
ghi.48.68.82
zz.32.229.131
zz.32.231.208
zz.32.253.87
xx.170.40.170
xx.170.40.172
xx.170.40.232
xx.170.40.247
xx.170.40.32
xx.170.43.200
xx.170.44.103
xx.170.44.105
xx.170.44.108
xx.170.44.111
xx.170.44.127
xx.170.44.137
xx.170.44.146
xx.170.44.157
xx.170.44.77
xx.170.44.81
xx.170.44.86
xx.170.44.95
xx.170.44.96
[question edited by staff to remove full IP addresses]
Intermediate & Advanced SEO | Will_Craig
-
Robots.txt error message in Google Webmaster from a later date than the page was cached - how is that possible?
I have error messages in Google Webmaster stating that Googlebot encountered errors while attempting to access the robots.txt file. The last date this was reported was December 25, 2012 (Merry Christmas), but the last cache date was November 16, 2012 (http://webcache.googleusercontent.com/search?q=cache%3Awww.etundra.com/robots.txt&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a). How could I get this error if the page hasn't been cached since November 16, 2012?
Intermediate & Advanced SEO | eTundra
-
Rich Snippets - Product Data Showing in SERPs
Google "halloween costumes" and see the Yandy listing that shows product data and pricing under their listing. How did they do this? I realize it's a rich snippet of some kind, but I don't see in the code how or where this is done. I've yet to see anyone but Amazon influence the SERPs with rich snippets. Pretty interesting.
Intermediate & Advanced SEO | iAnalyst.com
-
10,000 New Pages of New Content - Should I Block in Robots.txt?
I'm almost ready to launch a redesign of a client's website. The new site has over 10,000 new product pages, which contain unique product descriptions but do feature some similar text to other products throughout the site. An example of the page similarities would be the following two products: "Brown leather 2 seat sofa" and "Brown leather 4 seat corner sofa". Obviously, the products are different, but the pages feature very similar terms and phrases. I'm worried that the Panda update will mean that these pages are sandboxed and/or penalised. Would you block the new pages? Add them gradually? What would you recommend in this situation?
Intermediate & Advanced SEO | cmaddison