Why are these results being shown as blocked by robots.txt?
-
If you perform this search (http://goo.gl/PRrlI), you'll see that all of the m. results are blocked by robots.txt. But when I reviewed the robots.txt file (http://goo.gl/Hly28), I didn't see anything that would block crawlers from these pages.
Any ideas why these are showing as blocked?
-
Hi,
Your robots.txt file is very... bulked up. It's practically its own universe.
Are you 100% sure all of the entries are legit and clean?
The first thing I would do is check Webmaster Tools for the mobile subdomain. If you haven't set that up yet, verifying the m. subdomain is a good place to start.
Once you're in Webmaster Tools, you can debug this in no time.
Cheers.
-
But even when I search from my mobile device, I get the same results (that m. is blocked).
-
I can't submit it because I haven't claimed m. in GWT.
-
If you haven't already done so, I recommend testing your robots.txt file against one of your mobile pages (such as m.healthline.com/treatments) in Google Webmaster Tools. You can do this by logging into GWT, then clicking Health, then Blocked URLs.
If you have already tested it in GWT, can you let us know what the results said?
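If it helps, you can also sanity-check this outside of GWT with a quick script. Below is a rough sketch using Python's standard-library robots.txt parser; the m.healthline.com URLs are just the ones discussed above, and the key point is that robots.txt is evaluated per host, so it's the file served by the m. subdomain (not the one on www) that governs these pages. Note that Python's parser only understands plain prefix rules, so wildcard directives may be evaluated differently than Google would evaluate them.

```python
from urllib.robotparser import RobotFileParser

# robots.txt is evaluated per host, so the file on the m. subdomain
# (not the one on www) is what matters for m. URLs.
parser = RobotFileParser("http://m.healthline.com/robots.txt")
parser.read()

# URLs taken from the discussion above
for url in ["http://m.healthline.com/", "http://m.healthline.com/treatments"]:
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)
```

If that script and GWT's Blocked URLs tool disagree, the wildcard caveat above is the first thing I'd look at.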
-
Another good article from the community
-
So after a little bit of research (I'd never come across this before, as all the sites we build are responsive), I found this:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=72462
It seems Google won't index a website that it thinks is a mobile website within the main SERPs, and vice versa...
Hope that helps, cause it had me puzzled
Regards
John
-
Which directory are you storing your mobile website files in?
-
Oh, sorry, on further investigation I see it's just your mobile site that's being blocked...
-
Related Questions
-
Software "card" carousel results
Hi all! Does anyone have advice for getting a software product to appear in the card results at the top of SERPs? Example: https://www.google.com/search?q=budgeting+software&rlz=1C1CHBF_enUS784US784&oq=budgeting+software&aqs=chrome..69i57j0l5.2194j0j7&sourceid=chrome&ie=UTF-8
Intermediate & Advanced SEO | SimpleSearch
-
Google Mobile Friendly designation in Search results
We have recently deployed a mobile (http://m.pssl.com) version of our desktop website (http://www.pssl.com). We've followed the guidelines in Google's documentation (https://support.google.com/webmasters/answer/6101188) and (http://googlewebmastercentral.blogspot.com/2015/04/rolling-out-mobile-friendly-update.html), added the appropriate rel=alternate/rel=canonical tags, updated site maps and robots.txt files, etc. A mobile search for our company shows the "mobile-friendly" flag in the search results for our home page, but for some reason other pages such as category and brand are not showing as "mobile-friendly". I can submit the pages using the mobile-friendly tester (https://www.google.com/webmasters/tools/mobile-friendly/) and all of the pages I test come back as mobile friendly. Does anyone have any experience or advice they'd be willing to share that might help us resolve this issue?
Intermediate & Advanced SEO | ovenbird
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:

1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo

The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.

We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query).

We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right.

Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.

Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.

Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're pagerank sculpting?).

Noindex advantages:
- Does prevent vehicle details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).

Noindex disadvantages:
- Difficult to implement: vehicle details pages are served using Ajax, so they have no <head> tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.

Hash (#) URL advantages:
- By using hash (#) URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like pagerank sculpting (?).
- Does not require complex Apache stuff.

Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?

Initially, we implemented robots.txt (the "sledgehammer solution"). We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're pagerank sculpting or something like that.

If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.

My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these.

Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
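As an aside on the noindex option above, since it hinges on the X-Robots-Tag HTTP header: here is a minimal sketch of setting that header from application code instead of through the Apache rewrite described in the question. It's a generic Python WSGI example, not the poster's actual plugin, and the /vehicle-detail path prefix is a hypothetical placeholder.

```python
from wsgiref.simple_server import make_server


def application(environ, start_response):
    """Tiny WSGI app that adds 'X-Robots-Tag: noindex' to vehicle detail
    responses, telling crawlers that fetch the Ajax URL not to index it."""
    path = environ.get("PATH_INFO", "")
    is_detail = path.startswith("/vehicle-detail")  # hypothetical URL pattern

    headers = [("Content-Type", "text/html; charset=utf-8")]
    if is_detail:
        # Same effect as a noindex meta tag, but it works for responses
        # that are HTML fragments with no head section.
        headers.append(("X-Robots-Tag", "noindex"))

    start_response("200 OK", headers)
    return [b"vehicle detail fragment" if is_detail else b"listing page"]


if __name__ == "__main__":
    # Serve locally for testing: curl -i http://localhost:8000/vehicle-detail/123
    make_server("localhost", 8000, application).serve_forever()
```

The same header can of course be sent by Apache or any other server layer; the point is only that noindex does not require a head section on the page.
-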
Local Results For Additional Service Categories
Hi Mozzers, My client is prominent in local search for their primary activity, but I would also like them to appear for other service categories they offer. Assuming I add these other service categories in +Local and build corresponding service pages on the site, will this be enough to cause them to appear for these other services? The additional preset service categories offered in +Local don't match those offered in local citations, so I can't really support them that way.
Intermediate & Advanced SEO | waynekolenchuk
-
Rich snippets showing on some pages but not others
Hi, I have rich snippet markup showing in SERPs for some pages but not others. All pages test fine using Google's structured data testing tool. What's really annoying is that they seem to appear for some pages but not others within the same directory / page format. None of Google's troubleshooting suggestions on the issue are a problem, i.e.:

- Does your markup follow our usage guidelines?
- Is your marked-up content hidden from users?
- Is your markup incorrect or misleading?
- Is your marked-up content representative of the main content of the page?
- Have you supplied enough information?
- Have you only recently updated your content?
- Does your markup include incorrect nesting?
- Reviews: Does your review use count instead of vote?

There are a lot of instances when the same markup is used twice, e.g. on product x page in one directory and on product x page in a different directory (there's no dupe content). I wondered if that could be a reason, but there are a lot of instances when product x in directory a has the snippet when it doesn't in directory b. There doesn't seem to be an identifiable pattern as to why one page would show the snippet and not another. Any feedback appreciated. Happy to pm example pages. Andy
Intermediate & Advanced SEO | AndyMacLean
-
Blocking out specific URLs with robots.txt
I've been trying to block out a few URLs using robots.txt, but I can't seem to get the specific one I'm trying to block. Here is an example: I'm trying to block something.com/cats but not block something.com/cats-and-dogs. It seems if I set up my robots.txt like so:

Disallow: /cats

it's blocking both URLs. When I crawl the site with Screaming Frog, that Disallow is causing both URLs to be blocked. How can I set up my robots.txt to specifically block /cats? I thought it was by doing it the way I was, but that doesn't seem to solve it. Any help is much appreciated, thanks in advance.
Intermediate & Advanced SEO | Whebb
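For what it's worth, Disallow rules are prefix matches, which is why /cats also catches /cats-and-dogs. Google (and Bing) also support a $ end-of-URL anchor, so Disallow: /cats$ should block only the exact /cats path while leaving /cats-and-dogs crawlable. The snippet below is only a rough approximation of that matching logic, written to illustrate the difference (it is not Google's actual parser), using the example paths from the question.

```python
import re


def google_style_match(pattern: str, path: str) -> bool:
    """Rough approximation of Googlebot's Disallow matching: '*' matches any
    run of characters, a trailing '$' anchors the end of the URL path, and
    everything else is a literal prefix match."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None


# 'Disallow: /cats' is a prefix rule, so it blocks both URLs:
print(google_style_match("/cats", "/cats"))           # True  (blocked)
print(google_style_match("/cats", "/cats-and-dogs"))  # True  (blocked)

# 'Disallow: /cats$' matches only the exact path:
print(google_style_match("/cats$", "/cats"))           # True  (blocked)
print(google_style_match("/cats$", "/cats-and-dogs"))  # False (allowed)
```

It's worth confirming the final rule in GWT's robots.txt tester, since $ is an extension to the original robots.txt convention and tools like Screaming Frog may interpret it differently.
-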
Robots
I have just noticed this in my code: <meta name="robots" content="noindex">, and I have noticed that some of my keywords have dropped. Could this be the reason?
Intermediate & Advanced SEO | Paul78
-
Google top results API / scraper?
Hi, Does anyone know of an API/scraper that would allow you to get the top N Google search results for a given query? Ideally the results would be "neutral" (no personalization, no localization, etc.) Thanks!
Intermediate & Advanced SEO | anthematic
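One option that may fit (mentioned as a suggestion, not a definitive answer) is Google's Custom Search JSON API, which returns results programmatically for a search engine you configure and is not personalized, though it is not a perfect mirror of the live SERPs either. A minimal sketch, with the API key and engine ID left as placeholders you would supply yourself:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"    # placeholder: created in the Google API console
ENGINE_ID = "YOUR_CX_ID"    # placeholder: the custom search engine's cx value


def top_results(query: str, n: int = 10):
    """Fetch up to n results for a query (the API caps one request at 10)."""
    params = urllib.parse.urlencode({
        "key": API_KEY,
        "cx": ENGINE_ID,
        "q": query,
        "num": min(n, 10),
    })
    url = "https://www.googleapis.com/customsearch/v1?" + params
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return [(item["title"], item["link"]) for item in data.get("items", [])]


if __name__ == "__main__":
    for title, link in top_results("budgeting software"):
        print(title, "|", link)
```

Scraping the live results pages directly is against Google's terms of service, which is another reason an official API (or a third-party rank-tracking tool) is usually the safer route.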