Why are these m. results showing as blocked?
-
If you go to http://bit.ly/173gdWK, you'll see that m. results are showing as blocked by robots.txt, but nothing in our robots.txt file blocks m. results. Any ideas why these URLs show as blocked?
-
Yeah, I was testing exactly the same thing when you posted your response. I even tried crawling as Googlebot-Mobile and still got the 301 redirect. From everything I'm seeing, that's consistent: no matter which user agent I use (desktop, mobile, spider), I always get a 301 to the www. version.
@michelleh, are you sure there's a mobile version that isn't being redirected to the www. one?
-
(Using example.com instead of your domain in case you want anonymity later)
If you try to go to any of these m.example.com URLs on a desktop computer, the server redirects you to a www.example.com URL. I'm guessing Googlebot and Googlebot-Mobile cannot access m. pages at all (unless you're sniffing out Googlebot-Mobile specifically to serve it m.example.com pages). If you're keying these redirects off screen resolution, you're probably not catching Googlebot-Mobile, as I don't think Googlebot-Mobile reports a screen resolution in its user agent. You want Googlebot indexing your www. content and Googlebot-Mobile indexing your m. content, so you'll need to detect Googlebot-Mobile's user agent (see here) and redirect it to the m. content.
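To illustrate the idea, here's a minimal sketch of user-agent-based host selection. The regex, helper name, and hostnames are all hypothetical; your server would do the equivalent in its rewrite/redirect layer, and the mobile patterns you match should come from Google's published user-agent strings, not this simplified list:

```python
import re

# Hypothetical mobile-UA matcher. Googlebot-Mobile has to be matched by its
# user-agent string, since it doesn't report a screen resolution.
MOBILE_UA = re.compile(r"googlebot-mobile|android.*mobile|iphone", re.IGNORECASE)

def target_host(user_agent):
    """Return the host this visitor should be served from."""
    if MOBILE_UA.search(user_agent):
        return "m.example.com"
    return "www.example.com"
```

The point is simply that the decision keys off the User-Agent header, so spiders and phones land on the right subdomain even when no screen resolution is available.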
Also of note: I think these should be 302 temporary redirects, not 301 permanent redirects, between your www. and m. versions. They're not really permanent; they're just getting a given user to the right version of the site. Also, you don't let me switch from the mobile version to the desktop version, which drives me bananas! Let users choose after the initial redirect. If you allow people to switch but keep the 301 redirects, browsers may cache some of the redirects, which will lead to weird behavior when people hit a page that redirected before.
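Putting those two points together, a sketch of the redirect decision might look like this (cookie name and URLs are made up; the shape is what matters):

```python
# Issue a 302, not a 301, and honor a "prefer desktop" cookie so users can
# switch sites after the initial redirect without fighting a cached 301.
def redirect_for(is_mobile_ua, cookies):
    """Return (status, location) if a redirect is needed, else None."""
    if cookies.get("site_pref") == "desktop":
        return None  # user explicitly chose desktop; don't bounce them back
    if is_mobile_ua:
        # 302 = temporary, so browsers won't cache the hop permanently
        return (302, "http://m.example.com/")
    return None
```

A mobile user with no preference gets the 302; once they set the cookie by tapping "view desktop site," no redirect fires at all.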
You don't have a robots.txt file at m.example.com/robots.txt; that URL redirects to www.example.com/robots.txt even on my phone. I don't think this is the root of the problem, but once you sort out the redirects, you should set up a robots.txt file on your m. subdomain.
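For reference, a minimal robots.txt served at the m. host itself (not redirected) could be as simple as the following; the sitemap URL is an assumption and only belongs there if you actually publish a mobile sitemap:

```
User-agent: *
Allow: /

Sitemap: http://m.example.com/sitemap.xml
```

Each hostname gets its own robots.txt, so the www. file never applies to the m. subdomain; Google needs to fetch this file at m.example.com directly.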