Shopify robots blocking stylesheets causing inconsistent mobile-friendly test results?
-
One of our Shopify sites suffered an extreme rankings drop. Recent Google algorithm updates include mobile-first indexing, so I tested the site, and our team got inconsistent mobile-friendly test results. Search Console is also flagging pages as not mobile-friendly. So while we as end users see the site as fine on mobile, that may not be the case for Google?
I researched the inconsistent mobile test results and found answers suggesting they may be due to robots.txt blocking stylesheets.
Do you recognise any blocked directory that might be affecting Google's rendering? Unfortunately, we can't edit the Shopify robots.txt. Our dev said the only thing that stands out to him is Disallow: /design_theme_id; the rest shouldn't be hindering Google's bots.
Here are some of the files blocked:
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /9103034/checkouts
Disallow: /9103034/orders
Disallow: /carts
Disallow: /account
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Disallow: /blogs/+
Disallow: /blogs/%2B
Disallow: /blogs/%2b
Disallow: /design_theme_id
Disallow: /preview_theme_id
Disallow: /preview_script_id
Disallow: /discount/*
Disallow: /gift_cards/*
Disallow: /apple-app-site-association
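As a quick sanity check, the quoted rules can be replayed through Python's standard-library robots.txt parser to see what Googlebot is actually barred from. This is a local sketch using a placeholder domain and only a subset of the rules above; note that none of these rules match ordinary product or collection paths, and Shopify theme assets are typically served from cdn.shopify.com, which is governed by that host's own robots.txt rather than the store's:

```python
from urllib.robotparser import RobotFileParser

# A subset of the quoted rules, parsed locally.
# The store domain below is a placeholder; nothing is fetched over the network.
RULES = """\
User-agent: *
Disallow: /admin
Disallow: /checkout
Disallow: /design_theme_id
"""

robots = RobotFileParser()
robots.parse(RULES.splitlines())

def allowed(path: str) -> bool:
    """True if Googlebot may fetch this path under the rules above."""
    return robots.can_fetch("Googlebot", "https://example-store.com" + path)

# Storefront pages pass; only the admin/checkout/theme-preview paths are barred.
for path in ["/products/widget", "/collections/all", "/checkout", "/design_theme_id"]:
    print(path, "->", "allowed" if allowed(path) else "blocked")
```

(Be aware that `urllib.robotparser` does not implement Google's `*` wildcard extension, so rules like `Disallow: /gift_cards/*` are matched literally; for those, check behaviour in Search Console's robots.txt tester instead.)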
Nikki, if you feel comfortable sharing one of the sites, that would be helpful for further investigation!
It seems like there's a lot you're blocking in your robots.txt that you might need (though that's hard to suss out without knowing your site):

Disallow: /blogs/+, Disallow: /blogs/%2B, Disallow: /blogs/%2b
- assuming /blogs is the primary page path for your blog content (e.g. /blogs/blog-title-goes-here), these could keep Google away from it

Disallow: /design_theme_id
- assuming this matches one of your stylesheets, you should probably remove it

Disallow: /gift_cards/*
- no idea what comes after /gift_cards/[here], but this may be unnecessary
There may be other reasons Google doesn't reliably see your site as mobile-friendly. If you can provide an example site, we can dig deeper.
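One way to test the "robots.txt is blocking stylesheets" theory directly is to collect a page's `<link rel="stylesheet">` URLs and check each one against the robots rules. The sketch below runs entirely offline; the markup, the CDN URL, and the `/design_theme_id/preview.css` path are all made up for illustration (a real check would fetch the live page and the live robots.txt):

```python
from html.parser import HTMLParser
from urllib.robotparser import RobotFileParser

# Hypothetical page markup with one CDN-hosted and one same-host stylesheet.
PAGE = """
<html><head>
<link rel="stylesheet" href="https://cdn.shopify.com/s/files/1/theme.css">
<link rel="stylesheet" href="/design_theme_id/preview.css">
</head><body></body></html>
"""

# Hypothetical robots rules for the store's own host.
ROBOTS = """\
User-agent: *
Disallow: /design_theme_id
"""

class StylesheetCollector(HTMLParser):
    """Collect href values from <link rel="stylesheet"> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "link" and d.get("rel") == "stylesheet" and "href" in d:
            self.hrefs.append(d["href"])

robots = RobotFileParser()
robots.parse(ROBOTS.splitlines())

collector = StylesheetCollector()
collector.feed(PAGE)

# Stylesheets Googlebot would be refused under these rules.
# (Assets on a different host, like cdn.shopify.com, are really governed
# by that host's own robots.txt; this sketch applies one rule set to both.)
blocked = [h for h in collector.hrefs if not robots.can_fetch("Googlebot", h)]
print("stylesheets found:", collector.hrefs)
print("blocked for Googlebot:", blocked)
```

If any stylesheet shows up as blocked, that is exactly the situation that makes rendering-based tests (and mobile-friendliness verdicts) flip between runs.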
Related Questions
Blocking Google from telemetry requests
At Magnet.me we track the items people are viewing in order to optimize our recommendations. As such, we fire POST requests back to our backends every few seconds once enough user-initiated actions have happened (think scrolling, for example). To keep bots from distorting statistics, we ignore their values server-side. Based on some internal logging, we see that Googlebot is also performing these POST requests during its JavaScript crawling; over a 7-day period, that amounts to around 800k POST requests. As we are ignoring that data anyhow, and it is quite a number, we considered reducing this for bots. We had several questions about this:
1. Do these requests count towards crawl budgets?
2. If they do, and we'd want to prevent this from happening, what would be the preferred option: preventing the request in the frontend code, or blocking the request with a robots.txt line? The concern with blocking in the frontend code is that it could lead to different behaviour for users and bots, which Google might penalize as cloaking; it is also less convenient from a development perspective, as the logic is spread throughout the application. I'm aware one should not cloak or make pages appear differently to search engine crawlers. However, these requests do not change the page's behaviour at all; they purely send some anonymous data so we can improve future recommendations.
Technical SEO | rogier_slag
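For what it's worth, if the telemetry endpoint lives on its own path, the robots.txt route is a one-line change, and URLs disallowed this way are not fetched by Googlebot at all, so they stop consuming crawl budget. The path below is hypothetical, standing in for whatever the real endpoint is:

```
User-agent: *
Disallow: /api/telemetry
```

Blocking a background data endpoint like this does not change what users or crawlers see on the page, so it is generally not considered cloaking.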
Robots.txt in subfolders and hreflang issues
A client recently rolled out their UK business to the US. They decided to deploy with two WordPress installations:
UK site - https://www.clientname.com/uk/ - robots.txt location: https://www.clientname.com/uk/robots.txt
US site - https://www.clientname.com/us/ - robots.txt location: https://www.clientname.com/us/robots.txt
We've had various issues with /us/ pages being indexed in Google UK, and /uk/ pages being indexed in Google US. They have the following hreflang tags across all pages: We changed the x-default page to .com two weeks ago (we've tried both /uk/ and /us/ previously). Search Console says there are no hreflang tags at all. Additionally, we have a robots.txt file on each site which links to the corresponding sitemap files, but when viewing the robots.txt tester in Search Console, each property shows the robots.txt file for https://www.clientname.com only, even though when you actually navigate to that URL (https://www.clientname.com/robots.txt) you're redirected to either https://www.clientname.com/uk/robots.txt or https://www.clientname.com/us/robots.txt depending on your location. Any suggestions how we can remove UK listings from Google US and vice versa?
Technical SEO | lauralou82
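The hreflang tags themselves didn't survive the formatting above, but for reference, a correctly formed set for a /uk/-/us/ split with a .com x-default would look like the following (the page paths are illustrative). Every page must carry the full set, including a self-referencing entry, and each tagged URL must link back reciprocally; "no hreflang tags" in Search Console often means the tags are missing return links or are stripped at render time:

```html
<link rel="alternate" hreflang="en-gb" href="https://www.clientname.com/uk/some-page/" />
<link rel="alternate" hreflang="en-us" href="https://www.clientname.com/us/some-page/" />
<link rel="alternate" hreflang="x-default" href="https://www.clientname.com/" />
```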
Duda Mobile no_redirect=true
Hi guys, just need some clarification if that's okay. I have a client who has the DudaMobile software installed for a mobile-friendly version of the site. Now, I know that it puts on some JS to check whether the user is visiting from a desktop or a mobile and then redirects, appending ?no_redirect=true (see https://moz.com/community/q/duplicate-content-resulting-from-js-redirect). This is creating duplicate-page issues when I run a DeepCrawl of the site. I understand I can just exclude the URLs in Google's Search Console, but I wanted to double-check that this won't stop Google from indexing the mobile site. Sorry if it's a stupid question. Kind regards, Neil
Technical SEO | nezona
Robots.txt
Hi all, will a robots.txt looking like the below stop Google crawling the site?
User-agent: *
Technical SEO | internetsalesdrive
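The Disallow line of the quoted file didn't survive formatting above, and the answer hinges entirely on it. For reference, under the robots.txt standard:

```
# Blocks the whole site for every crawler:
User-agent: *
Disallow: /

# Blocks nothing (an empty Disallow value allows everything):
User-agent: *
Disallow:
```

A `User-agent: *` group on its own, with no Disallow rules at all, likewise blocks nothing.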
How is IT handling multi-page search results for this URL?
How is the IT team handling multi-page results? The URL is the same, without any parameters, but the content changes. Is this the best way to handle it from an SEO perspective?
Technical SEO | S.S.N
When testing the on-page report I'm having a few problems
First of all, is this test checking my SEO optimization over the whole website or just one page? I.e., when I type in www.joelolson.ca, is it also checking pages like www.joelolson.ca/realtorresources? Secondly, I have found that it won't find specific pages on my site and says they can't be found when they clearly exist.
Technical SEO | JoelOlson
Mobile redirection
Hi, what would be the best practice for mobile detection: best practice for redirections, and best practice for detection and inclusion of a front-end element inviting users to a mobile version of the site? I found this on www.W3C.org, but it's from 2008 and I was wondering if any of you have tried different approaches to mobile detection. Thanks! GaB
Technical SEO | Pherogab
How to recover after blocking all the search engine spiders?
I have the following problem: one of my clients (a Danish home improvement company) decided to block all international traffic (leaving only Scandinavian traffic) because they were getting a lot of spammers using their mail form to send e-mails. As you can guess, this also blocked Google, since the servers of Google Denmark are located in the US. This led to a drop in their rankings. So my question is: what shall I do now, wait or contact Google? Any help will be appreciated, because to be honest I had never seen such a thing in action until now 😄 Best regards
Technical SEO | GroupM
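As an aside, the underlying conflict here (keeping spam bots out while letting Googlebot in) is usually better solved by verifying Googlebot than by geo-blocking, since Googlebot crawls from US IPs regardless of the market it serves. Google documents a reverse-then-forward DNS check for this. A sketch in Python; the live lookups need working DNS, so the hostname-suffix test is split out into its own function:

```python
import socket

def hostname_is_google(host: str) -> bool:
    """The suffix check on its own, so it can be tested without DNS."""
    return host.endswith(".googlebot.com") or host.endswith(".google.com")

def is_verified_googlebot(ip: str) -> bool:
    """Verify a claimed Googlebot IP the way Google documents it:
    reverse-DNS the IP, check the hostname ends in googlebot.com or
    google.com, then forward-resolve that hostname back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not hostname_is_google(host):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False
```

Verified Googlebot requests can then be exempted from the geo-block (and from the mail-form spam filtering), letting the client keep the international block for everyone else.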