Search Console - Mobile Usability Errors
-
A site I'm looking at for a client had hundreds of pages flagged with Mobile Usability errors in Search Console.
I found that the theme appends version-string parameters to the URLs of some of its resources (.js/.css).
These were then being blocked by a rule in the robots.txt: "Disallow: /*?"
I've removed this rule, and when I inspect URLs and test the live versions of the pages, they are now reported as mobile friendly.
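For context, here is a minimal sketch of the pattern involved; the theme paths and version numbers below are made up for illustration, not taken from the client's actual site:

```
# Before (problematic): "/*?" matches any URL containing a query string,
# so it also blocked versioned theme assets such as:
#   /wp-content/themes/example-theme/css/main.css?ver=1.2.3
#   /wp-content/themes/example-theme/js/main.js?ver=1.2.3
User-agent: *
Disallow: /*?

# After: the blanket rule is simply removed so Googlebot can fetch the
# CSS/JS it needs to render pages. If the rule has to stay for other
# parameterised URLs, one alternative is to add explicit Allow lines,
# since Google applies the longest (most specific) matching rule:
User-agent: *
Allow: /*.css
Allow: /*.js
Disallow: /*?
```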
I then submitted validation requests in Search Console for both errors ("Text too small to read" and "Clickable elements too close together").
My problem now is that the validation has completed and the pages are still reported as having the errors.
I've double-checked and they're fine if I inspect them individually.
Does anyone else have experience clearing these issues in Search Console? Any ideas what's going on here?
-
Just to follow this up: we're now seeing the mobile usability errors gradually being cleared from pages at approximately 100 pages per day.
It seems the whole validation request process didn't actually do anything; we just had to wait for the site to be recrawled.
-
Thanks for your response, Daniel. The steps you outlined are exactly what I have done, which is why I was surprised that the pages still came back with errors after requesting "Validate Fix"!
I've submitted another "Validate Fix" request, so I'll see what happens...
-
I've had the same issue when one of my clients disallowed a directory that contained the CSS and some other scripts.
First, make absolutely sure you have removed every conflicting line from the robots.txt file. Then go to Google's robots.txt testing tool and check whether it has picked up the updated file; if not, request an update, which is usually processed within 30 minutes. After that, run the Google Mobile-Friendly Test and see whether any issues remain.
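If it helps, one quick way to sanity-check individual asset URLs against the live robots.txt (outside Google's own tester) is a small script. This is only a sketch with made-up URLs; it uses the Protego parser because Python's built-in urllib.robotparser does not understand Google's wildcard syntax such as "/*?":

```python
# pip install protego requests
import requests
from protego import Protego

SITE = "https://www.example.com"  # hypothetical domain
ASSETS = [                        # hypothetical versioned theme assets
    SITE + "/wp-content/themes/example-theme/css/main.css?ver=1.2.3",
    SITE + "/wp-content/themes/example-theme/js/main.js?ver=1.2.3",
]

# Fetch and parse the live robots.txt, then check each asset as Googlebot.
robots_txt = requests.get(SITE + "/robots.txt", timeout=10).text
parser = Protego.parse(robots_txt)

for url in ASSETS:
    status = "ALLOWED" if parser.can_fetch(url, "Googlebot") else "BLOCKED"
    print(status, url)
```

If any render-critical asset still comes back as BLOCKED, the robots.txt change hasn't actually taken effect, or another rule is still matching it.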
If both tests come back clean, request "Validate Fix" in Search Console. After that, it usually takes up to a week for the errors to be cleared from the affected pages.
If it's a small number of URLs, you can also use "Test Live URL" within Search Console and then "Request Indexing". As mentioned, if everything goes right, it should take up to one week for the errors to be removed.
Daniel Rika - Dalerio Consulting
https://dalerioconsulting.com/
info@dalerioconsulting.com