Should I fetch in WMT with all 4 options?
-
When we ask Google to fetch a page, I usually just do the desktop one. However, should I be using the other three options as well: Mobile Smartphone, Mobile xHTML, and Mobile cHTML? I guess that since they give you the options, doing only desktop means the page won't be fetched as mobile until a regular crawl, but I just want to make sure that's the case.
Thanks,
Ruben
-
Each of the fetch options in Google Search Console has a different purpose:
Desktop - The default option; it uses Google's standard desktop crawler, which can fetch webpages, images, videos, and so on.
Mobile Smartphone - Uses Google's smartphone crawler.
Mobile xHTML - Uses the SAMSUNG XHTML/WML crawler; this option does not support rendering.
Mobile cHTML - Mostly for Japanese feature phones; uses the DoCoMo Google Mobile crawler, and also does not support rendering.
Source here
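Roughly speaking, the four fetch modes amount to requesting your pages with different Googlebot user-agent strings. As a sketch, assuming user-agent strings taken from Google's historical crawler documentation (Google may change them at any time, so treat the exact strings as illustrative):

```python
# Illustrative user-agent strings for the four "Fetch as Google" modes.
# These are assumptions based on Google's older crawler documentation,
# not guaranteed to match the current crawlers exactly.
FETCH_AGENTS = {
    "desktop": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "mobile_smartphone": (
        "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile "
        "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    ),
    "mobile_xhtml": (
        "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 "
        "UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 "
        "(compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    ),
    "mobile_chtml": (
        "DoCoMo/2.0 N905i(c100;TB;W24H16) "
        "(compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    ),
}

def headers_for(mode):
    """Build request headers that mimic the chosen fetch mode."""
    return {"User-Agent": FETCH_AGENTS[mode]}
```

You could pass `headers_for("mobile_smartphone")` to any HTTP client to preview roughly what that crawler would be served.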
-
The main use case is when you have a website that serves different versions of the site based on the user agent. In that case you want to see what each Googlebot could retrieve from your site. It can also help you debug mobile redirects that are based on user agents.
The one case where it can't help you is a responsive site, because there the bots all see the same single version of the HTML.
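A minimal sketch of the user-agent-based serving described above (the detection rule here is a deliberately simplistic assumption; real sites use far more robust device detection):

```python
def select_variant(user_agent):
    """Crude user-agent sniffing: serve a separate mobile version to
    smartphone/feature-phone agents, and the desktop version to everyone
    else. A responsive site would skip this entirely and return the same
    HTML document regardless of user agent."""
    ua = user_agent.lower()
    if "mobile" in ua or "docomo" in ua or "samsung" in ua:
        return "mobile"
    return "desktop"
```

This is exactly why fetching with each option matters on such a site: each crawler's user agent can land in a different branch and receive different HTML.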
Related Questions
-
Is Tag Manager a good option to insert text in websites?
When a website doesn't have an administration panel, adding text is a very big problem. Is Tag Manager a good option for inserting it?
-
Crawl Issues / Partial Fetch Via Google
We recently launched a new site that doesn't have any ads, but in Webmaster Tools, under "Fetch as Google", the rendering of the page shows: "Googlebot couldn't get all resources for this page. Here's a list:"

https://static.doubleclick.net/instream/ad_status.js - Script - Blocked by robots.txt - Severity: Low
https://googleads.g.doubleclick.net/pagead/id - AJAX - Blocked by robots.txt - Severity: Low

Not sure where that would be coming from, as we don't have any ads running on our site. Also, it's stating that the fetch is a "partial" fetch. Any insight is appreciated.
-
Fetching & Rendering a non ranking page in GWT to look for issues
Hi, I have a client's nicely optimised webpage that isn't ranking for its target keyword, so I did a Fetch & Render in GWT to look for problems. I could only get a partial fetch, with the robots.txt-related messages below: "Googlebot couldn't get all resources for this page." Some boilerplate JS plugins were not found, and some JS comment-reply scripts were blocked by robots.txt (file below):

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

As far as I understand it, the above is how it should be, but I'm posting here to ask if anyone can confirm whether this could be causing any problems, so I can rule it out or not. Pages targeting other, more competitive keywords are ranking well and are almost identically optimised, so I can't think why this one isn't ranking. Does Fetch and Render get Google to re-crawl the page? If I do this and then press Submit to Index, should I know within a few days whether there's still a problem? All Best, Dan
-
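One way to sanity-check whether those Disallow rules are what blocked the scripts is Python's stdlib robots.txt parser. A sketch, using the rules quoted above (the asset path is a hypothetical example of a typical WordPress script location):

```python
from urllib.robotparser import RobotFileParser

# Mirror the robots.txt rules from the question.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
])

# A script under /wp-includes/ is blocked for Googlebot, which is
# exactly what produces a "partial" result in Fetch and Render.
print(rp.can_fetch("Googlebot", "http://example.com/wp-includes/js/comment-reply.js"))  # False
# Ordinary pages remain fetchable.
print(rp.can_fetch("Googlebot", "http://example.com/my-page/"))  # True
```

So the rules are doing what they say; whether blocking those scripts hurts rendering enough to matter is the separate question being asked here.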
404 Errors in WMT
Currently my website has about 10,000 404 errors, because WordPress is adding /feed/ to the end of every URL on my website. Should I block /feed/ in robots.txt?
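Before adding the rule, it's worth checking what a literal `Disallow: /feed/` line would and wouldn't block, since robots.txt rules are path-prefix matches. A sketch using the stdlib parser (note that Googlebot itself also honours wildcard patterns like `Disallow: /*/feed/`, but Python's stdlib parser does not, so this only demonstrates plain prefix behaviour):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt with the rule being considered.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /feed/",
])

# Prefix matching: the site-wide feed is blocked...
print(rp.can_fetch("Googlebot", "http://example.com/feed/"))          # False
# ...but per-post feeds are not, because their paths don't start with /feed/.
print(rp.can_fetch("Googlebot", "http://example.com/my-post/feed/"))  # True
```

So the WordPress-style `/some-post/feed/` URLs causing the 404s would need a wildcard rule to be covered.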
-
Google Fetch and Render - does this fix penalties?
Ran the fetch and render and came up with two "issues". My specific question: how much would a link to Quantcast (which blocks access via robots.txt) really hurt us, given that Fetch and Render lists it as a blocked resource, even though it is not actually preventing the page from rendering? Thoughts and comments are much appreciated.
-
Salvaging links from WMT “Crawl Errors” list?
When someone links to your website, but makes a typo while doing it, those broken inbound links will show up in Google Webmaster Tools in the Crawl Errors section as "Not Found". Often they are easy to salvage by just adding a 301 redirect in the htaccess file. But sometimes the typo is really weird, or the link source looks a little scary, and that's what I need your help with. First, let's look at the weird typo problem. If it is something easy, like they just lost the last part of the URL (such as www.mydomain.com/pagenam), then I fix it in htaccess this way:

RewriteCond %{HTTP_HOST} ^mydomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteRule ^pagenam$ "http://www.mydomain.com/pagename.html" [R=301,L]

But what about when the last part of the URL is really screwed up, especially with non-text characters, like these?

www.mydomain.com/pagename1.htmlsale
www.mydomain.com/pagename2.htmlhttp://
www.mydomain.com/pagename3.html"
www.mydomain.com/pagename4.html/

How is the htaccess RewriteRule typed up to send these oddballs to the individual pages they were supposed to go to, without the typo? Second, is there a quick and easy method or tool to tell us if a linking domain is good or spammy? I have incoming broken links from sites like these:

www.webutation.net
titlesaurus.com
www.webstatsdomain.com
www.ericksontribune.com
www.addondashboard.com
search.wiki.gov.cn
www.mixeet.com
dinasdesignsgraphics.com

Your help is greatly appreciated. Thanks! Greg
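The junk-suffix pattern in those URLs is regular enough to normalize programmatically. A sketch of the matching logic (the regex and the page list are illustrative assumptions, not a drop-in fix):

```python
import re

# Hypothetical whitelist of real pages on the site.
KNOWN_PAGES = {"pagename1.html", "pagename2.html",
               "pagename3.html", "pagename4.html"}

def salvage(path):
    """Map a typo'd request path like 'pagename1.htmlsale' back to the
    real page by trimming everything after the first '.html', and only
    redirecting when the trimmed result is a page we actually have."""
    m = re.match(r"^(.+?\.html)", path)
    if m and m.group(1) in KNOWN_PAGES:
        return m.group(1)   # use as the 301 target
    return None             # genuinely broken; let it 404

print(salvage("pagename1.htmlsale"))   # pagename1.html
print(salvage('pagename3.html"'))      # pagename3.html
print(salvage("totally-bogus"))        # None
```

The mod_rewrite equivalent of the same trim would be something along the lines of `RewriteRule ^(.+?\.html).+$ /$1 [R=301,L]`, though that version redirects without checking that the target page exists.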
-
Deleting Subdomain - 301 to Homepage Best Option?
We have a subdomain with lots of content that we think Google may consider thin, so we're thinking of removing it to improve our SEO. Is the best way to do this simply to remove the subdomain and then 301 everything to our homepage? Basically, the subdomain consists of product images with links to retailers where they can be purchased. We've mainly used it as a destination for our Facebook fans when they like a product they've seen on our Facebook page, so the subdomain was not meant to rank for SEO purposes. However, it is integrated with our main website, and it is possible that it is hurting our SEO efforts. The subdomain is photos.yournextshoes.com and the main domain is www.yournextshoes.com.
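The redirect logic itself is simple to sketch (hostnames taken from the question; the function shape is purely illustrative). One caveat worth weighing: Google has said it often treats mass 301s from unrelated pages to a homepage as soft 404s, so the net effect can be similar to letting the pages 404:

```python
from urllib.parse import urlsplit

MAIN_HOME = "https://www.yournextshoes.com/"

def redirect_for(url):
    """Return (status, target) for requests to the retired photos.
    subdomain; everything else passes through untouched."""
    parts = urlsplit(url)
    if parts.hostname == "photos.yournextshoes.com":
        return (301, MAIN_HOME)
    return (None, url)

print(redirect_for("https://photos.yournextshoes.com/red-pumps"))
# (301, 'https://www.yournextshoes.com/')
```

Redirecting each image page to its most relevant page on the main site, where one exists, would avoid the soft-404 treatment.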
-
Google has not indexed my site in over 4 weeks, what's the problem?
We recently put in permanent redirects to our new URL, but Google seems not to want to index it. There were no problems with the old URL, and the new URL is brand new, so it should have no 'black marks' against it. We have done everything we can think of: submitting sitemaps, telling Google our URL has changed in Webmaster Tools, mentioning the new URL on social sites, etc., but still nothing. It has been over 4 weeks now since we set up the redirects to the URL; any ideas why Google seems to be choosing not to index it? Thanks