Content Rendering by Googlebot vs. Visitor
-
Hi Moz!
After a different question on here, I tried Fetch as Google to compare what the bot sees with what a visitor sees - mainly to check whether Google can find the written content on my page.
The two versions are quite different: Googlebot isn't rendering the product listings or the main content at all, just what looks like the info in the top navigation. I'm guessing this is a massive issue?
Help
Becky
-
Yeah, I have just seen a few ranking drops, so I'm now a little concerned.
Thanks for your advice!
-
That's great!
I regularly see category pages on ecommerce sites that don't render all of their images in Fetch and Render - I haven't been able to figure out why yet. The tool might just have a limit on the number of thumbnails it displays.
-
Thanks Logan,
I have done this and am seeing a much better result in Fetch & Render.
On one of my pages (http://www.key.co.uk/en/key/dollies-load-movers-door-skates), for example, it is not rendering all the images - only the first two. Is there anything in particular I should look at for this?
I've attached a screenshot.
Thanks for your help
-
Yes, you should allow Googlebot to crawl all style-related files, and JS as well. Google wants to be able to render a page the same way a person would see it. Part of the reason for this is determining the mobile-friendliness of a site. I would assume they also want to make general UX assessments of sites, since they're putting much more emphasis on the user journey and task completion.
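If robots.txt does turn out to be the culprit, a couple of Allow rules usually takes care of it. This is only a rough sketch - the rules below are generic, and if you already have a group for a specific user-agent you'd add the Allow lines to that group rather than creating a new one:

```
User-agent: *
# Let crawlers fetch stylesheets and scripts anywhere on the site
Allow: /*.css$
Allow: /*.js$
```

You can sanity-check the result with the robots.txt Tester in Search Console before and after the change.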
-
In Fetch and Render in Search Console, there are usually some notifications below the renderings that explain why there might be discrepancies. Your robots.txt file may be preventing Google from accessing some important CSS (or other) files that drive layout. Check there before you dig much deeper - it might be a simple robots.txt update that you need.
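If you want to check a few specific files outside of Search Console, here's a rough sketch in Python using the standard library's robots.txt parser. The asset URLs below are just placeholders - swap in the actual CSS and JS files your page loads (you can see them in the page source or the browser's network tab). Note that this parser follows the basic robots.txt rules and won't mirror Google's wildcard handling exactly:

```python
from urllib.robotparser import RobotFileParser

# Placeholder URLs for illustration -- use your own robots.txt and asset paths
robots_url = "http://www.key.co.uk/robots.txt"
assets = [
    "http://www.key.co.uk/static/css/main.css",    # example CSS path
    "http://www.key.co.uk/static/js/listings.js",  # example JS path
]

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()  # downloads and parses the live robots.txt

for url in assets:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "allowed" if allowed else "blocked for Googlebot")
```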
-
Hi Becky,
You should fix the issue in any case - whether the page is ranking or not, it's a risk.
Try to fix all the issues that Google shows you.
Regards,
Vijay
-
Hi
The weird thing is that the page I checked does rank quite well, so I'm not sure what to make of it.
-
Hi Becky,
This can be a major issue, as the Fetch as Google feature was introduced to show what Google's crawler sees on your page.
Many websites use complex JavaScript, JSON, jQuery, AngularJS, etc., and these scripts render the content of the page either late or in a different way than the crawler expects.
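A quick way to check this yourself is to look at the raw HTML the server sends back, before any JavaScript runs. The sketch below is only an example - the URL and the phrase are placeholders, so use one of your own pages and a bit of text from its product listings:

```python
import urllib.request

# Placeholder values -- use one of your own category URLs and a phrase from its listings
url = "http://www.key.co.uk/en/key/dollies-load-movers-door-skates"
phrase = "door skate"  # text you expect to see in the product listings

req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as response:
    html = response.read().decode("utf-8", errors="replace")

# If the phrase only appears after JavaScript runs in the browser,
# it will be missing from this server-rendered HTML.
if phrase.lower() in html.lower():
    print("Found in the raw HTML - content is server-rendered")
else:
    print("Not in the raw HTML - likely injected by JavaScript")
```

If the text shows up in your browser but not in the raw response, the listings are being built client-side, which is exactly the kind of thing that can trip up the crawler.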
Work with your developer to get it fixed - I have seen many beautiful websites fail to rank because of this error.
I hope this helps, feel free to ask further questions.
Regards,
Vijay