Content Rendering by Googlebot vs. Visitor
-
Hi Moz!
After a different question on here, I tried Fetch as Google to see the difference between what the bot and a user see - i.e. whether Google finds the written content on my page.
The two versions are quite different: Googlebot isn't rendering the product listings or content at all, just what seems to be the info in the top navigation. I'm guessing this is a massive issue?
Help
Becky
-
Yeah, I have just seen a few ranking drops, so I'm now a little concerned.
Thanks for your advice!
-
That's great!
I regularly see category pages on ecommerce sites not render all their images in Fetch and Render - I haven't been able to figure out why yet. The tool might just have a limit on the number of thumbnails it displays.
-
Thanks Logan,
I have done this and am seeing a much better result in Fetch and Render.
On one of my pages (http://www.key.co.uk/en/key/dollies-load-movers-door-skates), for example, it is not rendering all the images, only the first two - is there anything in particular I should look at for this?
I've attached a screenshot.
Thanks for your help
-
Yes, you should allow Googlebot to crawl all style-related files, and JS as well. They want to be able to render a page the same way a person would see it. Part of the reason for this is determining the mobile-friendliness of a site. I would assume they also want to be able to make general UX assessments of sites, since they're putting much more emphasis on the user journey and task completion.
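For reference, a minimal robots.txt sketch that explicitly lets Googlebot fetch stylesheets and scripts (the asset path in the comment is hypothetical - check your own file for Disallow rules covering the directories your CSS/JS actually lives in):

```
User-agent: Googlebot
# Let Googlebot fetch layout-critical assets anywhere on the site
Allow: /*.css$
Allow: /*.js$

# If a rule like the one below exists and your CSS/JS lives there,
# it will block rendering - remove or narrow it:
# Disallow: /assets/
```

Google supports the `*` and `$` wildcards shown here, so these two Allow lines cover stylesheets and scripts site-wide.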
-
In Fetch and Render in Search Console, there are usually some notifications below the renderings that explain why there might be discrepancies. Your robots.txt file may be preventing Google from accessing some important CSS (or other) files that drive layout. Check there before you dig much deeper - it might be a simple robots.txt update that you need.
-
Hi Becky,
You should fix the issue in any case; whether the page is ranking or not, it's a risk.
Try to fix all the issues that Google shows you.
Regards,
Vijay
-
Hi
The weird thing is that the page I checked does rank quite well - so I'm not sure what to make of it.
-
Hi Becky,
This can be a major issue, as the Fetch as Google feature was introduced to show what the Google crawler sees on your page.
Many websites use complex JavaScript - JSON, jQuery, AngularJS, etc. - and these scripts render the content of the page either late or in a different way than the crawler expects.
Work with your developer and get it fixed; I have seen many beautiful websites not ranking due to this issue.
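To illustrate the kind of problem described above (hypothetical markup, not taken from Becky's site): content that is injected by a script only exists after the script runs, so a crawler that fails to execute or render the JavaScript sees an empty container.

```html
<!-- What the crawler fetches over HTTP: an empty container -->
<div id="product-list"></div>

<script>
  // The listings only exist after this runs in a browser;
  // if rendering fails (e.g. the script is blocked by robots.txt
  // or throws an error), the crawler never sees them.
  document.getElementById('product-list').innerHTML =
    '<ul><li>Sack Truck</li><li>Door Skate</li></ul>';
</script>
```

This is why comparing the two Fetch and Render views (Googlebot vs. visitor) is the right diagnostic: any content present in one but not the other is being added by scripts the bot couldn't run.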
I hope this helps, feel free to ask further questions.
Regards,
Vijay