Content Rendering by Googlebot vs. Visitor
-
Hi Moz!
After a different question on here, I tried Fetch as Google to see the difference between what the bot and a user see - basically to check whether Google finds the written content on my page.
The two versions are quite different - Googlebot isn't rendering the product listings or content at all, it just seems to get the info in the top navigation. I'm guessing this is a massive issue?
Help
Becky
-
Yeah, I have just seen a few ranking drops, so I'm now a little concerned.
Thanks for your advice!
-
That's great!
I regularly see category pages on ecommerce sites that don't render all of their images in Fetch and Render - I haven't been able to figure out why yet. Google might just have a limit on the number of thumbnails it displays in the tool.
-
Thanks Logan,
I have done this and am seeing a much better result in Fetch and Render.
On one of my pages (http://www.key.co.uk/en/key/dollies-load-movers-door-skates), for example, it is not rendering all the images - only the first two. Is there anything in particular I should look at for this?
I've attached a screenshot.
Thanks for your help
-
Yes, you should allow Googlebot to crawl all style-related files, and JS as well. Google wants to be able to render a page the same way a person would see it. Part of the reason for this is determining the mobile-friendliness of a site. I would assume they also want to make general UX assessments of sites, since they're putting much more emphasis on the user journey and task completion.
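As a rough sketch (the commented-out Disallow line is just a placeholder for whatever your own file might block - the real paths depend on how your site organises its assets), a robots.txt that lets Googlebot reach styling and scripts could look like this:

    User-agent: Googlebot
    # Let Googlebot fetch the resources it needs to render the page properly
    Allow: /*.css$
    Allow: /*.js$
    # A broad rule like the one below is the usual culprit when renders come back bare
    # Disallow: /assets/

Google's robots.txt Tester in Search Console will show you how Googlebot interprets the file before you push any change live.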
-
In Fetch and Render in Search Console, there are usually some notifications below the renderings that explain why there might be discrepancies. Your robots.txt file may be preventing Google from accessing some important CSS (or other) files that drive layout. Check there before you dig much deeper - it might be a simple robots.txt update that you need.
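If you want to check specific CSS or JS URLs without clicking through the tool each time, here is a rough sketch using Python's standard-library robots.txt parser (the asset URLs are made up - swap in the ones your pages actually load). Note that this parser doesn't understand Google's wildcard extensions, so treat the result as a first pass rather than the final word:

    import urllib.robotparser

    # Point the parser at the live robots.txt
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://www.key.co.uk/robots.txt")
    rp.read()

    # Hypothetical asset URLs - replace with the CSS/JS files your pages really reference
    assets = [
        "http://www.key.co.uk/css/main.css",
        "http://www.key.co.uk/js/app.js",
    ]

    for url in assets:
        allowed = rp.can_fetch("Googlebot", url)
        print(url, "->", "allowed" if allowed else "BLOCKED for Googlebot")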
-
Hi Becky,
You should fix the issue in any case; whether the page is ranking or not, it's a risk.
Try to fix all the issues that Google shows you.
Regards,
Vijay
-
Hi
The weird thing is that the page I checked does rank quite well, so I'm not sure what to make of it.
-
Hi Becky,
This can be a major issue, as the Fetch as Google feature was introduced to show what Google's crawler sees on your page.
Websites often use complex JavaScript - JSON, jQuery, Angular.js and so on - and these scripts render the content of the page either late or in a different way than the crawler expects.
Work with your developer and get it fixed; I have seen many beautiful websites not ranking because of this error.
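As a rough first check (a sketch only - the URL and phrase below are examples, so use your own category page and a phrase you can see in its product listings), you can test whether that content exists in the raw HTML before any JavaScript runs. If a phrase from the listings never appears in the raw source, it is being injected client-side and may be exactly what Googlebot fails to see:

    import urllib.request

    # Example values - use your own category URL and a real product phrase from the page
    url = "http://www.key.co.uk/en/key/dollies-load-movers-door-skates"
    phrase = "door skate"  # hypothetical phrase taken from the page/URL

    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", errors="ignore")

    # If the phrase is missing here, it only appears after JavaScript has run
    print("Found in raw HTML" if phrase.lower() in html.lower() else "NOT in raw HTML - rendered client-side")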
I hope this helps; feel free to ask further questions.
Regards,
Vijay