No content visible using Fetch and Render
-
Wooah, this one makes me feel a bit nervous.
The cached version of the site homepage shows all the text, but I understand that is the HTML as constructed by the browser, so I get that.
If I Google some of the content, it is there in the index, and the cached version is from yesterday.
If I Fetch and Render in GWT, none of the content is available in the preview, in either the Googlebot or the visitor view. The whole preview is just the menu, a holding image for a video and a tagline for it. There are no blocked resources reported apart from a Wistia URL. How can I work out what is blocking Google if it does not report any problems?
The CSS is visible in the source for reference too, for example:
<section class="text-within-lines big-text narrow">
... class="data"> some content...
Ranking is a real issue, caused in part by a poorly functioning main menu, but I'm really concerned about what is happening with the render.
-
I got there in the end. They have a Wistia video loading on the homepage, but Wistia's robots.txt blocks this resource. When the resource is blocked, the CSS loads a holding image. However, this image is configured to fill the whole page, so when Googlebot crawls the page it cannot render anything beyond this image, or rather the area defined in the CSS. Dev is fixing it.
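For anyone hitting something similar, here is a minimal sketch of the kind of CSS that can cause this. It is a hypothetical reconstruction, not the site's actual code; the class names and paths are made up:

<!-- Fallback shown while (or instead of) the Wistia embed loading -->
<div class="video-fallback">
  <img src="/images/video-holding.jpg" alt="Video loading">
</div>

<style>
  /* The holding layer is stretched to cover the full viewport, so a renderer
     that never swaps it out for the blocked video sees nothing beyond it. */
  .video-fallback {
    position: fixed;
    top: 0;
    left: 0;
    width: 100vw;
    height: 100vh;
    z-index: 9999; /* sits above the rest of the page content */
  }
  .video-fallback img {
    width: 100%;
    height: 100%;
    object-fit: cover;
  }
</style>

If the script that replaces this layer with the player is blocked (as the Wistia URL was here), the render preview stops at the image, which matches what Fetch and Render showed.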
-
Hi Michael,
We have faced similar issues; the main reason in our case was dynamic content being generated by jQuery, Ajax, etc. You will need a good developer to help you through it if this is the case.
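To illustrate the jQuery/Ajax point with a minimal, hypothetical example (the endpoint and IDs are made up): the HTML Google fetches is nearly empty, and the visible text only arrives after an Ajax call runs.

<!-- The initial HTML contains no article text at all -->
<div id="content"></div>

<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
  // The real content is fetched after load; if the renderer cannot run this
  // script, or the endpoint is blocked by robots.txt, the page stays empty.
  $(function () {
    $.get('/api/homepage-content', function (html) {
      $('#content').html(html);
    });
  });
</script>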
Also, we consider Google's Fetch and Render the best reference for how Googlebot will see us.
I hope this helps.
Regards,
Vijay
Related Questions
-
Scraped content ranking above the original source content in Google.
I need insights on how "scraped" content (an exact copy-pasted version) can rank above the original content in Google. Four original, in-depth articles published by my client (an online publisher) were republished by another company (which happens to be briefly mentioned in all four of those articles). We reckon the articles were republished at least a day or two after the originals were published (the exact gap is not known).

All four of the copied articles rank at the top of Google search results, whereas the original content on my client's website does not show up even in the top 50 or 60 results. We have looked at numerous factors such as Domain Authority, Page Authority, inbound links to both the original source and the URLs of the copied pages, social metrics, etc. All of the metrics, as shown by tools like Moz, are better for the source website than for the republisher.

We have also compared results in different geographies to see if any geographical bias was affecting results (our client's website is hosted in the UK and the republisher is from another country), but we found the same results. We are also not aware of any manual actions taken against our client's website (at least based on messages in Search Console).

Are there any other factors that can explain this serious anomaly, which seems to be a disincentive for somebody creating highly relevant original content? We recognize that our client has the option to submit a 'Scraper Content' report to Google, but we are less keen to go down that route and more keen to understand why this problem could arise in the first place. Please suggest.
Intermediate & Advanced SEO | ontarget-media
-
Is hidden content bad for SEO?
I am using this plugin to enable Facebook comments on my blog: https://wordpress.org/plugins/fatpanda-facebook-comments/
This shows the comments in a Facebook iFrame. The plugin author claims it's SEO friendly because the comments are also integrated into the WordPress database. They are included in the post but hidden. Is that bad for SEO?
Intermediate & Advanced SEO | soralsokal
-
Noindexing Duplicate (non-unique) Content
When "noindex" is added to a page, does this ensure Google does not count page as part of their analysis of unique vs duplicate content ratio on a website? Example: I have a real estate business and I have noindex on MLS pages. However, is there a chance that even though Google does not index these pages, Google will still see those pages and think "ah, these are duplicate MLS pages, we are going to let those pages drag down value of entire site and lower ranking of even the unique pages". I like to just use "noindex, follow" on those MLS pages, but would it be safer to add pages to robots.txt as well and that should - in theory - increase likelihood Google will not see such MLS pages as duplicate content on my website? On another note: I had these MLS pages indexed and 3-4 weeks ago added "noindex, follow". However, still all indexed and no signs Google is noindexing yet.....
Intermediate & Advanced SEO | khi5
-
Merge content pages together to get one deep, high-quality content page - good or not?
Hi, I manage the SEO of a brand poker website that provides ongoing, very good content around specific poker tournaments, but all this content is split into dozens of pages in different sections of the website (blog section, news section, tournament section, promotion section). It seems that today, having one deep piece of content on one page has a better chance of getting mentions / social signals / links, and therefore higher authority / ranking / traffic, than if this content were split into dozens of pages. But the poker website I work for, and also many other websites, naturally generate good content targeting long-tail keywords around a specific topic in different sections of the website on an ongoing basis. Do we need to merge those content pages into one page once in a while? If yes, what technical implementation would you advise? (Copy and readjust/restructure all content into one page, then 301 the old URLs to the new one.) Thanks Jeremy
Intermediate & Advanced SEO | Tit
-
2 URLs pointing to the same content
Hi, we currently have two URLs pointing to the same website (long story why we have it): A and B. A is our main website, but we set up B as a rewrite URL to use for our pay-per-click campaign. Now, because it's the same site and B is just a URL rewrite, Google Webmaster Tools is seeing thousands of links coming in from site B to site A. I want to tell Google to ignore the site B URL, but I'm worried it might affect site A. I can't add nofollow links on site B, as it's the same content, so they would also apply to site A. I'm also worried about using Google Disavow, as it might impact site A! Can anyone make any suggestions on what to do? I would like to hear from anyone with experience of this or who can recommend a safe option. Thanks for your time!
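One commonly suggested option for this kind of setup is a canonical tag on every page of site B pointing at the equivalent page on site A, so Google consolidates the two. A minimal sketch with placeholder domains:

<!-- On each page served under site B, pointing at the same page on site A -->
<link rel="canonical" href="https://www.site-a.example/current-page/">

Since B is a rewrite of the same site, the tag would appear on A too, where it would simply be self-referencing, which is harmless.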
Intermediate & Advanced SEO | Party_Experts
-
Magento Duplicate Content Recovery
Hi, we switched platforms to Magento last year. Since then our SERP rankings have declined considerably (with no sudden drop on any Panda/Penguin date lines). After investigating, it appeared we had neglected to noindex, follow all our filter pages, and our total indexed pages rose sevenfold in a matter of weeks. We have since fixed the noindex issue, and the number of pages indexed is now below what we had before the switch to Magento. We've seen some positive results in the last week. Any ideas when/if our rankings will return? Thanks!
Intermediate & Advanced SEO | Jonnygeeuk
-
How should I exclude content?
I have category pages on an e-commerce site that are showing up as duplicate pages. At the top of each page are Register and Login links, and when selected they come up as category/login and category/register. I have three options to attempt to fix this and was wondering which you think is best (a sketch of option 2 follows below):
1. Use robots.txt to exclude them. There are hundreds of categories, so it could become large.
2. Use canonical tags.
3. Force Login and Register to go to their own pages.
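As a hedged sketch of option 2 (URLs hypothetical): each login/register variant would carry a canonical pointing back at the parent category, so the duplicates consolidate without needing a robots.txt entry per category.

<!-- On /category/widgets/login and /category/widgets/register -->
<link rel="canonical" href="https://www.example.com/category/widgets/">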
Intermediate & Advanced SEO | EcommerceSite
-
What is the best way to allow content to be used on other sites for syndication without risking duplicate content filters?
Cookstr appears to be syndicating content to shape.com and mensfitness.com:
a) They integrate their data into partner sites with an attribution back to their site, skinned with the partner's look.
b) They link the image back to the image hosted on cookstr.
c) The syndicated page does not have microformats or as much data as their own page does, so their own page is better for SEO.
Is this the best strategy, or is there something better they could be doing to safely allow others to use the content? We don't want to share the content if we're going to get hit by a duplicate content filter or have another site outrank us with our own data. Thanks for your help in advance!
Their original content page: http://www.cookstr.com/recipes/sauteacuteed-escarole-with-pancetta
Their syndicated content pages:
http://www.shape.com/healthy-eating/healthy-recipes/recipe/sauteacuteed-escarole-with-pancetta
http://www.mensfitness.com/nutrition/healthy-recipes/recipe/sauteacuteed-escarole-with-pancetta
Intermediate & Advanced SEO | irvingw
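For what it's worth, the standard mechanism for exactly this case is a cross-domain rel=canonical on each syndicated copy, pointing back at the original, assuming the partners are willing to add it. A sketch using the URLs from the question:

<!-- In the <head> of the shape.com and mensfitness.com copies -->
<link rel="canonical" href="http://www.cookstr.com/recipes/sauteacuteed-escarole-with-pancetta">

With that in place, Google should consolidate ranking signals to the cookstr page, and the partner pages can still keep the content for their users.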