Is my content being fully read by Google?
-
Hi mozzers,
I wanted to ask you a quick question regarding Google's ability to crawl our pages. We just launched a series of content pieces, but I believe there's an issue.
Based on what I see when I inspect the URL, it looks like Google is only able to see a few titles and internal links. For instance, when I inspect one of the URLs in GSC, this is the screenshot I get. When I perform a "cache:" search I barely see any content, versus one of our blog posts. Would you agree with me that there's a problem here?
- Is this related to the heavy use of JS? If so, why wasn't I able to detect this with any of the crawling tools?
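One quick way to test the JS theory (a rough sketch; the URL and text snippet are placeholders for your own page and copy) is to fetch the page without executing any JavaScript and check whether the main copy appears in the raw HTML the server returns, which is roughly what a non-rendering crawler sees:

```python
from urllib.request import Request, urlopen

def content_in_raw_html(html: str, snippet: str) -> bool:
    """Return True if the snippet appears in the server-rendered HTML.

    Whitespace and case are normalized so minor formatting differences
    don't cause false negatives.
    """
    normalize = lambda s: " ".join(s.split()).lower()
    return normalize(snippet) in normalize(html)

def fetch_raw_html(url: str) -> str:
    """Fetch a page without executing JavaScript (plain HTTP GET)."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0 (render-check)"})
    with urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Placeholders: your article URL and a sentence from its body copy.
    html = fetch_raw_html("https://example.com/new-content-piece")
    print(content_in_raw_html(html, "a sentence from the article body"))
```

If this prints False for copy you can see in the browser, the content is almost certainly being injected client-side by JavaScript, and non-rendering crawls will miss it.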
Thanks!
-
Thanks for your help!
I also did a site: search for the page content of several different pages, and our pages do show up in the SERP, so maybe there's nothing wrong since these pages are indexed. I guess we need to wait, as you said.
-
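For something firmer than eyeballing site: results, the Search Console URL Inspection API can report the index state of a specific URL. A minimal sketch, assuming you have an OAuth token with Search Console scope; `page_url` and `property_url` are placeholders for your own values:

```python
import json
from urllib.request import Request, urlopen

INSPECT_ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def coverage_state(inspection_response: dict) -> str:
    """Pull the human-readable coverage state out of an inspection response."""
    return (inspection_response.get("inspectionResult", {})
                               .get("indexStatusResult", {})
                               .get("coverageState", "unknown"))

def inspect_url(page_url: str, property_url: str, oauth_token: str) -> dict:
    """Ask Search Console how Google currently sees a URL."""
    body = json.dumps({"inspectionUrl": page_url,
                       "siteUrl": property_url}).encode("utf-8")
    req = Request(INSPECT_ENDPOINT, data=body, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {oauth_token}",
    })
    with urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

A state like "Submitted and indexed" confirms what the site: search suggested; "Crawled - currently not indexed" would point back at a content or rendering problem.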
A few errors is always within the margin of what Google encounters. I've seen completely broken pages that wouldn't even render properly anymore, yet still ranked. I'm not sure; I was just guessing, but perhaps you need to give it a bit of time. I don't know what is going on, really. New domains tend to be dampened for a while so that Google can keep spammy domains out.
-
There are just six errors, but one of them is a fatal error. See screenshot.
-
Does your page validate against W3C standards? https://validator.w3.org/
If the markup is a spaghetti of errors, Googlebot might have trouble parsing it.
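If you'd rather check pages in bulk than paste them in one at a time, a rough sketch against the Nu checker's JSON interface (assuming its public `?out=json` endpoint, which accepts HTML in the POST body):

```python
import json
from urllib.request import Request, urlopen

VALIDATOR = "https://validator.w3.org/nu/?out=json"

def count_errors(report: dict) -> int:
    """Count messages of type 'error' in a Nu validator JSON report."""
    return sum(1 for m in report.get("messages", [])
               if m.get("type") == "error")

def validate_html(html_bytes: bytes) -> dict:
    """POST raw HTML to the Nu checker and return its parsed JSON report."""
    req = Request(VALIDATOR, data=html_bytes, headers={
        "Content-Type": "text/html; charset=utf-8",
        "User-Agent": "markup-check/0.1",
    })
    with urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

A handful of errors is normal; what you're looking for is the kind of unclosed-tag spaghetti that could confuse a parser.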
-
I see the most recent page, but only some headings, breadcrumb links, and some images, with none of the body content.
-
When you put the full URL in search and hit the cache button, what does it show: a 1:1 copy of the current page, or an outdated version?
-
Hi Jeroen,
It's been over two weeks since we published those pages.
-
If it's only been a day, I wouldn't worry about it. If the problem is still there after a week, let us know.