Site Audit Tools Not Picking Up Content Nor Does Google Cache
-
Hi Guys,
Got a site I am working with on the Wix platform. However, site audit tools such as Screaming Frog, Ryte and even Moz's on-page crawler show the pages as having no content, despite them having 200+ words. Fetching the site as Google clearly shows the rendered page with content; however, when I look at the Google cached pages, they also show just blank pages.
I have had issues with nofollow and noindex on this site before, but this time the meta tags show as correct, just zero content.
What would you look at to diagnose this? I am guessing some rogue JS, but then why wasn't it picked up by "fetch as Google"?
-
Hi Team,
I am facing a problem with one of my websites where Google is caching the page (when checked using the cache: operator) but displaying a 404 message in the body of the cached version.
But when I check the same page in the 'text-only version', the complete content and elements are visible to Google; GSC also shows the page with no issues, and rendering is fine.
The canonicals and robots are properly set, with no issues on them.
I'm not able to figure out what the problem is. Expert advice would help! Regards,
Ryan -
Hey Neil
Wow, we are really chuffed here at Effect Digital! I guess... we have a lot of combined experience - and we also try to give something back to the community (as well as making profit, obviously)
We didn't actually know how many people used the Moz Q&A forum until recently. It seemed like a good hub to demonstrate that not all agency accounts have to exist to give shallow one-liner replies from a position of complete ignorance (usually just so they can link-spam the comments). Groups of people **can** be insightful and 'to the point'.
Again we're just really thrilled that you found our analysis to be useful. It also shows what goes into what we do. Most of the responses on here which are under-detailed have the potential to lead people down rabbit holes. Sometimes you just have to get into the thick of it right?
I think our email address is publicly listed on our profile page. Feel free to hit us up
-
My Friend,
That is some analysis you have done there, and I am eternally grateful!! It's people like you, who are clearly so passionate about SEO, that make our industry amazing!!
I am going to private message you a longer reply later, but I just wanted to publicly say thank you!!
Regards
Neil
-
Ok let's have a look here.
So this is the URL of the page you want me to look at:
I can immediately tell you that, from my end, it doesn't look like Google has cached this page at all:
- http://webcache.googleusercontent.com/search?q=cache:https%3A%2F%2Fwww.nubalustrades.co.uk%2F (live)
- https://d.pr/i/DhmPEr.png (screenshot)
As you know, I can't fetch someone else's web page as Google, but I do know Screaming Frog pretty well, so let's give that a blast.
First, let's try a quick crawl with no client-side rendering enabled and see what that comes back with:
- https://d.pr/f/u3bifA.seospider (SF crawl file)
- https://d.pr/f/9TfNR5.xlsx (Excel spreadsheet output)
It seems as if, even without rendered crawling, the words are being picked up:
Only the rows highlighted in green (the 'core' site URLs) should have a word count anyway. The other URLs are fragments and resources. They're scripts, stylesheets, images etc (none of which need copy).
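If you want to replicate that page-vs-resource split outside of Screaming Frog, here's a rough sketch. The extension list and the crawl URLs are just illustrative (SF uses its own content-type detection, not filename extensions):

```python
from urllib.parse import urlparse

# Extensions that mark a URL as a resource (script, stylesheet, image, font),
# i.e. something we would never expect to carry copy. Illustrative list only.
RESOURCE_EXTENSIONS = {
    ".js", ".css", ".png", ".jpg", ".jpeg", ".gif",
    ".svg", ".woff", ".woff2", ".ico",
}

def is_resource_url(url: str) -> bool:
    """Return True if the URL looks like a script/style/image rather than a page."""
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in RESOURCE_EXTENSIONS)

def pages_only(urls):
    """Keep only the 'core' site URLs, the ones that should have a word count."""
    return [u for u in urls if not is_resource_url(u)]

crawl = [
    "https://www.nubalustrades.co.uk/",
    "https://www.nubalustrades.co.uk/contact",
    "https://www.nubalustrades.co.uk/static/app.js",
    "https://www.nubalustrades.co.uk/static/site.css",
    "https://www.nubalustrades.co.uk/img/logo.png",
]
print(pages_only(crawl))
# -> ['https://www.nubalustrades.co.uk/', 'https://www.nubalustrades.co.uk/contact']
```

Only the first two URLs survive the filter, which is exactly the "rows highlighted in green" idea: everything else is a fragment or resource that doesn't need copy.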
Let's try a rendered crawl, see what we get:
- https://d.pr/f/ijprbx.seospider (SF crawl file)
- https://d.pr/f/c8ljoF.xlsx (Excel spreadsheet output)
Again, it seems as if the words are picked up, though oddly fewer are picked up with rendered crawling than with a simple AJAX source scrape:
That could easily be something to do with my time-out or render-wait settings, though (that being said, I did give a pretty generous 23 seconds, so...)
In any case, it seems to me that the content is search readable in either event.
Let's look at the homepage specifically in more detail. Basically, if content appears in "inspect element" but not in "view source", **that's** when you know you have a real problem
- view-source:https://www.nubalustrades.co.uk/ - (open this link in a desktop browser such as Chrome; mobile browsers generally don't support view-source:)
As you can see, lots of the content does indeed appear in the 'base' source code:
That's a good thing.
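If you want to script that check rather than eyeball view-source, here's a rough stdlib-Python sketch of what a non-rendering crawler "sees": strip the tags, skip scripts and styles, count the words. The sample HTML at the bottom is made up for illustration; in practice you'd feed it the raw response body of the page:

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collect text from base HTML, skipping <script>/<style> blocks --
    a crude approximation of what a non-rendering crawler picks up."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a script/style element

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

def base_word_count(html: str) -> int:
    """Word count of the visible copy in the *base* (non-rendered) source."""
    parser = VisibleTextExtractor()
    parser.feed(html)
    return len(" ".join(parser.parts).split())

sample = (
    "<html><head><script>var x = 'noise';</script></head>"
    "<body><p>issued by the British Standards Institution</p></body></html>"
)
print(base_word_count(sample))  # -> 6
```

A zero here, on a page that visibly has copy in the browser, is the "content only exists after JavaScript runs" red flag.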
That being said, each piece of content seems to appear twice in the source code, which is really weird and may be creating content duplication issues if Google's simpler crawl bots aren't taking the time to analyse the source code correctly.
Go back here:
- view-source:https://www.nubalustrades.co.uk/ - (open this link in a desktop browser such as Chrome)
Ctrl+F to find the string of text: "issued by the British Standards Institution". Hit enter a few times. You'll see the page jump about.
On the one hand you have this, further up the page which looks alright:
On the other hand you have this further down which looks like a complete mess, embedded within some kind of script or something?
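You can roughly quantify that duplication by counting where a phrase turns up: in the ordinary markup versus inside inlined `<script>` blocks. A quick-and-dirty sketch (the regex assumes reasonably well-formed script tags, and the sample page below is invented):

```python
import re

def phrase_locations(html: str, phrase: str) -> dict:
    """Count occurrences of a phrase inside inline <script> blocks vs in the
    ordinary markup. Rough diagnostic only -- regex parsing of HTML is crude."""
    scripts = re.findall(
        r"<script\b[^>]*>(.*?)</script>", html, re.DOTALL | re.IGNORECASE
    )
    in_scripts = sum(s.count(phrase) for s in scripts)
    total = html.count(phrase)
    return {"markup": total - in_scripts, "scripts": in_scripts}

page = (
    "<p>issued by the British Standards Institution</p>"
    "<script>var copy = \"issued by the British Standards Institution\";</script>"
)
print(phrase_locations(page, "issued by the British Standards Institution"))
# -> {'markup': 1, 'scripts': 1}
```

A result like `{'markup': 1, 'scripts': 1}` is the pattern we're seeing here: the copy once in clean HTML, and once buried in a JavaScript blob.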
Line 6,212 of the source code is some gigantic JavaScript thing which has been in-lined (and don't get me started on how this site is over-using inline code in general, for CSS, JS - everything). No idea what it's for or does, might be deferred stuff to boost page speed without breaking the visuals or whatever (there are many clever tricks like that, but they make the source code a virtually unreadable mess for a human - let alone a programmed bot!)
What really concerns me is why such a simple page needs to have 6,250 lines of source code. That's mental!
What we all forget is that, whilst the crawl and fetch bots pull information quickly - Google's algorithms have to be run over the top of that source code and data (which is a much more complex affair)
Usually people think that normalizing the code-to-text ratio is a pointless SEO maneuver, and in most cases, yes, the return is vastly outweighed by the time taken to do it. But in your case it's actually very extreme:
Put your URL in and you'll get this:
I tried like 5-8 different tools and this was the most favorable result :')
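If you'd rather not trust a random online tool, a ballpark text-to-code ratio is easy to approximate yourself. This sketch strips scripts, styles and tags with regexes, which is crude but fine for a rough figure (the sample "page" is fabricated to mimic the problem: a huge inline script wrapped around ten characters of copy):

```python
import re

def text_to_code_ratio(html: str) -> float:
    """Very rough text-to-code ratio: visible characters divided by total
    page characters. Regex-based stripping -- a ballpark figure, not a parser."""
    # Drop script/style blocks entirely, then strip remaining tags.
    stripped = re.sub(
        r"<(script|style)\b[^>]*>.*?</\1>", " ", html,
        flags=re.DOTALL | re.IGNORECASE,
    )
    text = re.sub(r"<[^>]+>", " ", stripped)
    text = re.sub(r"\s+", " ", text).strip()
    return len(text) / len(html) if html else 0.0

# A page dominated by one giant inline script, with ten characters of copy.
bloated = (
    "<html><script>" + "x" * 980 + "</script>"
    "<body><p>ten chars!</p></body></html>"
)
print(round(text_to_code_ratio(bloated), 3))
```

Anything down around 1% like this fabricated example is the sort of "extreme" case where the ratio is actually worth worrying about.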
It is clear that, even if the page were successfully downloaded by Google, their algorithms may have trouble hunting out the nuggets of content within the vast, sprawling and unnecessary coding structure. My older colleagues had always warned me away from Wix... now I can see why, with my own two eyes.
Ok. So we know that Google isn't bothering to cache the page, and that, despite the fact your content can 'technically' be crawled, it may be a marathon to dig it out (especially for non-intelligent robots).
But is the content being indexed? Let's check:
- https://www.google.co.uk/search?q=site%3Anubalustrades.co.uk+%22issued+by+the+British+Standards+Institution%22
- https://www.google.co.uk/search?num=100&ei=q_MYXMj3EM_srgSNh6LYCQ&q=site%3Anubalustrades.co.uk+%22product+and+your+happy+with%22
- https://www.google.co.uk/search?num=100&ei=6vMYXPuLC4yYsAXAoKfAAg&q=site%3Anubalustrades.co.uk+%22Some+customers+like+to+have+more+than+one+balustrade%22
- https://www.google.co.uk/search?num=100&ei=CPQYXOmJFYu6tQXi8arwBA&q=site%3Anubalustrades.co.uk+%22installations+which+will+help+you+visualise+your+future+project%22
- https://www.google.co.uk/search?num=100&ei=KvQYXMyhC4LStAWopbqACg&q=site%3Anubalustrades.co.uk+%22Cleanly-designed%2C+high-quality+handrail+systems+combined+with+attention%22
Those are all special Google search queries, designed to specifically search for strings of content on your website from all the different, primary content boxes
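For what it's worth, those query URLs are easy to generate in bulk if you want to spot-check more content boxes yourself. A small sketch using only the Python standard library (URL-encoding handles the quotes and colons for you):

```python
from urllib.parse import urlencode

def site_phrase_query(domain: str, phrase: str) -> str:
    """Build a Google query URL that searches one site for an exact phrase,
    i.e. site:example.com "some exact copy"."""
    q = f'site:{domain} "{phrase}"'
    return "https://www.google.co.uk/search?" + urlencode({"q": q})

url = site_phrase_query(
    "nubalustrades.co.uk",
    "issued by the British Standards Institution",
)
print(url)
# -> https://www.google.co.uk/search?q=site%3Anubalustrades.co.uk+%22issued+by+the+British+Standards+Institution%22
```

If a distinctive phrase from a content box returns a result through one of these, that box has made it into the index.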
Good news fella, it's all being found:
Let's make up an invalid text string and see what Google returns when text can't be found, to validate our findings thus far:
If nothing is found you get this:
So I guess Google can find your content and is indexing it
Phew, crisis over! Onto the next one...
-
Hi There,
This is the URL:-
https://www.nubalustrades.co.uk/
Be great if you could give me your opinion. I am thinking that this content isn't being indexed.
Regards
Neil
-
If you can share a link to the site I can probably diagnose it. It's probably that the content is within the modified (client-side rendered) source code, rather than the 'base' (non-modified) source code. Google fetches pages in multiple different ways, so using fetch as Google artificially makes it seem as if they always use exactly the same crawling technology. They don't.
Google 'can' crawl modified content. But they don't always do it, and they don't do it for everyone. Rendered crawling takes like... 10x longer than basic source scraping. Their mission is to index the web!
The fetch tool shows you their best-case-scenario crawling methodology. Don't assume their indexation bots, which have a mountain to climb, will always be so favourable
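One cheap way to act on that: grab the base (non-rendered) HTML with a plain HTTP client and check which of your key copy snippets are missing from it, because anything missing only exists after client-side rendering and therefore depends on Google choosing to do a rendered crawl. A minimal sketch (the sample base HTML and phrases are hypothetical, standing in for a fetched response body):

```python
def phrases_missing_from_base(base_html: str, phrases) -> list:
    """Return the copy snippets that do NOT appear in the base (non-rendered)
    source. Anything listed here lives only in the client-side rendered DOM."""
    return [p for p in phrases if p not in base_html]

# Hypothetical Wix-style shell: an empty app container plus a JS bundle.
base = (
    "<html><body><div id='app'></div>"
    "<script src='/bundle.js'></script></body></html>"
)
checks = ["issued by the British Standards Institution", "app"]
print(phrases_missing_from_base(base, checks))
# -> ['issued by the British Standards Institution']
```

An empty list means all your copy is in the base source and you're in the safe zone; a long list means you're betting on rendered crawling.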
-
Just an update on this one
Looks like it may be a problem with Wix
https://moz.com/community/q/wix-problem-with-on-page-optimization-picking-up-seo
I have another client who also uses Wix, and they also show no content in Screaming Frog, but worryingly their pages do show content in a cached version of the site. I know the "cache" isn't the best way to see what content is indexed, and fetch as Google is fine.
I just get the feeling something isn't right.