What's going on with Google's index - JavaScript and Googlebot
-
Hi all,
Weird issue with one of my websites.
The website URL: http://www.athletictrainers.myindustrytracker.com/
Let's take two different article pages from this website:
1st: http://www.athletictrainers.myindustrytracker.com/en/article/71232/
As you can see the page is indexed correctly on google:
http://webcache.googleusercontent.com/search?q=cache:dfbzhHkl5K4J:www.athletictrainers.myindustrytracker.com/en/article/71232/10-minute-core-and-cardio&hl=en&strip=1 (that's the "text only" version, indexed on May 19th)
2nd: http://www.athletictrainers.myindustrytracker.com/en/article/69811
As you can see the page isn't indexed correctly on google:
http://webcache.googleusercontent.com/search?q=cache:KeU6-oViFkgJ:www.athletictrainers.myindustrytracker.com/en/article/69811&hl=en&strip=1 (that's the "text only" version, indexed on May 21st)
They both have the same code. As for the dates, there are pages that were indexed before the 19th that are also problematic. Google can't read the content consistently; it reads it only when it wants to.
Can you think of what the problem might be? I know that Google can read JS and crawl our pages correctly, but this happens only with a few pages and not all of them (as you can see above).
-
Hello Or,
I just checked the most recent cache and it looks like Google does NOT see the content on the first URL (ending in /71232/) but does see it on the second one (ending in 69811).
This is the opposite of the situation you described above.
Yes, Google "can" execute JavaScript, but just because they can doesn't mean they will every time. Also, perhaps not all of their bots can or do execute JavaScript every time. For instance, the bot they use for pure discovery may not, while the one they use to render previews may.
Or they may give the JavaScript only a limited amount of time to execute before taking a snapshot.
I also notice the page that is currently not fully indexed has an embedded YouTube video. While this wouldn't typically cause any problems with getting other content indexed, in your case it may be worth looking into. For example, it could contribute to the load-time issue mentioned above.
When it comes to executing scripts, submitting forms, etc., Google is very much at the stage of just randomly "trying stuff out" to "see what happens". It's like a hyperactive baby in a spaceship pushing buttons like crazy, which is why we run into issues with "spider traps" and with unintentionally getting dynamic pages indexed from form submissions, internal searches, and other oddities in site architecture. It is also one of the reasons why markup like Schema.org and JSON-LD is important: it allows us to label the buttons so the bot "understands" what it is pressing (or not).
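For illustration, a minimal JSON-LD block for an article page might look like the following. This is only a sketch: the headline is taken from the article slug in the cache URL above, while the date and publisher name are placeholders.

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "10-Minute Core and Cardio",
  "datePublished": "2015-05-19",
  "publisher": {
    "@type": "Organization",
    "name": "My Industry Tracker"
  }
}
</script>
```

Because this markup lives in the static HTML rather than being produced by a script, the bot can read it even when it chooses not to execute any JavaScript.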
I apologize that there is no definitive answer for your problem at the moment, but given that the behavior has switched completely, I'm not sure how to go about investigating. This is why it is still very much a best practice to ensure all of your content is indexable by not rendering it with JavaScript. If you can't see the textual content in the source code (as is the case here), then you are at risk of it not being seen by Google.
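You can test this yourself by fetching the raw HTML (what Googlebot sees before any JavaScript runs) and checking whether a phrase from the article appears in it. A minimal sketch in Python follows; the two sample pages are invented for illustration, and for a live page you would fetch the source with `urllib.request` first:

```python
def content_in_source(raw_html: str, phrase: str) -> bool:
    """Return True if the phrase is visible in the unrendered HTML source."""
    return phrase.lower() in raw_html.lower()

# Hypothetical examples: a server-rendered page vs. a JS-only shell.
server_rendered = "<html><body><h1>10-Minute Core and Cardio</h1></body></html>"
js_only_shell = '<html><body><div id="app"></div><script src="app.js"></script></body></html>'

print(content_in_source(server_rendered, "10-Minute Core and Cardio"))  # True
print(content_in_source(js_only_shell, "10-Minute Core and Cardio"))    # False
```

For a real page you would pass in `urllib.request.urlopen(url).read().decode()`; any text you can see in the browser but not in that raw source depends on JavaScript to render, and is the content at risk.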
-
Hi Patrick,
We already tested all the pages with the Fetch as Google tool; sorry that I didn't mention it before, but everything there is OK. I do see the "Partial" status, but the issues are with one of the social plugins and have no connection to the content.
So all the tools show that it should be OK, but Google isn't indexing the pages correctly.
I already checked:
1. Frontend code.
2. Noindex issues.
3. Canonical issues.
4. Robots.txt issues.
5. Fetch as Google issues.
I know that Google can read JS, and I don't understand why it can read only some of the pages and not all of them (there isn't any difference between them).
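For reference, the first few checks in the list above can be scripted. Here is a rough sketch using only the Python standard library; the sample page source is invented for illustration, and a real check would feed in the fetched HTML of each article:

```python
from html.parser import HTMLParser

class IndexabilityParser(HTMLParser):
    """Collect the meta robots directive and canonical link from a page."""
    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.robots = attrs.get("content", "")
        elif tag == "link" and attrs.get("rel", "").lower() == "canonical":
            self.canonical = attrs.get("href")

def check_page(raw_html: str) -> dict:
    """Report whether the page is noindexed and what its canonical URL is."""
    parser = IndexabilityParser()
    parser.feed(raw_html)
    noindex = parser.robots is not None and "noindex" in parser.robots.lower()
    return {"noindex": noindex, "canonical": parser.canonical}

# Hypothetical page source for illustration:
sample = (
    '<html><head>'
    '<meta name="robots" content="index, follow">'
    '<link rel="canonical" href="http://example.com/en/article/69811">'
    '</head><body>...</body></html>'
)
print(check_page(sample))
# {'noindex': False, 'canonical': 'http://example.com/en/article/69811'}
```

Running this over every article URL makes it easy to confirm that no stray noindex or conflicting canonical slipped into only some of the pages.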
-
Hi there
I would take a look at the Fetch as Google tool in your Search Console and see what issues arise there. I would do this for both desktop and mobile, so that you can see how these pages are being rendered by Google.
If you get a "Partial" status, Google will return the issues they have run into, and you can prioritize those issues and how you want to handle them from there.
You can read more about JavaScript and Google here as well as here.
Hope this all helps! Good luck!