What's going on with Google's index - JavaScript and Googlebot
-
Hi all,
Weird issue with one of my websites.
The website URL: http://www.athletictrainers.myindustrytracker.com/
Let's take two different article pages from this website:
1st: http://www.athletictrainers.myindustrytracker.com/en/article/71232/
As you can see the page is indexed correctly on google:
http://webcache.googleusercontent.com/search?q=cache:dfbzhHkl5K4J:www.athletictrainers.myindustrytracker.com/en/article/71232/10-minute-core-and-cardio&hl=en&strip=1 (that's the "text only" version, indexed on May 19th)
2nd: http://www.athletictrainers.myindustrytracker.com/en/article/69811
As you can see the page isn't indexed correctly on google:
http://webcache.googleusercontent.com/search?q=cache:KeU6-oViFkgJ:www.athletictrainers.myindustrytracker.com/en/article/69811&hl=en&strip=1 (that's the "text only" version, indexed on May 21st)
They both have the same code. As for the dates, there are also pages that were indexed before the 19th that are problematic. Google can't read the content consistently; it seems to read it only when it wants to.
Any idea what the problem could be? I know that Google can read JS and crawl our pages correctly, but this works for only some of the pages and not all of them (as you can see above).
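One quick way to diagnose pages like these (a rough sketch, making no assumptions about your stack) is to check whether the article text is present in the raw HTML the server sends, before any JavaScript runs. Google's "text only" cache roughly reflects that pre-JavaScript view:

```python
import re

def visible_text(html):
    # Remove script/style blocks: content injected by JavaScript at
    # runtime is not in the server-delivered markup, and inline script
    # source is not visible text.
    html = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", " ", html)
    # Strip the remaining tags and collapse whitespace.
    text = re.sub(r"(?s)<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def served_without_js(html, snippet):
    """True if `snippet` appears in the HTML before any JS executes,
    i.e. roughly what the "text only" cache view can show."""
    return snippet.lower() in visible_text(html).lower()
```

If the article body fails this check, the page depends entirely on Googlebot choosing to render your JavaScript on that crawl, which does not happen reliably.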
-
Hello Or,
I just checked the most recent cache and it looks like Google does NOT see the content on the first URL (ending in /71232/) but does see it on the second one (ending in 69811).
This is the opposite of the situation you described above.
Yes, Google "can" execute JavaScript, but just because they can doesn't mean they will every time. Also, perhaps not all of their bots can or do execute JavaScript every time. For instance, the bot they use for pure discovery may not, while the one they use to render previews may.
Or they may give the JavaScript only a limited time to execute.
I also notice the page that is currently not indexed fully has an embedded YouTube video. While this wouldn't typically cause any problems with getting other content indexed, in your case it may be worth looking into. For example, it could contribute to the load-time issue mentioned above.
When it comes to executing scripts, submitting forms, etc., Google is very much at the stage of just randomly "trying stuff out" to "see what happens". It's like a hyperactive baby in a spaceship just pushing buttons like crazy, which is why we run into issues with "spider traps" and with unintentionally getting dynamic pages indexed from form submissions, internal searches and other oddities in site architecture. It is also one of the reasons why markup like Schema.org and JSON-LD is important: it allows us to label the buttons so the bot "understands" what it is pressing (or not).
I apologize that there is no definitive answer for your problem at the moment, but given the behavior has switched completely, I'm not sure how to go about investigating. This is why it is still very much a best practice to ensure all of your content is indexable by not rendering it with JavaScript. If you can't see the textual content in the source code (as is the case here), then you are at risk of it not being seen by Google.
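As a sketch of that best practice (hypothetical template and names, not the site's actual stack): render the article text into the HTML on the server, and keep JavaScript for enhancement only, so the content survives even on crawls where the bot never executes a script:

```python
# Minimal server-side rendering sketch: the article text is placed in
# the HTML the server sends, so crawlers see it without running JS.
ARTICLE_TEMPLATE = """<!doctype html>
<html>
  <body>
    <article id="article-body">
      <h1>{title}</h1>
      <p>{body}</p>
    </article>
    <!-- JS can still enhance the page (social widgets, comments, ...)
         but the indexable content is already present above. -->
    <script src="/static/enhance.js" async></script>
  </body>
</html>"""

def render_article(title, body):
    """Return a full HTML page with the article text baked in."""
    return ARTICLE_TEMPLATE.format(title=title, body=body)
```

The key property is that a plain `curl` of the URL (or the "text only" cache view) shows the article body, regardless of what Googlebot does with the script tag.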
-
Hi Patrick,
We already tested all the pages with the Fetch as Google tool; sorry that I didn't mention it before, but everything over there is OK. I see the "Partial" status, but the issues it reports involve one of the social plugins and have no connection to the content.
So, all the tools show that it should be OK, but Google isn't indexing the pages correctly.
I already checked:
1. Frontend code.
2. No-index issues.
3. Canonical issues.
4. Robots.txt issues.
5. Fetch as Google issues.
I know that Google can read JS, and I don't understand why it can read only some of the pages and not all of them (there isn't any difference between them).
-
Hi there
I would take a look at the Fetch as Google tool in your Search Console and see what issues arise there - I would do this for both desktop and mobile, so that you can see how these pages are being rendered by Google.
If you get a "Partial" status, Google will list the issues it ran into, and you can prioritize them and decide how to handle them from there.
You can read more about Javascript and Google here as well as here.
Hope this all helps! Good luck!