Can JavaScript affect Google's index/ranking?
-
We changed our website template about a month ago and since then we have experienced a huge drop in rankings, especially for our home page. We kept the same URL structure across the entire website, pretty much the same content, and the same on-page SEO. We expected some rank drop, but not one this huge: the homepage used to rank at the top of the second page, and we have now lost about 20-25 positions. What we changed is the homepage structure: it is more user-friendly, the information is much better organized, and there is a slider presenting our main services. About 80% of the homepage content sits inside the slideshow and 3 tabs, but all of these elements are JavaScript. The content is unique and SEO-optimized, but when I disable JavaScript it becomes completely unavailable. Could this be the reason for the huge rank drop?
I used Webmaster Tools' Fetch as Googlebot, and it looks like Google reads everything inside the JavaScript slideshow perfectly, so I wasn't worried until now, when I found this on SEOMoz: "Try to avoid ... using javascript ... since the search engines will ... not indexed them ... "
One more weird thing: although we have no duplicate content and the entire website has been cached, for a few pages (including the homepage) the picture snippet is still from the old website.
All main URLs are the same; we removed some old ones that we don't need anymore, so we kept all the inbound links. The 301 redirects are properly set. Still, we have a huge rank drop. Also (not sure if this is important or not), the robots.txt file disallows some folders like images, modules, templates... (Joomla components). We still have some HTML errors and warnings, but far fewer than we had with the old website.
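For reference, a quick way to double-check what those disallow rules actually block is Python's standard-library `robotparser`. The rules below mirror the folders mentioned above; the example file paths are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt mirroring the Joomla folders described above.
ROBOTS_TXT = """\
User-agent: *
Disallow: /images/
Disallow: /modules/
Disallow: /templates/
"""

def is_blocked(path, robots_txt=ROBOTS_TXT, agent="Googlebot"):
    """Return True if the crawler is disallowed from fetching this path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch(agent, path)

print(is_blocked("/templates/site/style.css"))  # True - the stylesheet is hidden from crawlers
print(is_blocked("/index.php"))                 # False
```

Note that blocking the templates folder also hides any CSS that lives there, which can change how a crawler sees the page.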
Any advice would be much appreciated, thank you!
-
Thanks for looking into this. The new site went live almost a month ago. Since then, Google has crawled our website at least 4 times; nevertheless, the image snippet is still from the old template. Although, as you pointed out, there are some minor issues, they are certainly not the reason we dropped so badly.
As Rand pointed out to me about 2 weeks ago, sometimes Google works in mysterious ways that even the best in SEO cannot understand.
I'll wait for Google to show the snippet image of my homepage, just to make sure it has been completely crawled. Thanks again for your help!
-
Thank you for sharing. My primary concern is that Google's cache of your home page is from Dec 3rd and looks different from the current site: "This is Google's cache of http://www.echolimousine.com/. It is a snapshot of the page as it appeared on Dec 3, 2011 10:23:14 GMT."
The next step is to wait for Google to recrawl your home page. Your home page has a PA of 47, so it should be updated very soon; normally the home page is the most crawled page of a site. Your other changed pages need to be recrawled as well before the changes are fully captured.
A few other minor items I noticed while looking at the page:
-
As you shared, the page does not validate. I would recommend correcting the errors.
-
You are presently using tables for layout in your HTML. That type of coding fell out of use years ago; I would recommend moving to table-less (CSS-based) layout.
-
Your meta keywords tag is very long and offers no value to search engines. Unless you use it for another purpose, you should remove it.
-
There is no value in adding an "index, follow" meta tag, since that is the default behavior; it just adds unnecessary code to the page. I recommend removing it.
-
-
(echo limousine dot com)
keywords: Chicago limo service, Chicago limousine service, Chicago limo | Chicago car service
As we redesigned the website, we decided to focus on the first three keywords to better achieve the positions we wanted. We removed "car service" from the title to better target the other keywords; I wasn't sure whether keeping it there would dilute the first part, so I took it out. We ranked on the first page for that keyword, and at the very top of page two for the first two; the third one is very competitive, and a couple of very old websites on the first page have partial-match domains, which still seems to work in their favor, so it will take longer to beat them.
-
All of your responses are fine except one:
“If you remove a group of pages then the PR flow of your site is greatly disrupted” – No important pages have been removed, just some low end pages that did not gain any external links and also did not play any role in the internal linking process.
I disagree with your assessment that the missing pages are unimportant. If pages are missing from your site, PR still flows to them via your internal links, but zero PR flows back to your site.
Based on your responses, you have taken the right actions and investigated the right areas. In order to offer further assistance I would need the URL involved along with a couple examples of keywords which you used to rank for and have dropped. I would also need to know the URL of the pages which you expect to rank for the given keyword.
-
Thanks for your quick reply, Ryan.
“If you remove a group of pages then the PR flow of your site is greatly disrupted” – No important pages have been removed, just some low end pages that did not gain any external links and also did not play any role in the internal linking process.
"Before you can begin to troubleshoot any ranking issues your new site should be 100% indexed" – it is 100% indexed
"I know you shared the 301 redirects are properly set, but you may wish to check them again" – all OK
"robots.txt is blocking the page" – all OK
"the page is tagged with noindex" – all OK
"the page is not properly linked to internally or externally" – internal links are all in place, as are the external ones
"the page does not have unique content" – the homepage content was written about a week after the website went live and it is 100% unique
"Check your new site's log for 404 errors where links are broken" – there are 20 broken links (according to Xenu) coming from the template's CSS, but those relate to images that were never published (some of those images were part of old presets from the original template). Users cannot reach those broken links from inside our website, and they have not been indexed anyway.
"Crawl the new site checking for issues" – in our SEOmoz campaign there are only 2 warnings, both related to the title element being too short on the Contact and Sitemap pages.
"Perform a site: search for content that is in JS to see if it is indexed" – we did; every single page has been indexed, and that shows in GWT as well.
Other issues:
- Homepage content is 90% new; all keywords have been kept in place and optimized.
- The title has been changed once.
- All other pages have kept their metrics.
- We have dropped for every single keyword we were targeting on our homepage (echo limousine dot com), but all metrics are the same (PA, DA, mR, mT...).
In my opinion, it is an issue with our homepage, and I'm not sure what to check anymore. The JavaScript slideshow on the homepage shows a lot of code. We used a third-party company to upload the new template, and I am not sure if there is an issue with that: CSS validation returned 3 errors and 354 warnings, and the W3C validator returned 31 errors and 2 warnings. They told us these errors are somewhat normal because the pages are not pure HTML and will never validate. They might be right, but at this point I am taking everything into account. I have verified everything I know and cannot find an answer.
-
One more weird thing is that although we have no duplicate content and the entire website has been cached, for a few pages (including the homepage) the picture snippet is from the old website.
When you migrate from an old site to a new site, your rankings won't be stable until the entire new site is crawled. Most sites use the same navigation and footer across the entire site, and the links on each and every page are important because they control the flow of a large amount of PageRank. If you remove a group of pages, the PR flow of your site is greatly disrupted.
Before you can begin to troubleshoot any ranking issues your new site should be 100% indexed. If 30 days have passed and that has not happened, there is a problem. I know you shared the 301 redirects are properly set, but you may wish to check them again. 30 days after a site migration you should search Google's index for every page; any pages not indexed should be listed, and that list reviewed to determine the root cause of each gap. Some possibilities: the old page is not properly 301'd, robots.txt is blocking the page, the page is tagged with noindex, the page is not properly linked to internally or externally, the page does not have unique content, etc.
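One way to re-verify the 301s is to request each old URL without following redirects and confirm you get a single 301 pointing at the exact new URL. A rough Python sketch; the URL mapping below is a placeholder, not the real site's:

```python
import http.client
from urllib.parse import urlsplit

# Hypothetical mapping of removed old URLs to their new homes.
REDIRECTS = {
    "http://www.example.com/old-page.html": "http://www.example.com/new-page/",
}

def classify(status, location, expected):
    """Pure check of one response against the expected 301 target."""
    if status == 301 and location == expected:
        return "ok"
    if status == 301:
        return "301 to wrong target: %s" % location
    if status in (302, 303, 307, 308):
        return "temporary redirect (HTTP %d), should be 301" % status
    return "no redirect (HTTP %d)" % status

def check(old_url, expected):
    """Issue one request without following redirects and classify it."""
    parts = urlsplit(old_url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    verdict = classify(resp.status, resp.getheader("Location"), expected)
    conn.close()
    return verdict

# Usage (requires network):
# for old, new in REDIRECTS.items():
#     print(old, "->", check(old, new))
```

Anything that answers "temporary redirect" or "no redirect" needs its server rule fixed before you can rule redirects out.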
Once your site is fully indexed, the next step is to work on improving rankings. Adjust any backlinks you control so they link directly to the new site without needing a 301 redirect. Check your new site's log for 404 errors where links are broken. Crawl the new site checking for issues. Continue the normal process of earning links.
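For the 404 check, something along these lines will pull the broken paths out of a common-log-format access log; the sample lines below are invented for illustration:

```python
import re
from collections import Counter

# Matches the request path and status code in a common-log-format line.
LOG_RE = re.compile(r'"(?:GET|HEAD|POST) (\S+) [^"]*" (\d{3})')

def broken_links(log_lines):
    """Count 404 responses per requested path."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and m.group(2) == "404":
            hits[m.group(1)] += 1
    return hits

sample = [
    '1.2.3.4 - - [05/Dec/2011:10:00:00 +0000] "GET /old-page.html HTTP/1.1" 404 512',
    '1.2.3.4 - - [05/Dec/2011:10:00:01 +0000] "GET /index.php HTTP/1.1" 200 8192',
    '5.6.7.8 - - [05/Dec/2011:10:00:02 +0000] "GET /old-page.html HTTP/1.1" 404 512',
]
print(broken_links(sample))  # Counter({'/old-page.html': 2})
```

The paths with the highest counts are the broken links most worth fixing or redirecting first.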
With respect to JavaScript, Google has demonstrated the ability to read some JS. It is safest to present all of your content in HTML and use JS only for functions such as controlling which content is displayed and when. If you decide to use JS to display content, I would always recommend checking Google for that content: perform a site: search for content that is in JS to see if it is indexed.
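One way to approximate that check offline is to strip `<script>` blocks from the raw HTML and search for the phrase: text that survives is in the markup itself, while text only written in by JS is not. A hedged Python sketch (the sample snippets and phrase are made up):

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect text that is in the HTML itself, skipping <script> bodies."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if not self.in_script:
            self.chunks.append(data)

def content_in_html(html, phrase):
    """True if phrase appears in the markup outside of script blocks."""
    p = VisibleText()
    p.feed(html)
    return phrase in " ".join(p.chunks)

# Content written only by JS is invisible to this check:
js_only = '<div id="slide"></div><script>slide.innerHTML = "Chicago limo service";</script>'
in_markup = '<div id="slide">Chicago limo service</div><script>initSlider();</script>'
print(content_in_html(js_only, "Chicago limo service"))   # False
print(content_in_html(in_markup, "Chicago limo service")) # True
```

The second pattern, content in the HTML with JS only toggling its visibility, is the safe way to build sliders and tabs.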