[wtf] Mysterious Homepage De-Indexing
-
Our homepage, as well as several similar landing pages, has vanished from the index. Could you guys review the pages below to make sure I'm not missing something really obvious?!
URLs: http://www.grammarly.com http://www.grammarly.com/plagiarism-checker
- It's been four days, so it's not just a temporary fluctuation
- The pages don't have a "noindex" tag on them and aren't being excluded in our robots.txt
- There's no notification about a penalty in WMT
Clues:
- WMT is returning an "HTTP 200 OK" for Fetch, but is showing a redirect to grammarly.com/1 (an alternate version of the homepage, containing a rel=canonical back to the homepage) for Fetch + Render. Could this be causing a circular redirect?
- Some pages on our domain are ranking fine, e.g. https://www.google.com/search?q=grammarly+answers
- A month ago, we redesigned the pages in question. The new versions are pretty script-heavy, as you can see.
- We don't have a sitemap set up yet.
Any ideas? Thanks in advance, friends!
-
Did this get resolved? I'm seeing your homepage indexed and ranking now.
I'm not seeing any kind of redirect to an alternate URL at this point (either as a browser or as Googlebot). If you 301'ed to an alternate URL and then rel=canonical'ed back to the source of the 301, that could definitely cause problems; it sends a pretty strong mixed signal. In that case you'd probably want to 302 or use some alternate method. Redirects for the homepage are best avoided in most cases.
-
Are you sure it was missing for a time? Ultimately, I wouldn't use a third party (Google) as a tool to diagnose problems (faulty on-site code) that I already know are problems and need to be fixed. I'd fix the known issues and then go from there, or hire someone capable of fixing them.
-
Thanks, Ryan. I'll get to work on the issues you mentioned.
I do have one question for you - grammarly.com/proofreading (significantly fewer links, identical codebase) is now back in the index. If the issue was too many scripts or HTML errors, wouldn't both pages still be de-indexed?
-
Here are some issues just going down the first few lines of code...
- There's a height attribute in your tag.
- Your cookie on the homepage is set to expire in the past, not the future.
- Your DOCTYPE tag conflicts with your script, among other code issues (http://stackoverflow.com/questions/21363090/doctype-html-ruins-my-script).
- Your Google Site Verification meta tag is different from the one on other pages.
- Your link to the Optimizely CDN is incorrect (it's missing 'http:', so the browser looks for the script on your own site).
- You have many other markup issues.
And that's before getting into the hundreds of lines of code preceding the start of your page content... 300 lines or so on your other indexed pages, 1,100+ on your homepage. So not only are you not following best practices as outlined by Google, but you have broken stuff too.
-
The saga continues...
According to WMT, there are no issues with grammarly.com. The page is fetched and rendered correctly.
Google! Y u no index? Any ideas?
-
Like Lynn mentioned below, if redirection is taking place across several portions of the site, that could cause the spikes, and a big increase in total download time is worrying if you're crossing the threshold of most people's patience before they bounce.
Here's the Google PageSpeed Insights take on it: https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fgrammarly.com&tab=desktop. It covers both desktop and mobile.
-
Hmm, was something done to fix the Googlebot redirect issue, or did it just fix itself? It's documented that Googlebot will often identify itself as Mozilla, and your Fetch/Render originally seemed to indicate that, at least some of the time, the alternate page was what Google was getting. It's a bit murky technically what exactly is going on there, but if Google is getting redirected some of the time, then, as you said, you're in a circular situation between the redirect and the canonical, where it's difficult to predict what will happen. If that is 100% fixed now and Google sees the main page all the time, I would wait a day or two to see if the page comes back into the index (but be 100% sure that you know it's fixed!). I still think that is the most likely source of your troubles...
-
Excellent question, Lynn. Thank you for chiming in here. There's a user-agent-based JavaScript redirect that keeps Chrome visitors on grammarly.com (Chrome browser extension) and sends other browsers to grammarly.com/1 (web app that works on all browsers).
UPDATE: According to WMT Fetch + Render, the Googlebot redirection issue has been fixed. It is no longer being redirected anywhere and is returning a 200 OK for grammarly.com.
Kelly, if that was causing the problem, how long should I hold my breath for re-indexing after re-submitting the homepage?
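For context, here's a minimal sketch of what a user-agent-based redirect like this might look like (the UA checks and the /1 target are assumptions based on this thread, not Grammarly's actual code). The key point is that crawlers need an explicit exemption: Googlebot's UA string both starts with "Mozilla/5.0" and, in some variants, mentions Chrome, so a naive check can redirect the crawler too.

```javascript
// Hypothetical sketch of a UA-based homepage split.
// Returns the path the visitor should see.
function pickHomepageVariant(userAgent) {
  const ua = userAgent.toLowerCase();
  // Exempt crawlers first, before any browser sniffing.
  if (ua.includes('googlebot')) return '/';
  // Real Chrome UAs contain "chrome" without the "edg"/"opr" tokens
  // that Edge and Opera add.
  const isChrome = ua.includes('chrome') && !ua.includes('edg') && !ua.includes('opr');
  return isChrome ? '/' : '/1'; // non-Chrome browsers get the web app
}
```

Even with the exemption, a JavaScript redirect only takes effect at render time, which matches the symptom in the thread: plain Fetch returned 200 while Fetch + Render showed the redirect.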
-
Yup, definitely. Whether you're completely removed or simply dropped doesn't matter: if you're not there anymore, Google determined for some reason that you're no longer an authority for that keyword, so you need to find out why. Since you just redesigned, the best way is to backtrack: double-check all the old tags and compare them to the new site, check the text and keyword usage on the website, and look for anything that's changed that could contribute to the drop. If you don't find anything, tools like Majestic SEO are handy for checking whether your backlinks are still healthy.
-
Hi Alex, Thank you for your response. The pages didn't suffer in ranking, they were completely removed from the index. Based on that, do you still think it could be a keyword issue?
-
That's actually a great point. I suppose Google could have been holding on to a pre-redesign cached version of the pages.
There has been a 50-100% increase in page download times as well as some weird 5x spikes for crawled pages. I know there could probably be a million different reasons, but do any of them stick out at you as being potential sources of the problem?
-
How does that second version of the homepage work, and how long has it been around? I get one version of the homepage in one browser and the second in another; what decides which version is served, and what kind of redirect is it? I think that is the most likely source of your troubles.
-
Yes, but the pages were indexed prior to the redesign, no? Can you look up your crawl stats in GWT to see if there's been a dramatic uptick in page download times and a downtrend in pages crawled? That will at least give you a starting point as to the differences between now and then: https://www.google.com/webmasters/tools/crawl-stats
-
Logo definitely needs to be made clickable to Home.
Did you compare the old design's text and the new design's text to make sure you're still covering the same keywords? In many cases a redesign is more "streamlined," which also means less text, or a rewrite, which is going to impact the keywords your site is relevant for.
-
Thanks, Ryan. Improving our code-to-text ratio is on our roadmap, but could that really be the issue here? The pages were all fully indexed without problems for a full month after our redesign, and we haven't added any scripts. Was there an algorithm update on Monday that could explain the sudden de-indexing?
-
VERY script-heavy. Google recently released updates on a lot of this (Q4 2014) here: http://googlewebmastercentral.blogspot.mx/2014/10/updating-our-technical-webmaster.html, with further guidance here: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/optimize-encoding-and-transfer. Without doing a deep dive, that's the most glaring issue and the most obvious difference between the pages that are still being indexed and those that are not.