[wtf] Mysterious Homepage De-Indexing
-
Our homepage, as well as several similar landing pages, has vanished from the index. Could you guys review the pages below to make sure I'm not missing something really obvious?!
URLs: http://www.grammarly.com and http://www.grammarly.com/plagiarism-checker
- It's been four days, so it's not just a temporary fluctuation
- The pages don't have a "noindex" tag on them and aren't being excluded in our robots.txt
- There's no notification about a penalty in WMT
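For reference, here's what the two exclusion signals we checked for (and did not find) would look like; the Disallow path below is just an example:

```html
<!-- 1. A "noindex" robots meta tag in the <head> would block indexing: -->
<meta name="robots" content="noindex">
<!-- 2. A robots.txt rule would block crawling, e.g.:
       User-agent: *
       Disallow: /plagiarism-checker
     Neither signal is present on the affected pages. -->
```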
Clues:
- WMT is returning an "HTTP 200 OK" for Fetch, but Fetch and Render is showing a redirect to grammarly.com/1 (an alternate version of the homepage that contains a rel=canonical back to the homepage). Could this be causing a circular redirect?
- Some pages on our domain are ranking fine, e.g. https://www.google.com/search?q=grammarly+answers
- A month ago, we redesigned the pages in question. The new versions are pretty script-heavy, as you can see.
- We don't have a sitemap set up yet.
Any ideas? Thanks in advance, friends!
-
Did this get resolved? I'm seeing your homepage indexed and ranking now.
I'm not seeing any kind of redirect to an alternate URL at this point (either as a browser or as Googlebot). If you 301'ed to an alternate URL and then rel=canonical'ed back to the source of the 301, that could definitely cause problems; it sends a pretty strong mixed signal. In that case you'd probably want to 302 or use some alternate method. Redirects for the homepage are best avoided in most cases.
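To make the mixed signal concrete, here's roughly what that pattern looks like (a sketch using the URLs from this thread; the 301 itself would be issued server-side):

```html
<!-- Googlebot requests http://www.grammarly.com/ and gets:
       301 Moved Permanently
       Location: http://www.grammarly.com/1
     ...and then the <head> of /1 points right back at the redirect source: -->
<link rel="canonical" href="http://www.grammarly.com/">
<!-- The 301 says "/1 is the permanent page"; the canonical says "/ is the
     real page". Google has to guess which of the two signals to trust. -->
```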
-
Are you sure it was missing for a time? Ultimately, I wouldn't use a third party (Google) as a tool to diagnose problems (faulty on-site code) that I already know are problems and need to be fixed. I'd fix the known issues and then go from there, or hire someone capable of fixing them.
-
Thanks, Ryan. I'll get to work on the issues you mentioned.
I do have one question for you - grammarly.com/proofreading (significantly fewer links, identical codebase) is now back in the index. If the issue were too many scripts or HTML errors, wouldn't both pages still be de-indexed?
-
Here are some issues just going down the first few lines of code...
- There's a height attribute in your tag.
- Your cookie on the home page is set to expire in the past, not the future
- Your <!DOCTYPE html> declaration conflicts with your script, among other code issues (http://stackoverflow.com/questions/21363090/doctype-html-ruins-my-script)
- Your Google Site Verification meta tag is different from the one on your other pages.
- Your link to the Optimizely CDN is incorrect: it's missing 'http:', so the browser looks for the script on your own site (see the sketch at the end of this post).
- You have many other markup issues.
And that's before getting into the hundreds of lines of code preceding the start of your page content at the <body> tag: 300 lines or so on your other indexed pages, 1,100+ on your home page. So not only are you not following the best practices outlined by Google, you have broken stuff too.
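To illustrate two of those items, here's roughly what the fixes might look like (hypothetical values; your Optimizely project ID and cookie name will differ):

```html
<!-- A scheme-less src without the leading slashes resolves as a relative
     path on your own domain; a protocol-relative URL fetches from the CDN: -->
<script src="//cdn.optimizely.com/js/XXXXXXXX.js"></script>

<script>
  // A cookie whose expiry date is in the past is deleted immediately.
  // Use a future date or max-age so it actually persists (one year here):
  document.cookie = "example_pref=1; max-age=31536000; path=/";
</script>
```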
-
The saga continues...
According to WMT, there are no issues with grammarly.com. The page is fetched and rendered correctly.
Google! Y u no index? Any ideas?
-
Like Lynn mentioned below, if redirection is taking place across several portions of the site, that could cause the spikes, and a big increase in total download time is worrying if it crosses the threshold of most people's patience and drives up your bounce rate.
Here's the Google PageSpeed Insights take on it: https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fgrammarly.com&tab=desktop. It covers both desktop and mobile.
-
Hmm, was something done to fix the Googlebot redirect issue, or did it just fix itself? Googlebot will often identify itself as Mozilla, and your Fetch and Render originally seemed to indicate that, at least some of the time, the redirected page was the one Google was getting. Technically it's a bit murky exactly what is going on there, but if Google is getting redirected some of the time then, as you said, you're in a circular situation between the redirect and the canonical, where it's difficult to predict what will happen. If that's 100% fixed now and Google sees the main page all the time, I would wait a day or two to see if the page comes back into the index (but be 100% sure that you know it's fixed!). I still think that is the most likely source of your troubles...
-
Excellent question, Lynn. Thank you for chiming in here. There's a user-agent-based JavaScript redirect that keeps Chrome visitors on grammarly.com (the Chrome browser extension) and sends other browsers to grammarly.com/1 (a web app that works in all browsers).
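Roughly, the logic is something like this (a simplified sketch, not our actual code):

```html
<script>
  // Chrome visitors stay on the extension landing page; everyone else is
  // sent to the cross-browser web app. Note that Googlebot's user agent
  // ("Mozilla/5.0 (compatible; Googlebot/2.1; ...)") contains no "Chrome",
  // so a check like this would redirect Googlebot to /1 as well.
  if (navigator.userAgent.indexOf('Chrome') === -1) {
    window.location.replace('http://www.grammarly.com/1');
  }
</script>
```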
UPDATE: According to WMT Fetch and Render, the Googlebot redirection issue has been fixed. Googlebot is no longer being redirected anywhere and is getting a 200 OK for grammarly.com.
Kelly, if that was causing the problem, how long should I hold my breath for re-indexing after re-submitting the homepage?
-
Yup, definitely. Whether you're completely removed or simply dropped doesn't matter: if you're not there anymore, for some reason Google determined you're no longer an authority for that keyword, and you need to find out why. Since you just redesigned, the best way is to backtrack: double-check all the old tags and compare them to the new site, check the text and keyword usage on the website, and look for anything that's changed that could contribute to the drop. If you don't find anything, tools like Majestic SEO are handy for checking whether your backlinks are still healthy.
-
Hi Alex, thank you for your response. The pages didn't suffer in ranking; they were completely removed from the index. Based on that, do you still think it could be a keyword issue?
-
That's actually a great point. I suppose Google could have been holding on to a pre-redesign cached version of the pages.
There has been a 50-100% increase in page download times as well as some weird 5x spikes for crawled pages. I know there could probably be a million different reasons, but do any of them stick out at you as being potential sources of the problem?
-
How does that second version of the homepage work, and how long has it been around? I get one version of the homepage in one browser and the second in another. What decides which version is served, and what kind of redirect is it? I think that is the most likely source of your troubles.
-
Yes, but the pages were indexed prior to the redesign, no? Can you look up your crawl stats in GWT to see if there's been a dramatic uptick in page download times and a downtrend in pages crawled? That will at least give you a starting point as to the differences between now and then: https://www.google.com/webmasters/tools/crawl-stats
-
The logo definitely needs to be made clickable, linking back to Home.
Did you compare the old design's text against the new design's to make sure you're still covering the same keywords? In many cases a redesign is more "streamlined", which also means less text, or a rewrite, which is going to impact the keywords your site is relevant for.
-
Thanks, Ryan. Improving our code-to-text ratio is on our roadmap, but could that really be the issue here? The pages were all fully indexed without problems for a full month after our redesign, and we haven't added any scripts. Was there an algorithm update on Monday that could explain the sudden de-indexing?
-
VERY script heavy. Google recently released updates on a lot of this (Q4 2014) here: http://googlewebmastercentral.blogspot.mx/2014/10/updating-our-technical-webmaster.html, with further guidance here: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/optimize-encoding-and-transfer. Without doing a deep dive, that's the most glaring issue and the most obvious difference between the pages that are still being indexed and those that are not.
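One of the basics from that guidance is keeping scripts from blocking rendering, along these lines (the file names are illustrative):

```html
<!-- Render-blocking: the parser stops until this downloads and executes. -->
<script src="/js/app.js"></script>

<!-- Better: defer preserves document order but runs after parsing finishes;
     async runs whenever the file arrives (fine for analytics). -->
<script src="/js/app.js" defer></script>
<script src="/js/analytics.js" async></script>
```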