Website dropped out from Google index
-
Howdy, fellow mozzers.
I was approached by a friend - their website is https://www.hauteheadquarters.com
She says the site dropped out of the Google index overnight - and, as you can see if you google their name, the website URL, or even run a site: search, most of the pages are not indexed. The home page is nowhere to be found - that's for sure.
I know the site was indexed before. Google Webmaster Tools doesn't show any manual actions (at least not yet), and there have been no sudden changes in content or in the backlink profile. robots.txt has one weird rule - disallow everything for EtaoSpider (reproduced below). I don't know whether Google would pay attention to that - the robots checker in GWT says it's all good.
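For reference, the rule looks something like this (my paraphrase, not an exact copy of the file):

```
# robots.txt - the odd rule: block everything, but only for EtaoSpider
User-agent: EtaoSpider
Disallow: /
```

As far as I understand, a rule scoped to another crawler's user-agent shouldn't apply to Googlebot, which would match what the GWT checker says.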
Any ideas why that happened? Any ideas what I should check?
P.S. Just noticed in GWT that there was a huge drop in indexed pages within the first week of August. Still no idea why, though.
P.P.S. Just noticed that there is a noindex X-Robots-Tag in the HTTP headers... Does anyone know where this can be set?
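For anyone who wants to reproduce the check, the response headers can be inspected with curl; the output looks along these lines (illustrative, not an exact capture):

```
$ curl -I https://www.hauteheadquarters.com
HTTP/1.1 200 OK
...
X-Robots-Tag: noindex
```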
-
"P.P.S. Just noticed that there is noindex x-robots-tag in headers"
That will do it. You are telling Google to take all of your pages out of its index. That header is set at the web server level, so you will need to get into your Apache or Nginx configuration - see the sketch below the link.
https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag
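As a sketch of where to look - the exact file depends on the server setup, and the header value on your site may differ slightly:

```
# Apache (requires mod_headers): a directive like this may live in
# httpd.conf, a vhost config, or an .htaccess file
Header set X-Robots-Tag "noindex"
```

```
# Nginx: the equivalent sits in a server or location block of nginx.conf
add_header X-Robots-Tag "noindex";
```

Remove or comment out the offending line, reload the server, and confirm the header is gone before asking Google to recrawl.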
Get on this ASAP!
-
Hi Dmitri,
I also see the homepage in Google, but very few pages indexed beyond that, so there does appear to be a serious problem. I don't see anything immediately wrong with the robots.txt or noindex tags, and Screaming Frog was able to crawl this site without any problems.
One thing I did see among the few pages that are indexed is a lot of internal search result pages.
For example:
https://www.hauteheadquarters.com/shop/rings/2?sort_price=asc
and
https://www.hauteheadquarters.com/shop/rings/2?sort_price=desc
These two pages contain exactly the same products, just in a different order. This page also exists: https://www.hauteheadquarters.com/shop/rings/2 - the same products again. For all practical purposes, all three of these pages are exactly the same content. Unfortunately, the price-sort pages are not blocked from being crawled and indexed AND they are using self-referencing canonical tags.
Based on pages like these and other duplicate/thin content issues across the site, I wouldn't rule out a Panda penalty. It is quite possible that this site has been penalized. Just because there is no manual action doesn't mean a penalty isn't in play.
Recommendations:
1. Audit sitewide content and determine which pages should be in Google.
2. Implement directives in the robots.txt file to prevent URLs containing query parameters that don't provide unique content from being crawled (sketched in the P.S. below).
3. Implement canonical tags referencing the original URL without query parameters. For example, https://www.hauteheadquarters.com/shop/rings/2?sort_price=asc and https://www.hauteheadquarters.com/shop/rings/2?sort_price=desc should both be canonicalized to https://www.hauteheadquarters.com/shop/rings/2.
4. Rebuild the XML sitemap and include only the important URLs.
5. Resubmit the XML sitemap in GSC.
Wait anywhere from a couple of days to a couple of weeks after resubmitting the sitemap, then evaluate whether this has remedied the problem.
Don't file a reconsideration request. That won't do any good, because if a penalty is in play it was applied by the algorithm and not manually.
Hope that helps a little, and good luck!
Sincerely,
Dana
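P.S. To make recommendations 2 and 3 concrete, here is a minimal sketch, assuming sort_price is the only offending parameter (verify that before deploying):

```
# robots.txt - keep the sorted variants from being crawled
# (Googlebot supports the * wildcard in Disallow rules)
User-agent: *
Disallow: /*?sort_price=
```

```
<!-- on both sort variants, replace the self-referencing canonical
     with the parameter-free URL -->
<link rel="canonical" href="https://www.hauteheadquarters.com/shop/rings/2" />
```

One caveat: once a URL is disallowed in robots.txt, Google can no longer crawl it to read its canonical tag, so it is usually worth letting the corrected canonicals be recrawled before adding the Disallow rule.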
-
Me too!
-
Absolutely, I'm glad you got things squared away!
-
Thanks for response!
Well, basically, as I mentioned, the problem was the noindex X-Robots-Tag HTTP header. After removing it and requesting "Fetch as Google", it's all up and running now. The crawl time confirms that as well.
Thanks for giving me the idea of looking into cache times in the future, though!
-
I see the homepage in my results - https://www.google.com/#q=site%3Ahttps%3A%2F%2Fwww.hauteheadquarters.com
Homepage was also cached today: http://webcache.googleusercontent.com/search?q=cache:https://www.hauteheadquarters.com&bav=on.2,or.r_cp.&biw=1920&bih=955&dpr=1&ion=1&ech=1&psi=EYbEV6CLN8aweJDvktgD.1472497162096.3&ei=EYbEV6CLN8aweJDvktgD&emsg=NCSR&noj=1
Related Questions
-
Bad Backlinks to my website. Rankings Dropped
Hi guys, my name is Flavius and I work for a website named LaserPRocessing.ro from Romania. In April this year the site was infected with a virus and picked up around 1k bad backlinks. I found those backlinks with Moz's backlink tools and added them to a disavow file in GSC. What next steps do you recommend to tell Google to ignore those links? Cheers!
Intermediate & Advanced SEO | nojaflaviusalex
-
How do we decide which pages to index/de-index? Help for a 250k page site
At Siftery (siftery.com) we have about 250k pages, most of them reflected in our sitemap. Though after submitting the sitemap we started seeing an increase in the number of pages Google indexed, in the past few weeks progress has slowed to a crawl at about 80k pages, and in fact the count has been coming down very marginally.
Due to the nature of the site, a lot of the pages likely look very similar to search engines. We've also broken our sitemap down into an index, so we know that most of the indexation problems are coming from a particular type of page (company profiles).
Given these facts, what do you recommend we do? Should we de-index all of the pages that are not being picked up by the Google index (and are therefore likely seen as low quality)? There seems to be a school of thought that de-indexing "thin" pages improves the ranking potential of the indexed pages. We have plans for enriching and differentiating the pages that are being picked up as thin (Moz itself flags them as 'duplicate' pages even though they're not). Thanks for sharing your thoughts and experiences!
Intermediate & Advanced SEO | ggiaco-siftery
-
Removing Parameterized URLs from Google Index
We have duplicate eCommerce websites, and we are in the process of implementing cross-domain canonicals. (We can't 301 - both sites are major brands.) So far, this is working well - rankings are improving dramatically in most cases.
However, what we are seeing in some cases is that Google has indexed a parameterized page for the site being canonicaled (this is the site that is getting the canonical tag - the "from" page). When this happens, both sites are being ranked, and the parameterized page appears to be blocking the canonical. The question is, how do I remove canonicaled pages from Google's index? If Google doesn't crawl the page in question, it never sees the canonical tag, and we still have duplicate content.
Example: A. www.domain2.com/productname.cfm%3FclickSource%3DXSELL_PR is ranked at #35, and B. www.domain1.com/productname.cfm is ranked at #12. (Yes, I know that upper case is bad. We fixed that too.) Page A has the canonical tag, but page B's rank didn't improve. I know that there are no guarantees that it will improve, but I am seeing a pattern: Page A appears to be preventing Google from passing link juice via the canonical. If Google doesn't crawl Page A, it can't see the rel=canonical tag.
We likely have thousands of pages like this. Any ideas? Does it make sense to block the "clicksource" parameter in GWT? That kind of scares me.
Intermediate & Advanced SEO | AMHC
-
Remove Google penalty or make a new website - which is better?
My local website was hit by Google and I have taken all the steps to remove the penalty, but it still isn't ranking. So would it be better to make a new website with new content and start working on that?
Intermediate & Advanced SEO | Dan_Brown1
-
Google Is Indexing The Wrong Page For My Keyword
For a long time (almost 3 months) Google has been indexing the wrong page for my main keyword.
The problem is that Google indexes a different page every 4-7 days: sometimes I see the home page, sometimes a category page, and sometimes a product page. It seems Google has not yet decided which page it prefers for this keyword. These are the pages Google indexes (in most cases you can find the site on the second or third page):
Main Page: http://bit.ly/19fOqDh
Category Page: http://bit.ly/1ebpiRn
Another Category: http://bit.ly/K3MZl4
Product Page: http://bit.ly/1c73B1s
All the links pointing to the website are natural links, so in most cases the anchor we get is the website name. In addition, I have many links from bloggers who asked to review one of my products. I'm very careful about that, so I always check the blogger and their website and only allow the review if it is something good. I also never ask for a link back (most of the time I receive one without asking), and as I said, most of their links are anchored with my website name. Here are some examples of links I received from bloggers:
http://bit.ly/1hF0pQb
http://bit.ly/1a8ogT1
http://bit.ly/1bqqRr8
http://bit.ly/1c5QeC7
http://bit.ly/1gXgzXJ
Can I get a recommendation on what I should do? Should I try to change the anchors of the links? Should I stop allowing bloggers to review my products? I'd love to hear what you recommend. Thanks for the help!
Intermediate & Advanced SEO | Tiedemann_Anselm
-
How does Google index pagination variables in Ajax snapshots? We're seeing random huge variables.
We're using the Google snapshot method to index dynamic Ajax content. Some of this content is from tables using pagination. The pagination is tracked with a var in the hash, something like: #!home/?view_3_page=1
We're seeing all sorts of calls from Google now with huge numbers for these URL variables that we are not generating with our snapshots, like this: #!home/?view_3_page=10099089
These aren't trivial, since each snapshot represents a load on our server, so we'd like these vars to only represent what's returned by the snapshots. Is Google generating random numbers and going fishing for content? If so, is this something we can control or minimize?
Intermediate & Advanced SEO | sitestrux
-
How do I get rid of all the 404 errors in Google Webmaster Tools after building a new website under the same domain?
I recently launched my new website under the same domain as the old one. I did all the important 301 redirects, but it seems like every URL that was in Google's index is still there, now returning a 404 error code. How can I get rid of this problem? For example, if you google my company name, 'Romancing Diamonds', half the links under the name are 404 errors. Look at my Webmaster Tools and you'll see the same thing. Is there any way to remove all those previous URLs from Google's index and start anew? Shawn
Intermediate & Advanced SEO | Romancing
-
Need to duplicate the index for Google in a way that's correct
Usually duplicate content is straightforward to fix, but I find myself in a little predicament. I have a network of career-oriented websites in several countries. The problem is that for each country we use a "master" site that aggregates all ads, working as a portal. The smaller niched sites carry some of the same info as the "master" sites, since it is relevant for those sites. The "master" sites have naturally gained the index for the majority of these ads.
So the main issue is how to keep the ads on the master sites and still get the niched sites' content indexed in a way that doesn't break Google's guidelines. I can of course fix this in various ways, ranging from iframes (no index, though) to bullet listing and small adjustments to the headers and titles of the content on the niched sites, but it feels like I'm cheating if I go down that path.
So the question is: has someone else stumbled upon a similar problem? If so, how did you fix it?
Intermediate & Advanced SEO | Gustav-Northclick