Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Home page suddenly dropped from index!!
-
A client's home page, which has always done very well, has just dropped out of Google's index overnight!
Webmaster Tools does not show any problem. The page doesn't even show up if we Google the company name. The robots.txt contains:
# Default Flywheel robots file
User-agent: *
Disallow: /calendar/action:posterboard/
Disallow: /events/action~posterboard/
The only unusual thing I'm aware of is some A/B testing of the page done with Optimizely - it redirects visitors to a test page, but it's not a 'real' redirect, in that redirect-checker tools still see the page as a 200. Also, other pages being tested this way are not having the same problem.
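As a quick sanity check that the rules quoted above don't touch the home page, a robots.txt like this can be tested with Python's standard-library robotparser (example.com is a placeholder for the client's domain):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rules quoted above (the Flywheel comment header omitted);
# example.com stands in for the client's domain.
rules = """\
User-agent: *
Disallow: /calendar/action:posterboard/
Disallow: /events/action~posterboard/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Neither Disallow rule covers the home page, so crawling it is allowed
print(parser.can_fetch("Googlebot", "https://example.com/"))                           # True
print(parser.can_fetch("Googlebot", "https://example.com/events/action~posterboard/")) # False
```

So whatever de-indexed the page, it wasn't these rules.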
Other recent activity over the last few weeks/months includes linking to the page from some of our blog posts using the page topic as anchor text.
Any thoughts would be appreciated.
Caro
-
Woot! So glad to see it wasn't a penalty!
-
Michael,
Duplicate content wasn't the issue in the end, but your response prompted me to analyse their home page text more closely and I discovered that there was room for improvement - too much of the home page content was also present on other pages of the site. Thanks for that!
-
Everyone, this has been resolved! The problem turned out to be a code error in the canonical tag for the page: there was an extra space and slash. Ironically, the canonical tag was one of the first things we looked at, yet we all overlooked that error.
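For anyone debugging the same symptom: a stray space or slash in a canonical href can be caught mechanically. A minimal sketch with Python's standard library (the markup below is a hypothetical reconstruction of the bug, not the client's actual tag):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class CanonicalFinder(HTMLParser):
    """Collects the href of every rel="canonical" link tag."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel", "").lower() == "canonical":
            self.hrefs.append(attrs.get("href", ""))

def canonical_problems(html):
    """Return a list of human-readable problems with the page's canonical tags."""
    finder = CanonicalFinder()
    finder.feed(html)
    problems = []
    for href in finder.hrefs:
        if href != href.strip():
            problems.append(f"stray whitespace in href: {href!r}")
        if "//" in urlparse(href).path:
            problems.append(f"doubled slash in href: {href!r}")
    return problems

# Hypothetical reconstruction: an extra space and an extra slash in the href
broken = '<link rel="canonical" href=" https://example.com//" />'
print(canonical_problems(broken))
```

Running a check like this over key templates would have flagged the error on day one.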
Thank you all so much for your input and assistance.
-
Thank you Michael...I'll do that.
-
I've seen a client have an internal page just suddenly be de-indexed. What appears to have happened is that Google saw it as a near duplicate of another page on their site, and dropped it from the index for that reason. Then, magically, it reappeared a week later.
You may be seeing something like this here. See what Moz Pro thinks in terms of duplicate content on your site, and if the home page gets called out along with another page.
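If you want a rough first pass before a crawler report comes back, near-duplication between two pages' text can be estimated with shingle overlap. A toy sketch (the sample texts and the 5-word shingle size are arbitrary choices for illustration, not what Moz or Google actually use):

```python
def shingles(text, k=5):
    """Set of k-word shingles (overlapping word sequences) from a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets (1.0 = identical)."""
    sa, sb = shingles(a), shingles(b)
    if not (sa | sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical page copy: the two texts share a long identical opening
home = "we sell blue widgets and red widgets shipped worldwide from our warehouse"
about = "we sell blue widgets and red widgets shipped worldwide since 1999"
print(round(jaccard(home, about), 2))  # 0.5
```

A high score between the home page and another page would be the kind of overlap that can trigger the behaviour described above.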
-
Thanks so much for that info. I had not heard of Kerboo...I'll definitely check that out right away. Your input has been extremely helpful, Kristina.
Caro
-
I would be incredibly surprised if internal links to the homepage caused the issue. Google expects you to have a bunch of internal links to the homepage.
What you're going to need to do now is do a thorough review of all of the external links pointing to your homepage. I would do this with a tool - I recommend Kerboo, although I'm sure there are others that could do the same thing. Otherwise, you can look through all of the links yourself and look for spam indications (steps outlined in this handy Moz article).
Either way, make sure that you pull your list of links from Ahrefs or Majestic. Ideally both, and merge the lists. Moz doesn't crawl nearly as many links.
Since you haven't gotten a manual penalty warning, you're going to have to take down as many of the spammy links you find as you can and disavow the others. For speed, I'd recommend that you immediately upload a list of spammy links via Google's disavow tool, then start asking for actual removals.
Keep in mind that you're probably going to disavow links that were helping rankings, so expect that your homepage won't come back ranking as well for nonbranded search terms as it used to. You'll probably want to start out uploading a very conservative set of URLs to the disavow tool, wait a couple of days to see if that fixes the problem, upload a bigger set, check, etc.
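For reference when you get to that step: the file Google's disavow tool accepts is plain text, one domain or URL per line, with # lines as comments. A sketch with placeholder domains and paths:

```text
# Spammy directory links - removal requested twice, no response
domain:spammy-directory-example.com
domain:link-farm-example.net

# Individual pages we could not get taken down
https://blog-network-example.org/widget-post-123/
https://blog-network-example.org/widget-post-456/
```

Using domain: lines catches every URL on a spammy site at once, which helps when starting with the conservative batches described above.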
Good luck!
-
No luck Kristina
I'm wondering if it's an algorithmic penalty in response to backlinks. We've never done shady linking, but over the years the site has gathered some strange links. Or is there some chance that about two dozen anchor-text links from their blog to the home page could have done it? I deleted them. But I can't request reconsideration if the penalty isn't manual.
-
Any luck so far? Usually it only takes a few hours for Google to crawl new pages after you submit them in GSC, in my experience.
-
I see no serious crawl issues. Mostly things we're already addressing, like duplicate content caused by blog tags and categories, missing meta descriptions (mostly in our knowledge base, so not an issue) and stuff like that.
When I checked the home page alone it said zero high, medium or low priority issues.
The page was only de-indexed very recently. Maybe the next crawl will catch something. Same with GSC...it looks like the last two days of info are not available yet.
I should mention the home page Optimizely test had been running for at least a week before the page got dropped (will get the actual date from the client), plus they have had a product page running a test for weeks with no problem. But I still think your suggestion to pause the test is a good one, as I don't want anything to hinder the process of fixing this.
Update: Optimizely has been paused, code removed, home page submitted in GSC.
-
Okay, I ran some tests, and can't see anything that could've gone wrong. That does make it seem like a penalty, but given that this coincided with setting up Optimizely, let's go down that path first.
While your team is taking down the test - have you checked Moz to see if its crawler sees anything that could be causing an issue? I set up my Moz crawler to look into it, but it'll take a few days.
-
Thanks Kristina,
We have not tried pausing the test, but I can request they do that. It may be a good idea to do it regardless of whether it's causing the problem or not, while we get this issue sorted out.
Fetch as Google gave this result: HTTP/1.1 200 OK - so it looks OK. I understand this also submits your page to Google as an actual indexing request?
site:https://website.com shows all our pages except the home page.
So, it looks like it's decided not to rank it for some reason.
I deleted some links from the blog to the home page - they had a keyword phrase as the anchor text. There were about 20 links that had accumulated over a few months. Not sure if that's the issue.
Still no manual penalty notice from Google.
-
Hm, I've done a lot with Optimizely in the past, and it's never caused an SEO problem, but it's completely possible something went wrong. Since that's your first inkling, have you tried pausing that test and removing the Optimizely code from the homepage? Then you can determine whether or not it's an Optimizely problem.
Another thing you can do is use the Fetch as Googlebot feature in GSC. Does GSC say it can fetch the page properly?
If it says it can, try searching for "site:www.yourcompanysite.com". This will show if Google's got your URL in its index. If nothing comes up, it's not there; if it comes up, Google's decided not to rank it for some reason.
After those steps, get back to us so we can figure out where to go from there!
Good luck,
Kristina
-
Jordan, not on the original version of the home page, but there is on the B test version.
The way I understand it, the B version is a JavaScript page that is noindexed. Their redirect system seems to leave the original page looking like there is no redirect. Are you suggesting we use a 302 instead?
-
Also, Google recommends you 302 those URLs instead of returning a 200 HTTP code. You can read more about their best practices for A/B testing.
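To illustrate that recommendation: the variant swap should surface as a server-side temporary redirect, rather than variant content served behind a 200 on the original URL. A minimal sketch of the routing decision (the paths and bucketing flag are hypothetical):

```python
def ab_test_response(path, in_variant_b):
    """Return (status, headers) for a request. Visitors bucketed into the
    test get a 302 - temporary, so the original URL stays canonical -
    instead of variant content served behind a 200."""
    if path == "/" and in_variant_b:
        return 302, {"Location": "/home-variant-b/"}
    return 200, {"Content-Type": "text/html"}

# A bucketed visitor requesting the home page is redirected temporarily
print(ab_test_response("/", in_variant_b=True))   # (302, {'Location': '/home-variant-b/'})
# Everyone else (including crawlers outside the bucket) gets the original
print(ab_test_response("/", in_variant_b=False))  # (200, {'Content-Type': 'text/html'})
```

The point of the 302 over a 301 is that it signals the move is temporary, so search engines keep the original URL indexed.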
-
Is there a meta noindex, nofollow tag implemented by chance?
Related Questions
-
Google Indexing Of Pages As HTTPS vs HTTP
We recently updated our site to be mobile optimized. As part of the update, we had also planned on adding SSL security to the site. However, we use an iframe on a lot of our site pages from a third-party vendor for real estate listings, and that iframe was not SSL friendly and the vendor does not have that solution yet. So, those iframes weren't displaying the content. As a result, we had to shift gears and go back to just being http and not the new https that we were hoping for. However, Google seems to have indexed a lot of our pages as https and gives a security error to any visitors. The new site was launched about a week ago and there was code in the .htaccess file that was pushing to www and https. I have fixed the .htaccess file to no longer have https. My question is: will Google "reindex" the site once it recognizes the new .htaccess directives in the next couple of weeks?
Intermediate & Advanced SEO | vikasnwu1
-
My blog is indexing only the archive and category pages
Hi there Moz community. I am new to the Q&A and have a question. I have a blog that has been live for months, but I cannot get the posts to rank in the SERPs. Oddly, only the categories rank. The posts are crawled, it seems, but treated as less important for a reason I don't understand. Can anyone here help with this? See here for what I mean. I have had several WP sites rank well in the SERPs, and the posts do much better than the categories or archives - super odd. Thanks to all for help!
Intermediate & Advanced SEO | walletapp0
-
I have a lot of spammy links coming to my 404 page (the URLs have been removed now). Should I redirect to home?
I have a lot of spammy links pointing at my website according to Moz. Thankfully all of them were for URLs that we've long since removed, so they're hitting my 404. Should I replace the 404 with a 301 and redirect that juice to my home page or some other page, or will that hurt my ranking?
Intermediate & Advanced SEO | jagdecat0
-
Links from non-indexed pages
Whilst looking for link opportunities, I have noticed that the website has a few profiles from suppliers or accredited organisations. However, a search form is required to access these pages and when I type cache:"webpage.com" the page is showing up as non-indexed. These are good websites, not spammy directory sites, but is it worth trying to get Google to index the pages? If so, what is the best method to use?
Intermediate & Advanced SEO | maxweb0
-
Our login pages are being indexed by Google - How do you remove them?
Each of our login pages shows up under a different subdomain of our website. Currently these are accessible by Google, which is a huge competitive advantage for our competitors looking for our client list. We've done a few things to try to rectify the problem: added noindex/noarchive to each login page; added robots.txt to all subdomains to block search engines; and gone into Webmaster Tools, added the subdomain of one of our bigger clients, and requested to remove it from Google. (That last step would be great to do for every subdomain, but we have a LOT of clients and it would require tons of backend work.) Other than the last option, is there something we can do that will remove subdomains from being viewed in search engines? We know the robots.txt rules are working since the message on search results says: "A description for this result is not available because of this site's robots.txt - learn more." But we'd like the whole link to disappear. Any suggestions?
Intermediate & Advanced SEO | desmond.liang1
-
Dynamic pages - ecommerce product pages
Hi guys, Before I dive into my question, let me give you some background... I manage an ecommerce site and we've got thousands of product pages. The pages contain dynamic blocks and the information in these blocks is fed by another system. So in a nutshell, our product team enters the data in a software and boom, the information is generated in these page blocks. But that's not all: these pages then redirect to a duplicate version with a custom URL. This is cached, and this is what the end user sees. This was done to speed up load; rather than the system generating a dynamic page on the fly, the cached page is loaded and the user sees it super fast. Another benefit happened as well: after going live with the cached pages, they started getting indexed and ranking in Google. The problem is that the redirect to the duplicate cached page isn't a permanent one; it's a meta refresh, a 302 that happens in a second. So yeah, I've got 302s kicking about. The development team can set up a 301, but then there won't be any caching; pages will just load dynamically. Google records pages that are cached, but does it cache a dynamic page though? Without a cached page, I'm wondering if I would drop in traffic. The view source might just show a list of dynamic blocks, no content! How would you tackle this? I've already set up canonical tags on the cached pages but removing cache... Thanks
Intermediate & Advanced SEO | Bio-RadAbs0
-
Best practice for removing indexed internal search pages from Google?
Hi Mozzers I know that it’s best practice to block Google from indexing internal search pages, but what’s best practice when “the damage is done”? I have a project where a substantial part of our visitors and income lands on an internal search page, because Google has indexed them (about 3 %). I would like to block Google from indexing the search pages via the meta noindex,follow tag because: Google Guidelines: “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.” http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35769 Bad user experience The search pages are (probably) stealing rankings from our real landing pages Webmaster Notification: “Googlebot found an extremely high number of URLs on your site” with links to our internal search results I want to use the meta tag to keep the link juice flowing. Do you recommend using the robots.txt instead? If yes, why? Should we just go dark on the internal search pages, or how shall we proceed with blocking them? I’m looking forward to your answer! Edit: Google have currently indexed several million of our internal search pages.
Intermediate & Advanced SEO | HrThomsen0
-
Disallowed Pages Still Showing Up in Google Index. What do we do?
We recently disallowed a wide variety of pages for www.udemy.com which we do not want Google indexing (e.g., /tags or /lectures). Basically we don't want to spread our link juice around to all these pages that are never going to rank. We want to keep it focused on our core pages, which are for our courses. We've added them as disallows in robots.txt, but after 2-3 weeks Google is still showing them in its index. When we look up "site:udemy.com", for example, Google currently shows ~650,000 pages indexed... when really it should only be showing ~5,000 pages indexed. As another example, if you search for "site:udemy.com/tag", Google shows 129,000 results. We've definitely added "/tag" into our robots.txt properly, so this should not be happening... Google should be showing 0 results. Any ideas re: how we get Google to pay attention and re-index our site properly?
Intermediate & Advanced SEO | udemy0