Duplicate website pages indexed, rankings dropped: does Google check for duplicate domain association?
-
Hi all,
Our duplicate website, which we use for testing new optimisations, got indexed, and our rankings dropped. I'm not sure this is the exact reason, though, because it happened earlier too and I didn't see much of a drop then. I've also been told in the past that duplicate content hurts the duplicate website rather than the original, but I think that rule applies to third-party websites. If our own domain hosts exact duplicate content, will Google know we own both websites through other association signals, such as IP addresses, servers, etc., and figure out that the duplicate website is hosted by us? I wonder how Google treats duplicate content on third-party domains versus our own domains.
Thanks
-
Hi John,
Thanks for the response. I agree with you about using "rel=canonical", but there are too many pages to add these tags to manually. Is there another way to implement this?
Thanks
-
It's never a good idea to rely on the search engines "figuring out" confusing/contradictory signals from your website, vtmoz. While they claim they can do it in some cases, in reality, they very frequently get it wrong. (Though I've never seen any indication from Google that it makes any effort to find any connecting ownership signals between dupe sites when deciding whether to index a dupe. That would be way outside the scope of the crawler.)
So yes, it's entirely possible that the wholly duplicated website is causing ranking and authority issues. Assuming that by a site for "testing new optimisations" you mean a development website for testing new code before deploying to the live site, you'll need to get the dev site placed behind a password-protected login, or at the very least have noindex meta tags added to every page. Once those are in place, you can use Google Search Console to request removal of the site from the index to hopefully speed things up. It's possible that when you had the dupe site up before, it didn't get fully indexed, but this time it did, which would be why you're seeing more problems now. And having a dupe site in the index can definitely cause problems for the original site as well as the dupe.
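For anyone wanting to automate the check Paul describes, here is a rough sketch of deciding whether a fetched dev-site response would be kept out of the index. The header and meta-tag conventions are standard; the function name and the deliberately crude meta-tag regex are my own illustration, not anyone's production code:

```python
import re

# Crude pattern: assumes name= comes before content= inside the meta tag.
META_NOINDEX = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

def is_noindexed(status_code, headers, html):
    """Return True if a crawler should be unable to index this response."""
    if status_code in (401, 403):  # a login wall keeps crawlers out entirely
        return True
    # X-Robots-Tag response header, e.g. "noindex, nofollow"
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # <meta name="robots" content="noindex"> in the page itself
    return bool(META_NOINDEX.search(html))
```

Running something like this against every URL in the dev site's sitemap would confirm nothing crawlable slipped through before requesting removal in Search Console.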
There's no guarantee removing the dupe site from the index will immediately resolve all the problems you are having, but leaving the dupe site in place in the index is just too big a risk. You'll still have to look deeper into what else may be causing or exacerbating the ranking issues.
Paul
-
This can be partially resolved by setting up "rel=canonical" tags that point to the version you want Google to index. This is particularly helpful when duplicate content is live. It is basically a hint to the search engine that says "Look here!"
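On the follow-up about having too many pages to tag by hand: canonicals are normally emitted once from the shared page template rather than edited per page, but if the pages really are static HTML files, a bulk pass is scriptable. A minimal sketch, assuming a directory of .html files (the paths, domain, and function names are illustrative, not a specific tool):

```python
import os
import re

def add_canonical(html, canonical_url):
    """Insert a rel=canonical link just after <head>, if none is present."""
    if 'rel="canonical"' in html:
        return html  # don't double-tag
    tag = f'<link rel="canonical" href="{canonical_url}">'
    # Insert after the opening <head> tag (case-insensitive, attrs allowed).
    return re.sub(r"(<head[^>]*>)", r"\1" + tag, html, count=1, flags=re.IGNORECASE)

def tag_site(root_dir, base_url):
    """Walk a directory of .html files, adding canonicals that point at base_url."""
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.endswith(".html"):
                continue
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root_dir).replace(os.sep, "/")
            with open(path, encoding="utf-8") as f:
                html = f.read()
            with open(path, "w", encoding="utf-8") as f:
                f.write(add_canonical(html, f"{base_url}/{rel}"))
```

If the site runs on a CMS, a single edit to the shared `<head>` template achieves the same thing with far less risk than rewriting files on disk.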
Also, a better strategy for testing landing pages may be a tool called Google Optimize, which lets you run A/B and multivariate tests without a separate domain.
Hopefully this helps!
John
Related Questions
-
Does an increase in non-relevant backlinks drop page ranking?
Hi community, Let's say there is a page with 50 backlinks, where 40 are non-relevant and only 10 are relevant in terms of the content around the link. Will these non-relevant backlinks impact the ranking of the page by diluting the backlink profile? Thanks
Algorithm Updates | vtmoz
-
Is domain location a ranking factor? Backlinks & websites
If a website is trying to rank in the US and it has received many backlinks from domains hosted in other countries, how will that impact its ranking? Can a website hosted in one country rank well in another country? How much does the hosting location matter? For example, a domain hosted in Germany but trying to rank in the US?
Algorithm Updates | vtmoz
-
Google has indexed some of our old posts. What took so long, and will we lose rank for their brevity?
Hi, We just had a few of our old blog posts indexed by Google. These are short-form posts, and I want to make sure we're not going to get dinged by Google for their length. Can you advise? https://www.policygenius.com/blog/guaranteed-issue
Algorithm Updates | francoisdelame
-
Google indexing https sites by default now, where's the Moz blog about it!
Hello and good morning / happy Friday! Last night an article from, of all places, VentureBeat titled "Google Search starts indexing and letting users stream Android apps without matching web content" was sent to me, and as I read it I got a bit giddy, since we had just implemented a full sitewide HTTPS cert rather than a cart-only SSL. I quickly searched for other sources to see if this was indeed true, and the writing on the walls seems to indicate so: Google Webmaster Blog - http://googlewebmastercentral.blogspot.in/2015/12/indexing-https-pages-by-default.html http://www.searchenginejournal.com/google-to-prioritize-the-indexing-of-https-pages/147179/ http://www.tomshardware.com/news/google-indexing-https-by-default,30781.html https://hacked.com/google-will-begin-indexing-httpsencrypted-pages-default/ https://www.seroundtable.com/google-app-indexing-documentation-updated-21345.html I found it a bit ironic to read about this on mostly unsecured sites. I wanted to hear about the eight key rules Google will factor in when ranking/indexing HTTPS pages from now on, and see what you all felt about this. Google will now begin to index HTTPS equivalents of HTTP web pages, even when the former don't have any links to them. However, Google will only index an HTTPS URL if it meets these conditions:
1. It doesn't contain insecure dependencies.
2. It isn't blocked from crawling by robots.txt.
3. It doesn't redirect users to or through an insecure HTTP page.
4. It doesn't have a rel="canonical" link to the HTTP page.
5. It doesn't contain a noindex robots meta tag.
6. It doesn't have on-host outlinks to HTTP URLs.
7. The sitemap lists the HTTPS URL, or doesn't list the HTTP version of the URL.
8. The server has a valid TLS certificate.
One rule that confuses me a bit is: "It doesn't redirect users to or through an insecure HTTP page." Does this mean that if you just moved over to HTTPS from HTTP, your site won't pick up the HTTPS boost, since most sites in general have HTTP-to-HTTPS redirects? Thank you!
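Several of the listed conditions can be spot-checked directly in a page's HTML. A rough sketch of checking three of them (the regexes are deliberately crude, the function name is made up, and the host name is an assumption for illustration):

```python
import re

def https_index_blockers(html, host):
    """Return reasons this HTML might keep the HTTPS URL out of Google's index."""
    problems = []
    # Condition: no rel="canonical" pointing at the HTTP version.
    # (Crude: assumes rel= comes before href= in the link tag.)
    canonical = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html, re.I)
    if canonical and canonical.group(1).startswith("http://"):
        problems.append(f"canonical points at HTTP: {canonical.group(1)}")
    # Condition: no on-host outlinks to plain-HTTP URLs.
    for url in re.findall(r'<a\s[^>]*href=["\'](http://[^"\']+)', html, re.I):
        if host in url:
            problems.append(f"on-host HTTP outlink: {url}")
    # Condition: no noindex robots meta tag.
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]+noindex', html, re.I):
        problems.append("noindex meta tag")
    return problems
```

Insecure dependencies, robots.txt blocks, redirect chains, sitemap entries, and certificate validity would each need separate checks against the live server rather than the HTML alone.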
Algorithm Updates | Deacyde
-
18 years later, Page Rank 6 Drops to 0, All +1s disappear, Scrapers outrank us
18 years ago I put up our first website at http://oz.vc/6. Traffic grew and our forums reached hundreds of thousands of posts; our website had a PageRank of 6, our forums and other content areas ranked 5-6, and the rest usually 4-6. Panda 2.2 came along and whacked it. None of the measures recommended by SEO experts and the Matt Cutts videos made a dent, including some pretty severe measures that were supposed to make a difference. Bing and Yahoo traffic both grew since Panda 2.2, while only Google kept dropping every few updates without recovery. Several weeks ago Google delivered the ultimate whack: it seems every page other than the home page now has either a PR of 0 or isn't generating any PR at all. Every +1 disappeared off the site. Now three pages have +1s back, and the entire guide section (hundreds of articles) is still missing all +1s. I discovered two scrapers, one of which was copying all of our forum posts and ranking PR 2 for it (while we have a zero). They were taken down, but I still can't imagine how this result could happen. I am also going to have an RSS feed aggregator taken down that is ranking a 2, even though it knows we can't prevent it from taking and storing our WordPress feeds (we use them for areas on the site). How can Google give us a zero PageRank and give obvious scrapers PageRank? What should have been years' worth of awesome rich content and new features was wasted chasing Google ghosts. I've had two SEO people look at the site and neither could point to any major issue that would explain what we've seen, especially the latest PageRank death penalty. We haven't sold paid links. We have received no warnings from Google (nor should we have). The large "thin" area you may see in a directory was removed entirely from Google (and made no difference, despite being a drop in Google doing the "right" thing!). Most think we have been stuck for a very long time in a rare Google glitch. Would be interested in your insights.
Algorithm Updates | seoagnostic
-
How could Penguin kill my top ten rank and promote this garbage page to a #5 spot
Hey, Before Penguin, I had a #9 rank for the term "yoga poses". So, as many of us are doing, I started looking at my link profile... and yes, there were around 300 links from an old yoga news website (anchor: yoga poses) that led to the page on my site optimized for this term. The problem is they took the site down, but not properly, i.e., they generate a "not available" message for browsers, but underneath, I guess the bots can still index all the pages... so I guess Google was interpreting these links as coming from a cloaked site. I was able to get them to remove the links, and Webmaster Tools reports half of them gone now. What I don't get, though, is how Google can give this garbage page a #5 spot for a competitive term like "yoga poses"... Check out http://www.ebmyoga.com/beginyoga.html and compare it to my page, http://www.yogaclassplan.com/yoga-poses/. This page leads to high-quality, 100% unique yoga pose articles... in my mind we deliver so much more value than the site with the #5 rank. I don't understand. Any insight? Thanks,
Algorithm Updates | biomat
-
Trying to figure out why one of my popular pages was de-indexed from Google.
I wanted to share this with everyone for two reasons: 1. to try to figure out why this happened, and 2. to let everyone be aware of it so you can check some of your own pages if needed. Someone on Facebook asked me a question that I knew I had answered in a post. I couldn't remember what the URL was, so I googled some of the terms I knew were on the page, and the page didn't show up. I did some more searches and found that the entire page was missing from Google. This page has a good number of shares, comments, Facebook likes, etc. (i.e., social signals), and there are certainly no black/gray-hat techniques being used on my site. The page received a decent amount of organic traffic as well. I'm not sure when it was de-indexed, and wouldn't even have known if I hadn't tried to search for it via Google, which makes me concerned that perhaps other pages are being de-indexed too. It also concerns me that I may have done something wrong (without knowing) and perhaps other pages on my site are going to be penalized as well. Does anyone have any idea why this page would be de-indexed? It sure seems like all the signals are there to show Google this page is unique and valuable. Interested to hear some of your thoughts on this. Thanks
Algorithm Updates | NoahsDad
-
Indexing well in Google but not in Yahoo/Bing - WHY?
I've been using SEOmoz to analyze and crawl a client's website for a while now. One thing I've noticed is that the website is indexing well with Google: a few thousand pages are indexed. However, when it comes to Yahoo and Bing, the website has only 100+ pages indexed. We've submitted updated sitemaps to Google and Bing and have been fixing broken links and on-page SEO. Content is also good. Here's the website: www.imaginet.com.ph. Any suggestions/recommendations are highly appreciated. Thank you!
Algorithm Updates | TheNorthernOffice79