Penguin Recovery Problem - Weird
-
I had an old URL and the link profile of this URL wasn't good - I had been using article syndication and Penguin threw me to the wolves.
I decided to start over with a new URL and build a new, natural link profile. I specifically did NOT do a 301 redirect to the new URL, and did not make any request to Google to transfer the domain, as I didn't want the old site being associated with the new one. To redirect our old users, I put a link on the old URL's index page (nofollowed) saying that we have moved.
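For anyone setting up the same thing, the nofollowed "we have moved" link is just a plain anchor with a `rel="nofollow"` attribute. A minimal sketch (the domain below is a placeholder, not my actual site):

```html
<!-- On the old site's index page; the new domain here is a placeholder -->
<p>We have moved! You can now find us at:</p>
<a href="https://www.new-example-site.com/" rel="nofollow">www.new-example-site.com</a>
```

The `rel="nofollow"` tells Google not to pass any link equity through the link, which was the whole point of avoiding a 301.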
I was very surprised to find that in GWT all the links of the old URL have now been associated with the new URL... why is that? I started over to have a clean, natural profile and follow Google's guidelines. Has anyone heard of this before? All I can guess is that Google itself "decided" to do its own pseudo-301, since the site was the same, page for page. This has major implications for anyone attempting a "clean start" to recover from Penguin.
-
Nakul -
Re: "other than seeing those links in GWT, are there any other red flags that you are seeing in terms of not ranking, any penalty messages, unnatural links warning on the new?"
No - no red flags. My "new" site has only been live for about a week and has already reached page 2 or 3 of the Google SERPs for my main keywords.
But since I know those backlinks caused my old site to go from #2-3 on page 1 to past page 20, I freaked when I saw them following me.
"or were there any messages on the old domain when you got penalized?"
No - my rankings crashed on April 24 and never recovered. Even though I removed all the pages that had been syndicated and filed a reconsideration request, Google said there was no manual penalty.
-
Thanks for your thoughts, Marie.
**And I think that sometimes Google gets it wrong as to who is the more authoritative.**
I am confident that they are very often wrong.
-
I personally think it is just a WMT glitch. When the link is shown in WMT it says, "Via this intermediate link..." and the intermediate link is the original page.
However, again according to Dejan SEO, if you copy a site's page and your site has a higher PageRank, you can actually outrank the original page. Here is the article on how they did this (with permission) for Rand Fishkin's blog and other pages:
http://dejanseo.com.au/hijacked/
That makes me think that it's possible that link juice is granted to the more authoritative of the two sites. And I think that sometimes Google gets it wrong as to who is the more authoritative.
While I still think that these links would not cause Penguin to affect a site, I wouldn't chance it!
-
**Basically, when Google sees a duplicate of a page, it will assign the page's links to that site.**
Oh.... do you think somebody could grab an article from your website, post it on theirs and kidnap your linkjuice?
-
Those pages can still be in the cache. That was my theory as to what was going on with the previous site. When we used the URL removal tool (not the disavow tool, by the way) to remove them from the cache, this seemed to solve the problem.
-
Considering what's done is done and that your old domain is penalized, can you possibly do/try any of the following?
1. Ignore the fact that those links are appearing in your backlink profile for the new domain. See whether this new website works/ranks.
2. If it doesn't (at all), can you possibly disavow those "article marketing" links for the old domain and do nothing at all for the new domain (since those links are not actually pointing to your new domain)?
Coming back to point 1, what I'd like to ask is: other than seeing those links in GWT, are there any other red flags that you are seeing in terms of not ranking, any penalty messages or unnatural links warnings on the new domain, or were there any messages on the old domain when you got penalized?
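If you do try the disavow route for the old domain, the file Google accepts is just a plain-text list, one entry per line, with `domain:` prefixes for whole domains and `#` for comments. A minimal sketch (the domains below are made-up examples, not real sites):

```text
# Article-syndication links we could not get removed (example domains)
domain:spammy-article-directory.example
domain:another-syndication-site.example
# Individual URLs can also be listed on their own
http://low-quality-blog.example/syndicated-post.html
```

You upload the file in Webmaster Tools for the penalized domain only; it has no effect on any other domain you own.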
-
Thanks Marie - I will try your suggestion.
I did a search using the regular operators to see if my old site was still indexed, and Google returned a "we can't find it on this server - that's all we know" Sergeant Schultz response, but knowing Google, that does not necessarily mean the pages are not still in their index.
-
I had this happen with a client I worked with. The client's previous site had a severe Penguin issue so he decided to start over. We did everything properly and did not do any redirects from the old site to the new. But we were surprised when suddenly the WMT console for the new site was showing all of the links that went to the old site!
What happened? It's complicated but it has to do with something that is described here by Dejan SEO: http://dejanseo.com.au/mind-blowing-hack/
Basically, when Google sees a duplicate of a page, it will assign the page's links to that site.
What I don't know is whether those links are carrying any link juice and also any penalty with them.
What we did was go back into the WMT console for the old site and use the URL removal tool to remove every single URL from the index AND the cache for the old site.
It took about 2 weeks for the links to disappear from WMT for the new site.