Disavow Issues
-
Hi
We have a client who was hit by Penguin about 18 months ago.
We disavowed all of the bad links about 10 months ago; however, this has not resulted in any uplift in traffic or rankings.
The client is asking me whether it would be better to dump the domain and move the website to a fresh domain.
Can you share your thoughts or experience on this, please?
Thanks.
-
Just wanted to clarify (for the sake of others who may read this post) that the question was about Penguin, whereas I think in your situation you're dealing with manual penalties. With Penguin, there is no reconsideration request. You've got to clean up as best you can and then hope that things improve when Google refreshes the Penguin algorithm.
It's still up for debate whether removing links (as opposed to disavowing them) is important for Penguin. My current advice is that if a link is easy to remove, remove it; otherwise, I disavow. While you're right that it's important to show Google your link-removal efforts for a manual penalty, no one is going to look at your work for an algorithmic issue.
I asked John Mueller in a hangout once whether disavowing was as good as removing for Penguin, and he said, "essentially yes". However, because potential problems can come up with the disavow tool (such as improper formatting, or Google taking a long time to recrawl the links before the disavow takes effect), removing a link is not a bad thing to do if you can manage it.
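On the formatting point, here is a minimal sketch of what a disavow file is generally expected to look like: a plain-text file with one URL or one domain: entry per line, where lines beginning with # are treated as comments. The domains below are hypothetical placeholders, not real sites.

    # Hypothetical disavow file example
    # Disavow a single linking page:
    http://spammy-directory.example.com/listing/12345
    # Disavow every link from an entire domain:
    domain:spammy-directory.example.com
    domain:paid-links.example.net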
-
Hi Paul,
I realise it's been a couple of weeks since this was submitted, but I wanted to follow up. At my former agency, we went through a few reconsideration processes for new clients. We managed to be successful with all of them, but some took quite a long time (August to February being the longest).
We have found that disavowing alone is not nearly enough to make a difference - it is far preferable for the links to be removed. Unlike Claudio (below), we have had a success rate far higher than 5%, but it all depends on where the links come from. Sometimes it's hard to even find a live email address for the webmaster, and some people want payment to remove links (worth doing if the payment is not too high). We crafted email templates and, if we did not get a response to the first link-removal request within two weeks, we always followed up with another specifically crafted template.
It's true that if you cannot remove links, it is still worthwhile demonstrating to Google that you attempted to do so, with email screenshots or at least a list of the sites you contacted. They want to see effort. They want to see that you removed, or attempted to remove, the vast majority of the bad links. It's time-consuming and tedious, but it's worth it if you get the penalty removed.
As I said, the longest process we went through was over six months, but the site in question had a TERRIBLE backlink profile that was the result of years of abuse by bad link builders. We're talking removing thousands of links. However, it came through - the penalty was removed and the client's rankings are on the rise.
I hope this helps. The short version is: remove, remove, remove. A penalty won't stay in place if there are no more bad links holding the site back, and those links weren't helping it rank anyway.
If you'd like some advice on how to decide which links to remove and which to keep, please let me know. In the meantime, check out this post from my former colleague Brandon at Ayima. It's a good resource for link analysis.
Cheers,
Jane
-
Does the site have a good base of truly natural links? There have been very few reported cases of Penguin recovery, but the ones that I have seen recover are sites that still had some excellent links left once the bad ones were cleaned up.
-
Did you have a manual penalty? Did you get it revoked? Or did you assume you had a Penguin issue and were proactive about it to avoid a manual penalty?
-
Procedure for recovery from a link penalty (manual or algorithmic):
1. Collect inbound links from Google Webmaster Tools + Moz Link Explorer + Majestic.
2. Include all of the referring domains in an Excel worksheet.
3. Contact site owners asking for link removal (usually around a 5% success rate, but the effort counts with Google).
4. Wait several weeks for the links to be removed.
5. Compile a disavow file (a rough sketch of building it from the exported link lists is shown after this list) and upload it to Google: https://www.google.com/webmasters/tools/disavow-links-main?pli=1
6. Wait three to six weeks, then start a link-building campaign: begin with a few links per week and increase the pace if you can (only natural links coming from authoritative sites related to your niche).
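As mentioned in step 5, here is a minimal sketch of merging the link exports, de-duplicating referring domains, and writing a domain-level disavow file. The file names and the "URL" column name are assumptions; adjust them to match whatever your exports actually contain, and review the output by hand before uploading anything.

    # Minimal sketch: merge link exports, de-duplicate referring domains,
    # and write a domain-level disavow file.
    # Assumption: each export is a CSV with a "URL" column of linking URLs;
    # the file names below are placeholders.
    import csv
    from urllib.parse import urlparse

    export_files = ["gwt_links.csv", "moz_links.csv", "majestic_links.csv"]

    domains = set()
    for path in export_files:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                host = urlparse(row["URL"]).netloc.lower()
                if host.startswith("www."):
                    host = host[4:]
                if host:
                    domains.add(host)

    # Write one "domain:" entry per line; review the list by hand before
    # uploading, since only domains you judge to be bad should stay in it.
    with open("disavow.txt", "w", encoding="utf-8") as out:
        out.write("# Disavow file generated from merged link exports\n")
        for host in sorted(domains):
            out.write("domain:" + host + "\n")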
Recovery from content problems:
1. Look for repetitive titles and descriptions, using Google Webmaster Tools and Moz.
2. Look for pages with similar or identical content and fix them.
3. Look for pages with fewer than 200 words of content (a rough sketch of flagging these is shown after this list) and either add content or simply remove them (404).
4. Add fresh, original content.
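As a rough illustration of steps 1 and 3, here is a minimal sketch that fetches a list of URLs, flags pages with fewer than 200 words, and reports duplicate titles. It assumes the requests and beautifulsoup4 libraries are installed, and the URL list is a placeholder for your own crawl.

    # Minimal sketch: flag thin pages (< 200 words) and duplicate <title> tags.
    # The URLs below are placeholders; swap in your own list of pages.
    from collections import defaultdict

    import requests
    from bs4 import BeautifulSoup

    urls = ["https://www.example.com/", "https://www.example.com/about"]

    titles = defaultdict(list)
    for url in urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        title = soup.title.get_text(strip=True) if soup.title else "(no title)"
        titles[title].append(url)
        word_count = len(soup.get_text(" ", strip=True).split())
        if word_count < 200:
            print("Thin page (%d words): %s" % (word_count, url))

    # Any title shared by more than one URL is a duplicate-title candidate.
    for title, pages in titles.items():
        if len(pages) > 1:
            print("Duplicate title '%s' on: %s" % (title, ", ".join(pages)))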
Google will take your effort into account and your rankings will improve step by step.
I hope this helps.
Claudio