What can I do if my reconsideration request is rejected?
-
Last week I received an unnatural link warning from Google. Sad times.
I followed the guidelines and reviewed all my inbound links from the last 3 months. All 5,000 of them! Along with several genuine ones from trusted sites like the BBC, Guardian and Telegraph, there was a load of spam. About 2,800 of them were junk. As we don't employ an SEO agency and don't buy links (we don't even buy AdWords!), I know that all of this spam is generated by spam bots and site scrapers copying our content.
As the bad links have not been created by us, and there are 2,800 of them, I cannot hope to get them all removed. There are no 'contact us' pages on these Russian spam directories and Indian scraper sites. And as for the 'adult bookmarking website' that has linked to us over 1,000 times, well, I couldn't even contact that site in company time if I wanted to! So I did my manual review all day, made a list of the 2,800 bad links and disavowed them.
I followed this up with a reconsideration request to tell Google what I'd done, but a week later it was rejected: "We've reviewed your site and we still see links to your site that violate our quality guidelines." As these links are beyond my control and I've already tried to disavow them, is there anything more to be done?
Cheers
Steve
-
Tom has given you good advice. I'll put in my 2 cents' worth as well.
There are 3 main reasons for a site to fail at reconsideration:
1. Not enough links were assessed by the site owner to be unnatural.
2. Not enough effort was put into removing links and documenting that to Google.
3. Improper use of the disavow tool.
In most cases, #1 is the main cause. Almost every time I do a reconsideration request, my client is surprised at what kinds of links are considered unnatural. From what I have seen, Google is usually pretty good at figuring out whether you have been manually trying to manipulate the SERPs or whether the links are just automated spam-bot links.
Here are a few things to consider:
Are you being COMPLETELY honest with yourself about the spammy links you are seeing? How did Russian and porn sites end up linking to you? Most sites don't just get those by accident. Sometimes this can happen when sites use linkbuilding companies that use automated methods to build links. Even still, do all you can to address those links, and then for the ones that you can't get removed, document your efforts, show Google and then disavow them.
Even if these are foreign-language sites, many of them will have WHOIS contact emails you can reach out to (see the sketch at the end of this answer for one quick way to pull those out in bulk).
Are you ABSOLUTELY sure that your good links are truly natural? Just because they come from news sources is not a good enough reason. Have you read all the Interflora stuff recently? They had a pile of links from advertorials (amongst other things) that now need to be cleaned up.
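On the WHOIS point above: if you need to chase contact emails for a few hundred domains, a quick script can save a lot of manual lookups. This is only a rough sketch, assuming the standard whois command-line client is installed and you have a plain-text file of offending domains (the file name and regex are illustrative, not from Steve's setup):

import re
import subprocess

# Hypothetical input file: one offending domain per line.
with open("bad_domains.txt") as f:
    domains = [line.strip() for line in f if line.strip()]

email_pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

for domain in domains:
    # Query the system whois client; many registries rate-limit, so run it slowly.
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    emails = sorted(set(email_pattern.findall(result.stdout)))
    print(domain, ", ".join(emails) if emails else "no contact email found")

Whatever this turns up still needs a human sanity check before you start emailing, and some WHOIS records are privacy-protected and won't show anything useful.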
-
Hi Steve
If Google is saying there are still a few more links, then it might be an idea to manually review a few others that you haven't disavowed. I find the LinkDetox tool very useful for this. It's free with a tweet and will tell you if a link from a site is toxic (the site is deindexed) or if it's suspicious (and why it's suspicious). You still need to use your own judgement on these, but it might help you to find the extra links you're talking about.
However, there is a chance you have already disavowed every bad link and still got the rejection. In this case, I'd keep trying, but make your reconsideration request more detailed. Create an Excel sheet and list the bad URLs and/or domains, giving a reason why you think each is a bad link. Then provide information on how you found their contact details; if there is no 'contact us' page, check the domain's WHOIS record for a contact email. After that, say when you contacted them (give a sample of the letter you sent too) and whether they replied, along with a follow-up date if you got silence. If there are no details in the WHOIS record, explicitly mention that there are no contact details and that you have therefore proceeded straight to disavowing.
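As a rough illustration of that sheet (the columns, domains and dates here are invented placeholders, not Steve's data), something as simple as this in CSV form works:

URL or domain,Why it's a bad link,Contact found,Contacted on,Reply,Action taken
spam-directory.example,Deindexed Russian directory with sitewide links,none in WHOIS,n/a,n/a,disavowed whole domain
scraper-site.example/copy-of-our-article,Scraped copy of our content,admin@scraper-site.example (from WHOIS),2013-03-01 and 2013-03-15,no reply,disavowed whole domain

Keeping a row per domain rather than per URL also keeps the sheet manageable when one site links to you thousands of times.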
Then list the URLs you've disavowed (upload the .txt file along with your reconsideration request). You've now told Google that you've found the bad links, why you think they're bad (also include how you discovered them), that you've contacted the webmasters on numerous occasions and that, where no removal was made, you've disavowed as a last resort. This is a very thorough process and uses the disavow tool in the way Google wants us to: as a last resort for an unresponsive or anonymous webmaster.
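For reference, the disavow file itself is just plain text: one URL or domain per line, lines starting with # are treated as comments, and a domain: prefix disavows the whole domain rather than a single page. A minimal sketch with placeholder domains:

# Russian spam directory - no contact details in WHOIS, disavowing the whole domain
domain:spam-directory.example
# Scraper site - emailed twice, no reply
domain:scraper-site.example
# One bad page on an otherwise fine site
http://www.example.com/forum/spammy-thread.html

Disavowing at the domain level is usually the sensible choice for the scraper and bookmarking sites Steve describes, since they tend to link from hundreds of URLs.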
Please forgive me if you've already done all this and it seems like repetition. I only mention it because I've found it's best to be as thorough as possible with Google in these situations. Remember, a reconsideration request is manual and if they see that you've gone through all this effort to be reinstated, you've got a better chance of being approved.
Keep trying, mate. It can be disheartening, but if you think it's worth the time and effort, then keep going for it. I would bear in mind the alternatives, however, such as starting fresh on a new domain. If you find yourself going round the bend with endless reconsiderations, sometimes your time, effort and expertise can be better put elsewhere.
All the best!
Related Questions
-
How can I stop a tracking link from being indexed while still passing link equity?
I have a marketing campaign landing page and it uses a tracking URL to track clicks. The tracking links look something like this: http://this-is-the-origin-url.com/clkn/http/destination-url.com/ The problem is that Google is indexing these links as pages in the SERPs. Of course, when they get indexed and then clicked, they show a 400 error because the /clkn/ link doesn't represent an actual page with content on it. The tracking link is set up to instantly 301 redirect to http://destination-url.com. Right now my dev team has blocked these links from crawlers by adding Disallow: /clkn/ in the robots.txt file; however, this blocks the flow of link equity to the destination page. How can I stop these links from being indexed without blocking the flow of link equity to the destination URL?
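(A common pattern for this situation, sketched here with Apache mod_rewrite purely as an illustration and not necessarily the asker's actual setup: drop the Disallow: /clkn/ line from robots.txt so Googlebot can crawl the tracking URLs, and let the server-side 301 both pass the equity and flush the /clkn/ URLs out of the index over time.)

# Hypothetical .htaccess sketch for the /clkn/http/... pattern described above
RewriteEngine On
RewriteRule ^clkn/http/(.+)$ http://$1 [R=301,L]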
Technical SEO | UnbounceVan0
-
"5XX (Server Error)" - How can I fix this?
Hey Mozers! Moz Crawl tells me I am having an issue with my WordPress category - it is returning a 5XX error and I'm not sure why. Can anyone help me determine the issue? Crawl Issues and Notices for: http://www.refusedcarfinance.com/news/category/news We found 1 crawler issue(s) for this page. High Priority Issues: 1 5XX (Server Error). 5XX errors (e.g., a 503 Service Unavailable error) are shown when a valid request was made by the client, but the server failed to complete the request. This can indicate a problem with the server, and should be investigated and fixed.
Technical SEO | RocketStats0
-
Why Can't Googlebot Fetch Its Own Map on Our Site?
I created a custom map using Google Maps creator and embedded it on our site. However, when I ran the fetch and render through Search Console, it said it was blocked by our robots.txt file. I read in the Search Console Help section that: "For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot." I did not set up our robots.txt file. However, I can't imagine it would be set up to block Google from crawling a map. I will look into that, but before I go messing with it (since I'm not familiar with it), does Google automatically block its maps from its own Googlebot? Has anyone encountered this before? Here is what the robots.txt file says in Search Console:
User-agent: *
Allow: /maps/api/js?
Allow: /maps/api/js/DirectionsService.Route
Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
Allow: /maps/api/js/ElevationService.GetElevationForLine
Allow: /maps/api/js/GeocodeService.Search
Allow: /maps/api/js/KmlOverlayService.GetFeature
Allow: /maps/api/js/KmlOverlayService.GetOverlays
Allow: /maps/api/js/LayersService.GetFeature
Disallow: /
Any assistance would be greatly appreciated. Thanks, Ruben
Technical SEO | KempRugeLawGroup1
-
Can I use a 410'd page again at a later time?
I have old pages on my site that I want to 410 so they are totally removed, but later down the road if I want to utilize that URL again, can I just remove the 410 error code and put new content on that page and have it indexed again?
Technical SEO | WebServiceConsulting.com0
-
Can I mark up breadcrumbs without showing them? (responsive design)
I am working on a site that has a responsive design. We use faceted search for the desktop version but implemented a style of breadcrumbs for the mobile version, as sidebars take up too much screen real estate. On the desktop design we are putting a display:none in front of the breadcrumbs. If we mark up those breadcrumbs and they are behind a display:none, can we still get the rich snippets? Will Google see this as cloaking? As a follow-up, is there a way to mark up breadcrumbs in the <head> or somewhere else that is constant?
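(One option worth testing here, shown purely as an illustration with placeholder URLs and not as this site's actual markup: express the breadcrumb trail as JSON-LD, which lives in a script tag in the head or body and is not tied to any visible element, so the display:none question doesn't arise.)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Widgets", "item": "https://www.example.com/widgets/" },
    { "@type": "ListItem", "position": 3, "name": "Blue Widgets", "item": "https://www.example.com/widgets/blue/" }
  ]
}
</script>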
Technical SEO | MarloSchneider0
Can dynamically translated pages hurt a site?
Hi all... looking for some insight, please. I have a site we have worked very hard on to get ranked well, and it is doing well in search. The site has about 1,000 pages and climbing, and about 50 of those pages are translated pages - static pages with unique URLs. I have had no problems here with duplicate content and that sort of thing, and all pages were manually translated, so no translation issues. We have been looking at software that can dynamically translate the complete site into a handful of languages, let's say about 5. My problem here is that these pages get produced dynamically, and I have concerns that Google will take issue with this, as well as with the huge sudden influx of new URLs, as now we could be looking at an increase of 5,000 new URLs (which usually triggers an alarm). My feeling is that it could be risking the stability of the site that we have worked so hard for, and maybe we should just stick with the already translated static pages. I am sure the process could be fine, but I fear a manual inspection and a slap on the wrist for having dynamically created content, and also just the risk of a review trigger period. These days it is hard to know what could get you in "trouble", and my gut says keep it simple, leave it as is, and don't shake it up. Am I being overly concerned? Would love to hear from others who have tried similar changes, and also those who have not due to similar "fear". Thanks
Technical SEO | nomad-2023230
Is there such thing as a good text/code ratio? Can it effect SERPs?
As it says on the tin: is there such a thing as a good text/code ratio? And can it affect SERPs? I'm currently looking at a 20% ratio, whereas some competitors are closer to 40%+. Best regards,
Sam.
Technical SEO | ARMofficial
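(If you want to measure the ratio yourself rather than rely on a tool, the usual rough calculation is visible text length divided by total HTML length. A quick sketch with a placeholder URL, assuming the requests and beautifulsoup4 packages are installed:)

import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.example.com/").text
soup = BeautifulSoup(html, "html.parser")
# Strip script and style blocks so only visible text is counted.
for tag in soup(["script", "style"]):
    tag.decompose()
text = soup.get_text(separator=" ", strip=True)
print(f"text/code ratio: {len(text) / len(html):.1%}")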
On a dedicated server with multiple IP addresses, how can one address group be slow/time out and all other IP addresses OK?
We utilize a dedicated server to host roughly 60 sites. The server is with a company that utilizes a lady who drives race cars... About 4 months ago we realized we had a group of sites down, thanks to monitoring alerts, and checked it out. All were on the same IP address, and the sites on the other IP addresses were still up and functioning well. When we contacted support, at first we were stonewalled, but eventually they said there was a problem and it was resolved within about 2 hours. Up until recently we had no problems. As part of our ongoing SEO we check page load speed for our clients. A few days ago a client whose site is hosted by the same company was running very slow (about 8 seconds to load without cache). We ran every check we could and could not find a reason on our end. The client called the host and was told they needed to be on some other type of server (with the host) at a fee increase of roughly $10 per month. Yesterday, we noticed one group of sites on our server was down and, again, it was one IP address with about 8 sites on it. On chat with support, they kept saying it was our ISP. (We speed tested on multiple computers and were getting 22 Mbps down and 9 Mbps up, +/- 2 Mbps.) We ran a trace on the IP address and it went through without a problem on three occasions over about ten minutes. After about 30 minutes the sites were back up. Here's the twist: we had a couple of people in the building who were on other ISPs try, and the sites came up and loaded on their machines. Does anyone have any idea as to what the issue is?
Technical SEO | RobertFisher0