Our Robots.txt and Reconsideration Request Journey and Success
-
We have asked a few questions related to this process on Moz and wanted to give a breakdown of our journey as it will likely be helpful to others!
A couple of months ago, we updated our robots.txt file to block several pages that we did not want indexed. At the time, we weren't checking Webmaster Tools (WMT) as regularly as we should have been, and a few weeks later we found that one of the entries was apparently matching a dynamic URL, which led to over 950,000 of our pages being blocked according to WMT. Which entry was causing this is still a mystery, but we quickly removed all of them.
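For anyone unfamiliar with how a single entry can do that much damage: robots.txt rules are prefix matches, so one broad pattern can catch an entire dynamic URL space. A hypothetical sketch (our actual entry is still a mystery, and these paths are invented purely for illustration):

User-agent: *
# Intended to block a single unwanted page...
Disallow: /search
# ...but as a prefix match it also blocks every URL that begins with /search,
# e.g. /search?q=widgets, /search/results/page/2, and so on.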
From our research, most people say that things normalize in a few weeks, so we waited. A few weeks passed and things did not normalize. We searched, we asked, and found that the number of "blocked" pages in WMT, which had increased at a rate of a few hundred thousand a week, was now decreasing at a rate of only a thousand a week. At that rate it would have taken years before the pages were unblocked.
This did not change. Two months later, we were still at 840,000 blocked pages.
We posted on the Google Webmaster Forum and one of the mods there said that it would just take a long time to normalize. Very frustrating indeed considering how quickly the pages had been blocked.
We found a few places on the interwebs suggesting that if you have an issue or mistake with robots.txt, you can submit a reconsideration request. This seemed to be our only hope, so we put together a detailed reconsideration request asking for help with our blocked-pages issue.
A few days later, to our horror, we did not get a message offering help with our robots.txt problem. Instead, we received a message saying that we had received a penalty for inbound links that violated Google's terms of use. Major backfire. We had used an SEO company years ago that posted a hundred or so blog posts for us. To our knowledge, those links didn't even exist anymore. They did....
So, we signed up for an account with removeem.com. We quickly found many of the links posted by the SEO firm, as they were easily recognizable via the anchor text. We began the process of using Removeem to contact the owners of the blogs. To our surprise, we got a number of removals right away! Others we had to contact a second time, and many did not respond at all. For those we could not find an email address for, we tried posting comments on the blog.
Once we felt we had removed as many as possible, we added the rest to a disavow list and uploaded it using the disavow tool in WMT. Then we waited...
A few days later, we already had a response: DENIED. In our request, we had specifically asked that if it were denied, Google provide some example links. When they denied our request, they sent us an email that included a sample link. It was an interesting example: we actually already had this blog in Removeem. The issue was that our version was a custom domain name, i.e. www.domainname.com, while the version Google had was a WordPress subdomain, i.e. www.subdomain.wordpress.com.
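For anyone who hasn't built one, the disavow file is just a plain-text list of individual URLs and domain: entries, one per line, uploaded through the disavow tool in WMT. A minimal sketch with invented names; the lesson from Google's sample link above is to cover every hostname variant the same blog might live on:

# Hypothetical disavow entries, for illustration only
# Whole domains:
domain:spammy-article-directory.com
# The same blog under both its custom domain and its hosted subdomain:
domain:exampleblog.com
domain:exampleblog.wordpress.com
# Individual URLs can also be listed on their own lines:
http://www.exampleblog.com/guest-post-with-paid-link/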
So, we went back to the drawing board. This time we signed up for Majestic SEO and tied it in with Removeem, which surfaced a few more links. We also had records from the old SEO company that we were able to go through to locate a number of additional links. We repeated the previous process, contacting site owners and keeping track of our progress. We also went through the "sample links" in WMT as best we could (we have a lot of them) to try to pinpoint any other potential offenders.
We removed what we could and, again, disavowed the rest. A few days later, we had a message in WMT: DENIED AGAIN! This time it was very discouraging, as it just didn't seem there were any more links to remove. The difference this time was that there was NOT an email from Google, only a message in WMT. So, while we didn't know if we would receive a response, we replied to the original email asking for more example links so we could better understand what the issue was.
Several days passed, and then we received an email back saying that THE PENALTY HAD BEEN LIFTED! This was of course very good news, and it appeared that our email to Google had been reviewed and received well.
So, the final hurdle was the reason we had originally contacted Google: our robots.txt issue. We had not received any information from Google related to the robots.txt problem we originally filed the reconsideration request for, and we didn't know if it had simply been ignored or if there was something that could be done about it. So, as a last-ditch effort, we responded to the email once again and requested help with the robots.txt issue, as we had before.
The weekend passed and on Monday we checked WMT again. The number of blocked pages had dropped over the weekend from 840,000 to 440,000! Success! We are still waiting and hoping that number will continue downward back to zero.
So, some thoughts:
1. Was our site manually penalized from the beginning, yet without a message in WMT? Or, when we filed the reconsideration request, did the reviewer take a closer look at our site, see the old paid links and add the penalty at that time? If the latter is the case then...
2. Did our reconsideration request backfire? Or, was it ultimately for the best?
3. When asking for reconsideration, make your requests known. If you want example links, ask for them. It never hurts to ask! If you want to be connected with Google via email, ask to be!
4. If you receive an email from Google, don't be afraid to respond to it. I wouldn't overdo this or spam them. Keep it to the bare minimum and don't pester them, but if you have something pertinent to say that you have not already said, then don't be afraid to ask.
Hopefully our journey will help others who have similar issues. Feel free to ask any further questions.
Thanks for reading!
TheCraig
-
Considering this thread has only 36 views, I think you should go ahead and post on YouMoz, as I think it deserves more exposure (maybe add Pieter's point and your warning not to blindly follow Removeem).
-
Thanks Paddy! Yeah, we debated whether to post here or on YouMoz... You are probably right.
Thanks for reading!
-
Indeed, Pieter! Additionally, Removeem showed us a LOT of links that "needed" to be removed but didn't actually need to be removed. It's important to know your backlinks if at all possible and to judge for yourself which ones are the spammy ones. If we had gone on what Removeem told us to remove, we would have removed WAY more links than we needed to.
Thanks for the response!
-
Another thing: don't trust a single tool when you have a lot of bad links. removeem.com is only one place where you can find your links.
-
Hopefully I'll never be in the situation you found yourselves in, but it was a great read, and now I know what to expect if I ever am (touch wood).
This might have been better as a YouMoz post than a forum post, btw.
Related Questions
-
Robots.txt is case sensitive? Pls suggest
Intermediate & Advanced SEO | Rahim119
Hi, I have seen a few URLs with duplicate titles in the HTML Improvements report. Can I disable one of the below URLs in the robots.txt?
/store/Solar-Home-UPS-1KV-System/75652
/store/solar-home-ups-1kv-system/75652
If I disable the first one with Disallow: /store/Solar-Home-UPS-1KV-System/75652, will the search engines still scan /store/solar-home-ups-1kv-system/75652? I'm a little confused about case sensitivity. Please suggest whether to go ahead or not in the robots.txt.
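Worth noting on this one: robots.txt path matching is case-sensitive, so a rule written in one casing does not cover other casings of the same URL. A quick sketch using the paths from the question:

User-agent: *
# Blocks only this exact casing:
Disallow: /store/Solar-Home-UPS-1KV-System/75652
# The lowercase /store/solar-home-ups-1kv-system/75652 is NOT covered by the
# rule above; crawlers remain free to fetch it unless it gets its own line.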
-
Do you add the 404 page to the robots file or just add a noindex tag?
Intermediate & Advanced SEO | Rubix
Hi, I've gotten different opinions on this, so I wanted to double-check what your take is. We've got a /404.html page, and I was wondering if you would add this page to robots.txt so it wouldn't be indexed, or would you just add a noindex tag? What would be the best approach? Thanks!
-
Blocking out specific URLs with robots.txt
Intermediate & Advanced SEO | Whebb
I've been trying to block out a few URLs using robots.txt, but I can't seem to get the specific one I'm trying to block. Here is an example: I'm trying to block something.com/cats but not something.com/cats-and-dogs. It seems that if I set up my robots.txt with Disallow: /cats, it blocks both URLs. When I crawl the site with Screaming Frog, that Disallow causes both URLs to be blocked. How can I set up my robots.txt to specifically block /cats? I thought it was by doing it the way I was, but that doesn't seem to solve it. Any help is much appreciated, thanks in advance.
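The behavior described here is prefix matching again; major crawlers such as Googlebot and Bingbot also support a $ end-of-URL anchor, which is one common way to narrow a rule like this. A sketch, assuming the goal is to block exactly /cats:

User-agent: *
# $ anchors the match to the end of the URL, so /cats-and-dogs is unaffected:
Disallow: /cats$
# If /cats has sub-pages that should also be blocked, add a trailing-slash rule:
Disallow: /cats/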
-
Robots.txt help
Intermediate & Advanced SEO | Studio33
Hi, we have a blog that is killing our SEO. We need to disallow:
Disallow: /Blog/?tag*
Disallow: /Blog/?page*
Disallow: /Blog/category/*
Disallow: /Blog/author/*
Disallow: /Blog/archive/*
Disallow: /Blog/Account/.
Disallow: /Blog/search*
Disallow: /Blog/search.aspx
Disallow: /Blog/error404.aspx
Disallow: /Blog/archive*
Disallow: /Blog/archive.aspx
Disallow: /Blog/sitemap.axd
Disallow: /Blog/post.aspx
But allow everything below /Blog/Post. The disallow list seems to keep growing as we find issues, so rather than adding every area to disallow into our robots.txt, is there a way to easily just say "allow /Blog/Post and ignore the rest"? How do we do that in robots.txt? Thanks
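On the "allow just /Blog/Post" idea: Google's robots.txt processing honors Allow directives alongside Disallow, with the longest (most specific) matching path winning. A sketch, assuming every post URL really does live under a single /Blog/post prefix:

User-agent: *
# Block everything under /Blog/ ...
Disallow: /Blog/
# ...except post pages; the longer Allow path outranks the shorter Disallow.
# (Paths are case-sensitive, so match the casing the post URLs actually use.)
Allow: /Blog/post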
-
Do you have to wait after disavowing before submitting a reconsideration request?
Intermediate & Advanced SEO | ArenaFlowers.com
Hi all, We have a link penalty at the moment, it seems. I went through 40k links in various phases and have disavowed over a thousand domains that date back to old SEO work. I was barely able to have any links removed, as the majority are on directories etc. that no one looks after any more and/or which are spammy and scraped anyway. According to Link Research Tools' Link Detox tool, we now have a very low-risk profile (I loaded the disavowed links into the tool for it to take into consideration when assessing our profile). I then submitted a reconsideration request on the same day as uploading the new disavow file (on the 26th of April). However, today (7th May) we got a message in Webmaster Central saying our link profile is still unnatural. Aaargh. My question: is the disavow file taken into consideration when the reconsideration request is reviewed (i.e. is that information immediately available to the reviewer)? Or do we have to wait for the disavow file to flow through in the crawl stats? If so, how long do we have to wait? I've checked a link that I disavowed last time and it's still showing up in the links that I pull down from Webmaster Central, and indeed links that I disavowed at the start of April are still showing up in the list of links that can be downloaded. Any help gratefully received. I'm pulling my hair out here, trying to undo the dodgy work of a few random people many months ago! Cheers, Will
-
Google Reconsideration Request - Most Efficient Process
Intermediate & Advanced SEO | jeremymgp
Hi, I'm working on a Google reconsideration request for a site with a longstanding penalty. Here's what I did:
Round 1: Downloaded a CSV of all the domains and all the pages linking to the site. Went through the lot manually and sorted them into three types: Disavow Domain, Disavow Page, Keep. All low-quality domains were disavowed, and all pages from places like Blogspot with low-quality links on certain blogs were disavowed. Submitted the disavow file, then sent a detailed reconsideration request including a link to the disavow file. The reconsideration request was not successful. Google gave two examples of links I should remove; bizarrely, the examples they gave were already disavowed, which seemed a bit odd. So I took this to mean Webmaster Tools and disavow files were in themselves not enough. The links I kept were largely from PRWeb syndication, which seems legit.
Round 2: Here's what I'm doing now. Any ideas for how the below process can be improved to get the maximum chance of a successful request, please let me know. Get all linking pages from Webmaster Tools as before and also MajesticSEO's Historic Index. This gave me around three times more domains to remove. The additional domains from Majestic that weren't in Webmaster Tools I just put straight into the disavow file. Then conduct a manual link removal email campaign. I've got around 2,500 domains to go through, so how can I best do this? My process at the moment is:
- Use software to get email addresses from whois records
- Send them an email
- Make a spreadsheet of responses
- Include a link to the spreadsheet in Google Docs as well as a link to the new disavow file
Should I research each site manually to get email addresses? It does seem rather a waste of an offshorer's time; from what I've seen, some people use offshorers and others have used software tools successfully. The other thing is sending the emails: how can I do this? Any SMTP email campaign site won't let me use their service because the emails are not opt-in; they classify it as spam. Does anyone know a solution to send 2,500 emails legitimately, from a webmail account for example? I'm having to send bulk emails to get rid of spam links. Finally, most of the offending links have keyword anchor text from spun articles; I've deleted all the sites except EzineArticles. Would you delete this too? It's an awful site, but the client is hung up on it. EzineArticles links may have some value; on the other hand, it's more of the same keyword-rich anchor-text articles. Keep or disavow the individual pages? Finally, anything else I've missed? Anything to add? Thanks for all your help 🙂
-
About robots.txt to resolve duplicate content
Intermediate & Advanced SEO | magician
I have trouble with duplicate content and titles. I have tried many ways to resolve it, but because of the site's code I still have the problem, so I have decided to use robots.txt to block the duplicate content. The first question: how do I use a command in robots.txt to block all URLs like these:
http://vietnamfoodtour.com/foodcourses/Cooking-School/
http://vietnamfoodtour.com/foodcourses/Cooking-Class/
User-agent: *
Disallow: /foodcourses
(Is that right?) And the parameter URLs:
http://vietnamfoodtour.com/?mod=vietnamfood&page=2
http://vietnamfoodtour.com/?mod=vietnamfood&page=3
http://vietnamfoodtour.com/?mod=vietnamfood&page=4
User-agent: *
Disallow: /?mod=vietnamfood
(Is that right? I have a folder containing the module; could I use Disallow: /module/* ?) The second question: which takes priority, robots.txt or the meta robots tag? For example, if I use robots.txt to block a URL, but on that URL my meta robots tag is "index, follow"?
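As with the earlier questions, these directives are prefix matches, so a quick sketch of how the two patterns in this question would behave (assuming the listed URLs are representative):

User-agent: *
# Prefix match: covers /foodcourses/Cooking-School/, /foodcourses/Cooking-Class/, etc.
Disallow: /foodcourses
# Prefix match from the root: covers /?mod=vietnamfood&page=2, &page=3, and so on.
Disallow: /?mod=vietnamfood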
-
XML Sitemap instruction in robots.txt = Worth doing?
Intermediate & Advanced SEO | AshSEO2011
Hi fellow SEOs, just a quick one. I was reading a few guides on Bing Webmaster Tools and found that you can use the robots.txt file to point crawlers/bots to your XML sitemap (they don't look for it by default). I was just wondering if it would be worth creating a robots.txt file purely for the purpose of pointing bots to the XML sitemap? I've submitted it manually to Google and Bing Webmaster Tools, but I was thinking more of the other bots (i.e. Mozbot, the SEOmoz bot). Any thoughts would be appreciated! 🙂 Regards, Ash
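For reference, the sitemap pointer described in this question is the standard Sitemap directive from the sitemaps.org protocol; a minimal sketch with a placeholder URL:

# A robots.txt created purely to point bots at the sitemap can be this small.
# The directive takes a full URL and sits outside any User-agent group.
Sitemap: https://www.example.com/sitemap.xml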