Disavow questions
-
Pretty sure I know the answers to these, but someone asked me to make absolutely sure, so here goes; any opinions welcome:
-
If I disavow a whole domain, does that include all sub-domains on the domain as well? My answer is clearly yes.
-
If I have a network of really bad links pointing to my website that are already nofollowed, but the sites themselves are awful places to be linked from, is it worth putting them in the disavow list anyway to basically tell Google there's literally no association? I know the whole point of disavow is essentially to nofollow the link.
Opinions much appreciated, thank you guys.
-
-
Great questions. I'll give my 2 cents based on what we've witnessed at Penalty Pros:
1 - Yes, this will do the trick. Just make sure that you are referencing the non-www version in the disavow. For example "domain:site.com" and NOT "domain:www.site.com". If you want to be super safe, just include the exact subdomains as separate line items.
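To make the syntax concrete, here's a minimal sketch of what such a disavow file could look like (site names are hypothetical; lines starting with # are comments):

```text
# Disavows site.com and all of its subdomains (www, blog, etc.)
domain:site.com

# Belt-and-braces: specific subdomains listed as separate line items
domain:blog.site.com

# Individual URLs can also be disavowed, one per line
https://spammy-example.net/links/page.html
```

The file is plain UTF-8 text, uploaded through Search Console's disavow links tool.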
2 - Google's official word is that you don't need to worry about nofollows, but we've encountered a few situations where nofollow links were pointed out as problematic in failed recon requests. This may be human error on the part of the manual team, but it's probably worth disavowing them regardless.
Hope this helps and good luck!
-
In most cases, if you disavow the root, the subdomains will be disavowed as well. But this may not work for certain large hosts. If you're trying to disavow all of wordpress.com or blogspot.com, you really should disavow the subdomains individually.
Regarding nofollow links, I am 99% sure that you can ignore them. John Mueller said that you don't need to include nofollowed links in a disavow: http://goo.gl/EhpI5O
The reason I say 99% sure and not 100% sure is that a colleague of mine was recently given a nofollowed link as an example link in a failed reconsideration request. The site is not available on archive.org, so I can't go back and check, but my suspicion is that the site may have recently changed its links to nofollow. I have done MANY disavows and have had many successful reconsideration requests, and I do not disavow nofollowed links.
-
I totally agree with Andy here. PageRank is not the only metric - not by a long shot. Any association can be negative or positive whether PageRank is passing or not. I would disavow just to get out of the bad neighborhood.
-
I wouldn't take any chances with poor links, even if they are nofollowed. Even though such a link passes no PageRank, it is still a signal that sends a message back to Google.
I never take a chance and disavow these as a matter of course. You never fully know just how much Google is using them.
-Andy
-
According to Dr. Pete: "My understanding is that you can block the root domain that way, yes, but Google seemed to qualify that sub-domains were at their discretion. Unfortunately, we don't have much data yet. If you know that every link from the domain is bad, then I'd use the "domain:example.com" format."
Second question: can nofollow links hurt my site? Matt Cutts said: "Typically, no unless it is abused and manual action applied."