Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
How can I block incoming links from a bad website?
-
Hello all,
We recently got a new client who had received a warning in Google Webmaster Tools for a manual soft penalty. I did a lot of research and found one particular site that sends roughly 100k links to one page and is potentially a high-risk site.
I wish to block those links from coming into my site, but their webmaster is nowhere to be seen and I do not want to use the disavow tool.
Is there a way to do this with code in our .htaccess file, or some other method?
Would appreciate anyone's immediate response.
Kind Regards
-
Hi Yiannis,
As far as I'm aware, there isn't really a way to "block" a link. The link is seen on the other site. Returning a 404 for the page being linked to doesn't change the fact that there are 100K links from one site pointing at your site. The only options I'm aware of are to 1) contact the owner of the website with the links and ask them to remove the links, and 2) if that doesn't work, disavow the links.
I understand your hesitancy to use the disavow tool, but quite frankly, this is exactly what it is intended for.
If you feel comfortable with the links being there and think Google has already dealt with them, then do nothing; but if you want to do something about the links, you either have to get them removed or disavow them.
BTW - My understanding of partial manual actions is that Google often not only deals with the suspicious links (devaluing them) but also penalizes the pages/keywords they think you were attempting to manipulate. So, just because it was a partial action and not a full-site action doesn't mean it's not affecting some of your rankings. It's just not going to affect all your rankings for all your pages.
Kurt Steinbrueck
OurChurch.Com -
Hi eyepaq, and thanks for your reply - much appreciated.
The reason I do not want to use the disavow tool is because:
1. Google sent the message that they "took targeted action on the unnatural links instead of on the site’s ranking as a whole", meaning they took care of the problem.
2. Rankings and traffic are looking solid.
3. I have seen a lot of cases where people used it and lost rankings (some never recovered).
My thought was to block the spammy links and monitor whether traffic is affected (which I doubt, as most of it seems to come from branded searches). If I then see de-indexing or ranking drops, I'll use the disavow tool and a reconsideration request.
What do you think?
-
Hi,
If you are talking about only one or a few sites, then it's easy.
Just build a disavow file, as there is no downside to that. Disavow with domain:domainname.com (not individual pages) and upload it via Webmaster Tools. After submitting the file, send a reconsideration request explaining the situation and mentioning the disavow file. You should be safe after that.
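For reference, a minimal sketch of what such a disavow file could look like - plain text, with # marking comments and domainname.com standing in for the offending domain:

    # Sitewide unnatural links; webmaster unreachable
    domain:domainname.com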
Alternatively - though to be honest I don't see why you wouldn't go with the first option - you can return a 404 if the referrer is that domain.
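If you do want to try the .htaccess route the original question asked about, here is a minimal sketch, assuming a reasonably recent Apache with mod_rewrite enabled (spammydomain.example is a placeholder for the offending site). Bear in mind this only affects human visitors who click through - it does not remove the links from Google's link graph:

    RewriteEngine On
    # Serve a 404 when the visitor arrives via a link on the offending domain
    RewriteCond %{HTTP_REFERER} ^https?://(www\.)?spammydomain\.example [NC]
    RewriteRule .* - [R=404,L]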
Hope it helps.
Related Questions
-
Best Website Structure / SEO Strategy for an online travel agency?
Dear Experts! I need your help pointing me in the right direction. So far I have found scattered tips around the Internet, but it's hard to form a full picture from all these bits and pieces of information without professional advice. My primary goal is to understand how I should build my online travel agency website's (https://qualistay.com) structure, so that I target my keywords on the correct pages and do not create duplicate content.

In my particular case I have very similar properties in similar locations in Tenerife. Many of them are located in the same villa or apartment complex, so it is very hard to come up with a unique description for each of them - not to mention the amenities and pricing blocks, which are standard and almost identical (I don’t know if Google sees this as duplicate content). From what I have read so far, it’s better to target archive pages rather than every single property. At the moment my archive pages are: all properties (includes all property types and locations) and a page for each location (includes all property types).

Does it make sense to add archive pages by property type in addition to, or instead of, the location ones if I, for instance, target separate keywords like 'villas costa adeje' and 'apartments costa adeje'? At the moment, the title of the respective archive page, "Properties to rent in costa adeje: villas, apartments", in principle targets both keywords...

Does using the same keyword in a single property listing cannibalize the ranking of the archive page it links back to? Or not, unless Google specifically identifies it as duplicate content (which one can see in Google Search Console under HTML Improvements) and/or the archive page has more incoming links than the single property?

If targeting only archive pages, how should I optimize them in such a way that they stay user-friendly? I have created (though not yet fully optimized) descriptions for each archive page just below the main header, but I have them partially hidden (collapsible) using JS in order to keep visitors’ focus on the properties. I know that Google does not rank hidden content highly, at least at the moment, but since mobile-first indexing is coming in the near future, they promise not to punish mobile sites for collapsible content and will use the mobile version to rate the desktop one. Does this mean I should not worry about hidden content anymore, or should I move the description to the bottom of the page and make it fully visible?

Your feedback will be highly appreciated! Thank you! Dmitry
Technical SEO | qualistay -
Can I set a canonical tag to an anchor link?
I have a client who is moving to a one-page website design. So, content from the inner pages is being condensed into sections on the 'home' page. There will be a navigation that anchor-links to each relevant section. I am wondering if I should leave the old pages and use rel=canonical to point them to their relevant sections on the new 'home' page rather than 301 them. Thoughts?
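For illustration, a minimal sketch of such a tag on one of the old inner pages (the domain and fragment are placeholders):

    <link rel="canonical" href="https://example.com/#services">

One caveat: search engines generally ignore the #fragment part of a canonical URL, so in practice this would canonicalize the old page to the home page itself rather than to the specific section.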
Technical SEO | Vizergy -
Why Can't Googlebot Fetch Its Own Map on Our Site?
I created a custom map using Google Maps creator and embedded it on our site. However, when I ran the fetch and render through Search Console, it said it was blocked by our robots.txt file. I read in the Search Console Help section that: "For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot." I did not set up our robots.txt file. However, I can't imagine it would be set up to block Google from crawling a map. I will look into that, but before I go messing with it (since I'm not familiar with it), does Google automatically block their maps from their own Googlebot? Has anyone encountered this before? Here is what the robots.txt file says in Search Console:

    User-agent: *
    Allow: /maps/api/js?
    Allow: /maps/api/js/DirectionsService.Route
    Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
    Allow: /maps/api/js/ElevationService.GetElevationForLine
    Allow: /maps/api/js/GeocodeService.Search
    Allow: /maps/api/js/KmlOverlayService.GetFeature
    Allow: /maps/api/js/KmlOverlayService.GetOverlays
    Allow: /maps/api/js/LayersService.GetFeature
    Disallow: /

Any assistance would be greatly appreciated. Thanks, Ruben
Technical SEO | KempRugeLawGroup -
Can you use Screaming Frog to find all instances of relative or absolute linking?
My client wants to pull every instance of an absolute URL on their site so that they can update them for an upcoming migration to HTTPS (the majority of the site uses relative linking). Is there a way to use the extraction tool in Screaming Frog to crawl one page at a time and extract every occurrence of href="http://"? I have gone back and forth between an XPath extractor and a regex and have had no luck with either. Ex. XPath: //*[starts-with(@href, "http://")][1] Ex. Regex: href=\"//
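For what it's worth, a sketch of expressions that might work for this kind of custom extraction, assuming Screaming Frog accepts standard XPath and a regex with a capture group (untested here). The XPath returns the href attribute values themselves rather than the elements; the regex captures absolute http:// URLs inside double-quoted href attributes:

    //a[starts-with(@href, 'http://')]/@href
    href="(http://[^"]*)"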
Technical SEO | Merkle-Impaqt -
Self Referencing Links - Good or Bad?
As an agency we get quite a few of our clients coming to us saying, "Ooo, this company just contacted me saying they've run an SEO report on my site and we need to improve on these following things." We had one come through the other day that reported on something we had not seen in any of the others before. They called them self-referencing links and marked it as a point where action should be taken. They stated that 100% of the pages on our client's website had self-referencing links. A self-referencing link is a link on a page that points to the page you are currently on. So, for example, you're on the home page and there is a link in the nav bar at the top that says "Home" and links to the home page - the page you are already on. Is it bad practice? And if so, can we do anything about it? It would seem strange from a UI point of view not to have consistent navigation. I have not heard anything about this before, but I wanted to get confirmation before going back to our client and explaining. Thanks Mozzers!
Technical SEO | O2C -
Updating inbound links vs. 301 redirecting the page they link to
Hi everyone, I'm preparing myself for a website redesign and finding conflicting information about inbound links and 301 redirects. If I have a URL (we'll say website.com/website) that is linked to by outside sources, should I get those outside sources to update their links when I change the URL to website.com/webpage? Or is it just as effective from a link juice perspective to simply 301 redirect the old page to the new page? Are there any other implications to this choice that I may want to consider? Thanks!
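For reference, the 301 option is a one-liner in .htaccess on an Apache server - a sketch using the hypothetical URLs above and mod_alias's Redirect directive:

    Redirect 301 /website https://website.com/webpage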
Technical SEO | Liggins -
How Can I Block Archive Pages in Blogger When I Am Not Using the Classic/Default Template
Hi, I am trying to block all the archive pages of my blog, as Google is indexing them. This could lead to a duplicate content issue. I am not using the default Blogger theme or the classic theme, and therefore I cannot use this code therein: Please suggest how I can instruct Google not to index the archive pages of my blog? Looking for a quick response.
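For context, the snippet usually cited for the default templates is a conditional noindex along these lines. This is reconstructed from memory, since the snippet was stripped from the post above, so treat both the condition syntax and the exact pageType value as assumptions:

    <b:if cond='data:blog.pageType == "archive"'>
      <!-- Tell search engines not to index archive pages -->
      <meta content='noindex,follow' name='robots'/>
    </b:if>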
Technical SEO | SoftzSolutions -
Delete old site but redirect domain to a new domain and site
I just have a quick query, and I have a feeling about what the answer is, so I just wanted to see what you guys thought... Basically, I am working on a client site. This client has a few other websites that are divisions of their company. However, these divisions/websites are no longer used. They want to delete the websites but redirect the domains to their main website. They believe this will pass on SEO benefits, as these old division sites are old and have good PR and history. I'm unsure, for DEFINITE, which way is correct?
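For reference, a sketch of what that kind of domain-level redirect could look like in an old domain's .htaccess, assuming Apache with mod_rewrite and placeholder domain names:

    RewriteEngine On
    # Send every request on the old division's domain to the same path on the main site
    RewriteCond %{HTTP_HOST} ^(www\.)?old-division\.example$ [NC]
    RewriteRule ^(.*)$ https://main-site.example/$1 [R=301,L]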
Technical SEO | Weerdboil