Does my "spam" site affect my other sites on the same IP?
-
I have a link directory called Liberty Resource Directory. It's the main site on my dedicated IP; all my other sites are addon domains on top of it.
While exploring the new Moz spam ranking, I saw that LRD (Liberty Resource Directory) has a spam score of 9/17 and that Google penalizes 71% of sites with a similar score. Fair enough: thin content, a bunch of follow links (there are over 2,000 links by now), no problem. That site isn't for Google, it's for me.
Question: does that site (and linking to my own sites from it) negatively affect my other sites on the same IP? If so, by how much? Does a simple noindex fix those potential issues?
Bonus: How does one go about going through hundreds of pages with thousands of links, built with raw, plain text HTML to change things to nofollow? =/
-
@Tom Roberts, your thinking is about on the same page as mine. I've always been suspicious of "C-blocks" as a ranking factor too. I don't use a CMS for this site; as I said, it's all hard-coded. Does a nofollow robots tag in the head section have the same effect as a nofollow on each individual link? At least PHP could solve that issue pretty easily.
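To illustrate what I'm asking, here are the two forms side by side (a rough sketch with placeholder markup, not my actual pages; the head-section version is the bit a shared PHP include could print on every page):

```html
<!-- Option 1: page-level, sits once in the <head> of each page -->
<meta name="robots" content="nofollow">

<!-- Option 2: link-level, added to every individual anchor -->
<a href="https://example.com/" rel="nofollow">Example link</a>
```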
-
Hi Ethan
In theory - yes, it could. We know that Google looks at a domain's Class C IP level (at least; now that Google is a registrar, it may extend to the full IP) when judging its quality. If a site is in a "bad neighbourhood" - i.e. sitting on an IP range with a number of 'spam' sites - then theoretically it could be affected, a kind of guilt by association.
However, in reality I have my doubts as to whether this happens. The fact is that the vast majority of the web uses shared hosting (particularly SMEs), so good sites are invariably going to be mixed in with 'bad' sites. And what would stop me from deliberately putting a bad site on your IP in order to 'poison' it? I'm nowhere near that evil, but someone might be.
What I'm getting at here is that it seems extremely unlikely that there is a manageable way to differentiate these sites efficiently - which leads me to believe that having your spam site on the same IP as some 'good' sites _shouldn't_ make a difference.
What you can do to reduce the risk even further is make sure the 'spam' site doesn't link to any properties of yours that you want to protect, nofollow those links where feasible (I'm not sure what CMS you're using, but WordPress has a few plugins that will do this in bulk) and, if the site doesn't need to be in the index, noindex it - that would pretty much get rid of the risk completely.
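To be concrete about that last option, "noindexing the site" just means putting a robots directive on every page of the directory - a one-line sketch (noindex keeps the page out of Google's index, and adding nofollow also tells crawlers to ignore the links on it):

```html
<meta name="robots" content="noindex, nofollow">
```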
Hope this helps!
-
If you don't need the site to show in Google, you should noindex it just to head off any potential issues (especially if the domains link to each other in any way).
> Bonus: How does one go about going through hundreds of pages with thousands of links, built with raw, plain text HTML to change things to nofollow? =/
Download the full site and open all the pages in Notepad++. Find & replace (it can run the replacement across all open documents). Save and re-upload.
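Alternatively, if hand-running find & replace across hundreds of files feels error-prone, a small script can make the same pass. A rough PHP sketch (hypothetical - it assumes the downloaded copy sits in a local ./site folder, and regex on hand-written HTML is crude, so spot-check a few pages before re-uploading):

```php
<?php
// bulk_nofollow.php - walk a downloaded copy of the site and add
// rel="nofollow" to every <a> tag that doesn't already declare a
// rel attribute. Adjust $siteDir and the pattern to suit.

$siteDir = './site'; // assumed location of the downloaded HTML files

$files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($siteDir));
foreach ($files as $file) {
    if (!$file->isFile() || strtolower($file->getExtension()) !== 'html') {
        continue;
    }
    $html = file_get_contents($file->getPathname());
    $html = preg_replace_callback('/<a\s+([^>]*)>/i', function ($match) {
        // Leave the tag alone if it already carries a rel attribute.
        if (stripos($match[1], 'rel=') !== false) {
            return $match[0];
        }
        return '<a rel="nofollow" ' . $match[1] . '>';
    }, $html);
    file_put_contents($file->getPathname(), $html);
}
```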
-
Hi Ethan,
I'd say yes and no. Please watch the Matt Cutts video below on this exact issue.
https://www.youtube.com/watch?v=AsSwqo16C8s&noredirect=1
I hope it helps.
Thanks