
Posts made by Dr-Pete
-
RE: Do you need a place to ditch bad links?
I'm confused - why would you link out to another site and then 301-redirect that 3rd-party page back to the original site (especially to the home page on the original site)? I honestly have no idea how Google would treat that - I think it would send a bit of a mixed signal, but it could be seen as an internal link. If you control the original site, though, this would have no benefit for search or users, so I'm just not clear on the motivation.
-
RE: Do you need a place to ditch bad links?
Redirecting to the root domain can preserve the link value, but it won't shake loose any penalties. You'd basically just consolidate those links.
-
RE: Do you need a place to ditch bad links?
As best we know, 404s should kill the page as a link target, which essentially severs the links. I don't think Google views the link on a domain-wide level at that point. If they did, then honestly it's likely the same rules would apply to other HTTP status codes, including 301s. If the page is dead, you're pretty safe at that point. I don't think 301-redirecting the bad page is going to have any additional positive impact.
Re: "breaking rules", the problem is that it's very subjective. Let's say that a bunch of SEOs realize that Penguin and other link-based penalties created an opportunity, and they start taking their own pages with bad links and 301-redirecting those to competitors (maliciously). If Google sees that pattern and then they see you 301-redirecting your links to a 3rd-party site, they may not be able to separate you from the pattern. In other words, they're going to assume bad behavior.
That's speculation, of course (in this specific case - I've definitely seen them misread intent in other areas). I just don't see that you'd be gaining anything by taking on that risk, even if it's small.
-
RE: Do you need a place to ditch bad links?
It seems like you're trying to share something you believe is helpful, so I apologize if this comes off as overly critical, but that's really not a good tactic at all. First off, it's unnecessary. If you are fortunate enough to be able to isolate and redirect a page with bad links (as you said, assuming there aren't good links in the mix), then you'd do just as well to 404 that page entirely. There's no need to redirect it somewhere.
Second, it could actually look manipulative. Redirecting a page full of bad links to a 3rd-party site would look to me like negative SEO. I have no proof that Google penalizes this particular behavior, but it seems like a red flag that could create risk for the site setting up the redirects.
Even if the risk of that happening is <5%, it's a risk on top of doing something completely unnecessary. Just kill the page (404 or Meta-Noindex if it still has user value) - it's a clear signal to Google. If you start getting weird with 301-redirects, you could raise alarms.
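If it helps to see what "kill the page" looks like in practice, here's a minimal sketch (generic, not specific to any site) - the noindex route is a single tag in the page's <head>, while the 404/410 route is handled entirely by the server:

```html
<!-- Option 1: the page still has user value - keep it live, but ask engines
     to drop it. Goes in the <head> of the page carrying the bad links. -->
<meta name="robots" content="noindex">

<!-- Option 2: the page has no user value - remove it and have the server
     answer requests for that URL with a 404 (Not Found) or 410 (Gone)
     status code. There's no HTML involved; it's a server-side response. -->
```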
-
RE: Are link directories still effective? is there a risk?
Pretty good advice all-around here, but I just want to second Alan that the risk of this kind of focused directory-based link scheme (and it is a scheme, if they've built their own network) is very high. That's not just white-hat sermonizing - I'll be honest: yes, those links could help you in the short-term, and they could improve your rankings. The problem is that, if this scheme goes down, you will very likely be penalized, and you could lose everything. The SEO company will walk away, but you won't.
Solid, relevant directories, in moderation, are fine. Worst case, they may not carry the weight you want them to, and they're just part of a larger strategy. When you start gaming the system, though, you're facing the very real risk of a Capital-P Penalty.
-
RE: NoIndexing Massive Pages all at once: Good or bad?
If you're not currently suffering any ill effects, I probably would ease into it, just because any large-scale change can theoretically cause Google to re-evaluate a site. In general, though, getting these results pages and tag pages out of the index is probably a good thing.
Just a warning that this almost never goes as planned, and it can take months to fully kick in. Google takes their sweet time de-indexing pages. You might want to start with the tag pages, where a straight NOINDEX is probably a solid bet. After that, you could try rel=prev/next on the search pagination and/or canonical tags on the filtered search pages. That would keep your core search pages indexed, but get rid of the really thin stuff. There's no one-size-fits-all solution, but taking it in stages and using a couple of different methods targeted to the specific type of content may be a good bet.
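To make those tags concrete, here's a rough sketch of what each might look like - the URLs are made up for illustration:

```html
<!-- Tag pages: keep them out of the index, but let link equity flow -->
<meta name="robots" content="noindex, follow">

<!-- Search pagination: declare the sequence, e.g. in the <head> of page 2 of results -->
<link rel="prev" href="https://www.example.com/search/widgets/">
<link rel="next" href="https://www.example.com/search/widgets/3/">

<!-- Filtered/sorted search variants: canonical them to the core results page -->
<link rel="canonical" href="https://www.example.com/search/widgets/">
```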
Whatever you do, log everything and track the impact daily. The more you know, the better off you'll be if anything goes wrong.
-
RE: I thought META KEYWORDS tag was dead?
That's the kind of study I don't do because I'm secretly afraid it might work and then I'd have to quit my job and just drink full-time.
-
RE: I thought META KEYWORDS tag was dead?
One warning - not to derail the discussion, which is amazing - I'm as sure as is reasonably possible that at least one major search engine used META keywords as a spam signal in the past, and I'd bet it's still corroborating evidence for Google. Probably goes without saying, but if you use it - use it well. Just because it's not a positive ranking factor doesn't mean it's not a negative ranking factor.
I agree that the competitor aspect never bothered me. Hopefully, you also use your keywords in your actual content. Otherwise, what's the point?
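(For reference, the tag we're talking about is just one line in the <head> - the content here is a made-up example:)

```html
<meta name="keywords" content="blue widgets, widget repair, widget parts">
```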
-
RE: Rel="canonical" and rel="alternate" both necessary?
I'm honestly not completely clear on what the different URLs are for - I'd just add a note to keep the core difference between canonical and 301s in mind. A canonical tag only impacts Google, and eventually, search results. A 301 impacts all visitors (and moves them to the other page). A lot of people get hung up on the SEO side, but the two methods are very different for end-users.
As Tom said, if these variations have no user value, you could consolidate them altogether with 301s. I always hesitate to suggest it without in-depth knowledge of the site, though, because I've seen people run off and do something dangerous.
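To illustrate the mechanical difference with a made-up URL: a canonical is just a hint in the duplicate page's <head> (visitors still see that page), whereas a 301 is a server response that physically sends everyone to the other URL.

```html
<!-- rel=canonical: visitors still land on /widgets?sort=price, but Google is
     asked to index and credit the clean URL instead -->
<link rel="canonical" href="https://www.example.com/widgets">

<!-- A 301 isn't a tag at all - the server answers the request for the duplicate
     URL with "301 Moved Permanently" plus a Location header pointing at
     https://www.example.com/widgets, so users and bots both end up there. -->
```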
-
RE: Rel="canonical" and rel="alternate" both necessary?
Yeah, don't use rel=canonical for the same purpose as rel=alternate - the canonical tag will override the alternate/lang tag and may cause your alternate versions to rank incorrectly or not at all. It can be a bit unpredictable. If you only wanted one version to show up in search results, then rel=canonical would be ok, but rel=alternate is a softer signal to help Google rank the right page in the right situation. It's not perfect, but that's the intent.
As for multiple canonicals like what you described, that's essentially like chaining 301-redirects. As much as possible, avoid it - you'll lose link equity, and Google may just not honor them in some cases. There's no hard-and-fast limit, and two levels may be ok in some cases, but I think it's just a recipe for trouble long-term. Fix the canonicals to be single-hop wherever possible.
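As a rough sketch of how that plays out (the domain and languages are hypothetical): each language version lists all of the alternates, references itself as the canonical, and every URL is the final destination - no chains.

```html
<!-- In the <head> of the US English version: self-canonical, single hop -->
<link rel="canonical" href="https://www.example.com/en-us/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">

<!-- The en-GB version carries the same alternate set, but its canonical points
     to itself (https://www.example.com/en-gb/), not to the US page. -->
```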
-
RE: Category page canonical tag
It's tricky. Practically, I tend to agree with Tom - if it ain't broke, don't fix it. Especially at small-to-medium scale (let's say hundreds of URLs, but not thousands), rel=canonical is probably going to do the job here.
Technically, CleverPhd is correct that paginated content may be better served by rel=prev/next, and Google isn't fond of you canonical'ing to page 1 of search results. Their other preferred method is to canonical to a "View All" page (and make that page/link available to visitors), if that page loads reasonably and isn't huge.
In practice, they don't seem to penalize anyone for a canonical to page 1, and I know some mega-site SEOs who use rel=prev/next and have been almost completely unable to tell whether it works (based on how Google still indexes and ranks the content). I think the critical thing is to keep most of these pages out of the index and avoid the duplicates. If your approach is working for now, my gut says to leave it alone.
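If you ever do revisit it, a minimal sketch of the two "approved" patterns might look like this (category URLs invented for the example):

```html
<!-- Option A: rel=prev/next on each paginated category page,
     e.g. in the <head> of /category/widgets/page/2/ -->
<link rel="prev" href="https://www.example.com/category/widgets/">
<link rel="next" href="https://www.example.com/category/widgets/page/3/">

<!-- Option B: canonical every paginated page to a reasonable "View All" page -->
<link rel="canonical" href="https://www.example.com/category/widgets/view-all/">
```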
-
RE: How reliable is the link depth info from Xenu?
It's hard to be completely sure without digging into the crawl, but keep in mind that Xenu is crawl-based, to both things can be true. Somewhere, a link to that page is 8 levels deep, event if it 301-redirects to a page that is only 1 level deep. This could indicate a problem and/or a mismatch in your architecture. Ideally, if you're 301-redirecting the page, then there should be no internal links to the old URL. Mixed signals can make messes, so I'd try to pin down where this link is coming from. Xenu probably is seeing something real.