Posts made by Dr-Pete
-
RE: Good idea to use hidden text for SEO purposes due to picky clients not allowing additional content?
That's been specifically called out by Google and can definitely get you penalized. It's very dangerous. Google will not treat hiding text - just because your client doesn't want visible text on the page - as a legitimate use, for one simple reason: they have no way to know that. All they see is a heavily abused tactic in play, and they can and will penalize it.
-
RE: How highly do you value a link from the BBB?
This gets into the realm of opinion pretty fast - it can be shockingly difficult to measure the value of one link. Here are a few of my opinions:
(1) One link is one link. It's rarely the magic pill people want it to be, even from a very authoritative site. I've seen people get a link like this and then sit on their hands, waiting for a sudden change in rankings that almost never comes. If you're just starting out and you have little or no link profile, a strong link can kick-start you, but I wouldn't pay $750 just to get a link if your site is established (I'm not sure I'd pay it even if your site is new).
(2) DA and PA both matter, and how much each matters can really vary with the situation. Your profile on a deep page of BBB is not an authority=96 link. It will carry weight, but the weight of any given profile could vary a lot.
(3) BBB has gotten a bit more aggressive, IMO, and I suspect Google will devalue these links over time. People tell me that they haven't yet, in this case, but it is, in essence, a paid link. Any day, Google could say "These BBB links are counting too much" and just lower the volume. So, don't put all your eggs in one basket, no matter what you do.
Now, to be fair, your BBB listing does have other value, like using it as a trust signal. The business case for spending the money goes beyond SEO, and that's a decision you have to make for yourself. If 100% of your interest in the listing is for a followed link, though, I personally would spend the money elsewhere.
-
RE: Bad links
I can't say there's no risk, but this is pretty blatant, and Google is generally good about detecting that. This was done in an automated fashion only on blog comments, and probably in a very short time-frame. On top of it, many of the comments are nofollow'ed, which means they weren't paying attention at all (they just sent out a bot and let it go).
If Google took action, it would probably first be against the phrase (and I'm guessing you don't want to rank for "buy cocks" anyway). Your link profile isn't quite as strong as I'd like to see, so there is some risk, but my gut reaction is that this is probably a short-lived attack that won't have much impact.
A few suggestions/possibilities:
(1) Keep an eye on it. People who fire off these low-value attacks tend to give up easily, but if they persist, that's a bigger issue.
(2) Notify the webmasters, as Syed said.
(3) Disavow - via Google Webmaster Tools - some of the worst links (really bad/spammy sites, etc.). Unfortunately, this is link by link, or domain by domain, at best, so it's not always a viable option.
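For reference, the disavow file itself is just a plain text file, one URL or domain per line, with "#" lines as comments - something like this (the domains and URL here are made-up placeholders):
# Spammy blog-comment domains
domain:spammy-comment-site.example
domain:another-bad-site.example
# Or individual URLs
http://www.low-quality-blog.example/post-123
The "domain:" lines are usually the practical choice, since disavowing link by link rarely scales.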
-
RE: MozCast metrics: Big movement yesterday - what happened?
Mark beat me to the punch. Sorry - yeah, that was a code change necessitated by some DOM changes Google made around early December. Unfortunately, they were subtle, and we didn't detect them initially.
I'm happy to answer any specific questions/concerns in this thread.
-
RE: Duplicate page content
The exact percentage is hard to say - I think we use 90-95%, but it depends on the content (ads vs. template vs. unique copy, etc.). I think the aspect that probably confuses most people is that Google doesn't care about duplicate pages, in the sense of physical files on your server. They care about URLs. So, if your home-page can be reached (and is linked to) at:
http://www.example.com/
http://example.com/
http://www.example.com/index.html
...and Google crawls/indexes all of these, they could look like duplicates. Any specific case can get tricky fast, to be honest. I have a mega-post about the subject here:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
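If that turns out to be the problem, the usual fix is to pick one preferred URL and point the others at it - either with 301s or with a canonical tag in the <head> of the page. A minimal sketch, assuming the "www" root is your preferred version:
<link rel="canonical" href="http://www.example.com/" />
That one tag on the home page (however the page is reached) tells Google which URL should get the credit.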
-
RE: Is it allowed to put a word in all domains URLs to get higher in SERP?
I tend to agree - you aren't going to get penalized, but two somewhat negative things generally happen. Let's say that your current URLs look like:
www.example.com/2012-honda-accord
...and you change them to:
www.example.com/cheap-cars/2012-honda-accord
(1) You're telling Google, in essence, that every page on the site should rank for "cheap cars". This is keyword cannibalization. Ideally, one page is the best target for that phrase. Adding it everywhere really only confuses spiders and visitors.
(2) You're pushing down the unique keywords "2012 Honda Accord" and making the URL longer. This hurts the ranking power of those unique keywords.
Now, keep in mind, URLs are just one small aspect of ranking, so the impact may be small. Generally, though, Google views this as low quality, and the potential harm well outweighs any SEO value in 2013.
-
RE: Panda Recovery - What is the best way to shrink your index and make Google aware?
If you want to completely remove these pages, I think Kerry22 is spot on. A 410 is about the fastest method we know of, and her points about leaving the crawl paths open are very important. I completely agree with leaving them in a stand-alone sitemap - that's good advice.
Saw your other answer, so I assume you don't want to 301 or canonical these pages. The only caveat I'd add is user value. Even if the pages have no links, make sure people aren't trying to visit them.
This can take time, especially at large scale, and a massive removal can look odd to Google. This doesn't generally result in a penalty or major problems, but it can cause short-term issues as Google re-evaluates the site.
The one way to speed it up: if the pages have a consistent URL parameter or folder structure, you may be able to do a mass removal in Google Webmaster Tools. This can be faster, but it's constrained to similar-looking URLs - in other words, there has to be a pattern. The benefit is that you can make the GWT request on top of the 410s, so that can sometimes help. Any massive change takes time, though, and often requires some course correction, I find.
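Just as a rough sketch of the 410 approach - if the pages you're removing share a folder and you're on Apache, a single directive can return a 410 for all of them (the folder name here is hypothetical):
RedirectMatch gone ^/discontinued-products/
That returns a 410 ("Gone") for everything under that path, which is the signal Kerry22 is describing.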
-
RE: Duplicate content on subdomains.
It is probably best to create separate profiles in Google Webmaster Tools, because then you can target the sub-domains to the countries in question. At that point, you could also set up separate sitemaps. It'll give you a cleaner view of how each sub-domain is indexed and ranking.
I'm not sure I understand (2) - why wouldn't you include those pages in the sitemap?
-
RE: Incorrect rel canonical , impacts ?
Yeah, I'm unclear as well - could you provide a sample URL, even if it's not the real URL (just something similar)?
If the canonical tag is appearing on both the original and duplicate and points to the original, that's fine. Google will essentially just ignore it on the original. If the original points to the duplicate, though, or they both point to each other, etc., that could be very dangerous.
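In other words, something like this is safe (assuming page-a.html is the original and page-b.html is the copy - both hypothetical URLs):
On http://www.example.com/page-a.html:
<link rel="canonical" href="http://www.example.com/page-a.html" />
On http://www.example.com/page-b.html:
<link rel="canonical" href="http://www.example.com/page-a.html" />
The self-referencing tag on the original just gets ignored; the one on the duplicate does the real work. It's when page-a points at page-b (or they point at each other) that things go wrong.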
-
RE: Reinforcing Rel Canonical? (Fixing Duplicate Content)
With products, it's a bit hard to say. Cross-domain canonical could work, but Google can be a bit finicky about it. Are you seeing the pages on both sides in the Google index, or just one or the other? Sorry, it's a bit hard to diagnose without seeing a sample URL.
If this were more traditional syndicated content, you could set a cross-domain canonical and link the copy back to the source. That would provide an additional signal of which site should get credit. With your case, though, I haven't seen a good example of that - I don't think it would be harmful, though (to add the link, that is).
If you're talking about 80K links, then you've got 80K+ near-duplicate product pages. Unfortunately, it could go beyond just having one or the other version get filtered out. This could trigger a Panda or Panda-like penalty against the site in general. The cross-domain canonical should help prevent this, whereas the links probably won't. I do think it's smart to be proactive, though.
Worst case, you could META NOINDEX the product pages on one site - they'd still be available to users, but wouldn't rank. I think the cross-domain canonical is probably preferable here, but if you ran into trouble, META NOINDEX would be the more severe approach (and could help solve that trouble).
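Roughly, the two options look like this (the site name and product URL are placeholders). The cross-domain canonical goes on the duplicate product page on the second site:
<link rel="canonical" href="http://www.original-site.example/products/widget-123" />
...and the worst-case fallback, on the site you're willing to keep out of the index:
<meta name="robots" content="noindex, follow" />
The "follow" part just means the page can still pass link equity even though it won't rank.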
-
RE: Sudden disappearance from visibility on Google
I wouldn't go crazy on the local-specific pages, but picking a couple and building them up is probably a good idea. It makes sense to have a Santa Barbara page, as long as you can get some solid content on it.
-
RE: Duplicate Page Content for sorted archives?
If you've got a couple of examples, I'd be happy to take a look and/or ping the Product team. Just DM me or email at [peter at seomoz dot org].
-
RE: Confused about canonicalization
Unfortunately, if your site is being indexed (and the duplicates are gone) but it isn't ranking at all, this may have nothing to do with the redirection. You could be looking at a more classic penalty, for various reasons.
Have you run a header checker, like:
http://tools.seobook.com/server-header-checker/
It's always good to make sure the redirects are doing what you think they're doing (they aren't multi-hop, 302s, etc.). Next step is to try very targeted searches - brand queries and long-tail queries in quotes. See if you can rank for anything - is it a total ban or just targeted? At this point, you've got to isolate the problem.
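If you'd rather check from the command line, something like this (with your real URL - the non-www home page is a good test) prints the response headers for each hop:
curl -s -I -L http://example.com/
You're looking for a single 301 straight to the final URL - not a chain of hops, and not a 302.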
-
RE: Duplicate content on subdomains.
There's no perfect answer. Canonical tags would keep the sub-domains from ranking, in many cases. The cross-TLD stuff is weird, though - Google can, in some cases, ignore the canonical if they think that one sub-domain is more appropriate for the country/ccTLD the searcher is using.
Sub-domains can be tricky in and of themselves, unfortunately, because they sometimes fragment and don't pass link "juice" fully to the root domain. I generally still think sub-folders are better for cases like this, but obviously that would be a big change (and potentially risky).
You could try the rel="alternate" hreflang tags. They're similar to canonical (a bit weaker), but basically are designed to handle the same content in different languages and regions:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077
They're basically designed for exactly this problem. You can set the root domain to "en-US", the UK sub-domain to "en-GB" (the ISO country code Google expects for the UK), etc. I've heard generally good things, and they're low-risk, but you have to try it and see. They can be a little tricky to implement properly.
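As a rough sketch (assuming the UK version lives on a uk. sub-domain - adjust to your actual set-up), both versions of a page would carry the same pair of tags in the <head>:
<link rel="alternate" hreflang="en-us" href="http://www.example.com/" />
<link rel="alternate" hreflang="en-gb" href="http://uk.example.com/" />
Each page lists all of the alternates, including itself, and the tags need to be reciprocal across the sub-domains.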
-
RE: Would sharing the same IP address with competitors in the same market hurt SEO?
Historically, I've seen two cases where it went wrong:
(1) On rare occasions, if one site on an IP was penalized, it could carry to other sites - not always, or even usually, but it's happened.
(2) If the sites are cross-linked, those links are more likely to be devalued. A link network on one IP isn't exactly a master stroke of black-hattery.
I suspect (but can't prove) that (1) is less common these days. The simple reality is that we're out of IPv4 address space, so shared IPs are much more common now than they were a few years ago. Google understands that. The purist in me still says get your own IP whenever possible, but the SEO repercussions in 2013 are probably small in most cases.
-
RE: Rel canonical tag back to the same page the tag is on?
For all practical purposes, Google doesn't seem to index pages where it recognizes the canonical as legitimate. You won't find them in a "site:" query, "cache:" command, etc. Google may call that a "filter", but once it's reached that point, the URL is as good as de-indexed. There may be subtle, technical distinctions, but the end result is virtually the same.
-
RE: Duplicate Page Content for sorted archives?
Actually, except in isolated cases, I believe we do parse rel=canonical. I know there are some weird exceptions (such as conflicting canonical tags).
Agreed, though, that the canonical itself should be fine in this case. You could keep the sorting out of the crawl by putting it in a pull-down, for example (so the sorted URLs never get created at all). You could also block the sort= parameter in Google Webmaster Tools. Generally, though, I like canonical better than GWT for this.
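For the canonical option, each sorted variation would just point back to the unsorted archive URL - a sketch with made-up URLs:
On http://www.example.com/archive/?sort=date
<link rel="canonical" href="http://www.example.com/archive/" />
Every value of sort= gets the same tag, so all of the variations consolidate to one URL.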
-
RE: Canonicalising To A 301?
It's certainly a mixed signal. It's hard to predict what Google will do, and they may just ignore the canonical in that case, but I've seen enough problems that I wouldn't take chances with it. My gut feeling is that the 301 is probably overpowering the canonical (and your Google index is showing the trailing slash in most cases), but I'd fix the canonical. You could see some short-term bounce, but I think it's for the best long-term.
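In practice, that just means the canonical should point at the URL the 301 actually resolves to. If /some-page redirects to /some-page/ (trailing slash), then the tag on that page should read (hypothetical URL):
<link rel="canonical" href="http://www.example.com/some-page/" />
...rather than the non-slash version that bounces through a redirect.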
FYI, you've got a ton of title tag duplication within the "/women" pages - you might want to look at adding some uniqueness to the deeper pages. That's unrelated - just something I noticed.
-
RE: Sudden disappearance from visibility on Google
Local is a tricky game, and on-page is mattering less and less over time, at least in the simple sense. In other words, it's not enough to just have "Santa Barbara" on your page - Google needs to see that you're a local business with reviews, citations, etc. They've definitely pushed harder in that direction this year.
You're hitting some of the on-page really hard, too - take your home-page title, for example. It's too long, you mention "Santa Barbara" twice, you have "Web Design" before and after it. Unfortunately, to a human, it just looks keyword-stuffed and borderline spammy.
At the same time, you have an internal link called "Santa Barbara Web Design and Development", which really looks over the top to an end-user of the site. It also means you've got two pages that are essentially in competition - the home-page and a deeper page targeted to Santa Barbara. The deep page is fine, but then ease off on the home-page. Honestly, ease off all around. You're pushing into dangerous territory where your on-page could have tipped from helpful to harmful.
I think you need to look at some of your local-specific factors. Even a few solid (legitimate) reviews could really tip the balance, and easing off the keyword stuffing could help get you out of any filters. I'm not seeing signs of a severe penalty, but I do think your home-page has been devalued or filtered for certain non-brand terms.
-
RE: Confused about canonicalization
Is the www version of your site being indexed? You can check with:
site:www.domainname.com
...or you could search something like:
site:domainname.com intitle:"some title keywords"
...with part of your home-page title. Step one is to make sure that only the "www" version is being indexed and you haven't created duplicates. I'd also run a header checker - often, redirects get set up and don't work as planned (maybe they're taking more than one "hop", or the 301s are really 302s, etc.). It's worth checking, because mistakes happen more often than you'd think.
If everything is working properly and it's a very recent change, it could just be a matter of time to sort it out. Sometimes, any major change (even a positive one) can cause some short-term bouncing in rankings. Unfortunately, these situations can get technical pretty fast, so it's hard to speak in generalities.
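If it turns out the redirect is the problem and you're on Apache, the standard single-hop 301 from non-www to www looks something like this (in .htaccess, with your real domain swapped in):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domainname\.com$ [NC]
RewriteRule ^(.*)$ http://www.domainname.com/$1 [R=301,L]
The key things to confirm with the header checker are that it's a true 301 (not a 302) and that it goes straight to the final URL in one hop.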
-
RE: Too many links? Do links to named anchors count (ie page#nameanchor)?
I'd tend to agree with Dawn - it's a balancing act. I wrote about it a bit back:
http://www.seomoz.org/blog/how-many-links-is-too-many
As for named anchors, though, they should be fine. Google should treat them all as links to "page.html" (as long as they're true named anchors and not AJAX) and basically ignore the 2nd, 3rd, etc. link. They may not even be crawled (I'm not 100% sure on that), but they shouldn't dilute your internal link juice.
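To make that concrete, links like these (made-up section names):
<a href="page.html#pricing">Pricing</a>
<a href="page.html#features">Features</a>
...all resolve to the single URL page.html as far as Google is concerned - the #fragment isn't treated as a separate page.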
-
RE: Website Vulnerability Leading to Doorway Page Spam. Need Help.
I've definitely seen issues lately where mass 301-ing a lot of pages all to one page caused some problems with Google. If there were bad/suspicious links to some of those pages, it could definitely exacerbate the problem. You may have to try killing some of those redirects, especially from the worst pages. If you don't get traffic to those pages and you know the links are suspect (whether or not you created them), I'd strongly consider 404-ing some of those pages and cutting the redirects. How deep you have to cut depends on how bad the damage is and how much risk you're willing to take. It's definitely not for the faint of heart, but if the situation is bad enough, it may be necessary.
-
RE: Does using "pring2web" hurt SEO?
Tend to agree - it's hard to speak in generalities, but 4000 links from one site are likely to get devalued. Even if it's a great site and relevant links, multiple links from the same domain count less and less as you go. Getting 4000 links even from the New York Times wouldn't be nearly the same as getting 4000 links from different sites (probably not even the same as 40 decent sites). At worst, it could start to look manipulative and result in a penalty, but that's pretty uncommon based on just one site.
-
RE: Duplicate content by php id,page=... problem
You can use 301s or canonicals even if it's driven by one template. You'll have to set up the 301 rules based on the URLs themselves or create dynamic canonical tags in the code. If the CMS can drive multiple URLs, it can drive multiple canonicals.
If you can't sort that out in the code, you can't use NOINDEX either. You'd end up no-indexing every version.
Your other best bet may be to ignore the ID= parameter in Google Webmaster Tools. Personally, I consider that the worst of the three options, but it is the easiest and it should help a bit.
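To make the "dynamic canonical" idea a bit more concrete, here's a very rough sketch for a PHP template - the variable name and URL structure are made up, so adapt it to however your CMS identifies the current item:
<?php
// Hypothetical: build the one preferred URL for this item, ignoring any extra parameters
$canonical = 'http://www.example.com/index.php?id=' . (int) $item_id;
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />
The point is just that the same template logic that generates the duplicate URLs can also generate one consistent canonical for each item.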
-
RE: How long does google take to pick up your new page headings and on page optimisation and when will results show for keywords due to this
Yeah, agreed - it can really vary, and it depends on how often Google re-crawls your site, how deep the page is, etc. Keep in mind that "web design" keywords are very competitive generally, so just adding them to new pages isn't going to automatically get you ranking.
-
RE: Exact URL Match For Ranking
It's certainly true that EMDs can still have an impact (it's declining, but they still matter), but it's rare for a brand new domain that's redirected to rank well, because there's nothing to redirect. You can't redirect the name itself, just the strength of the link profile. I suspect they may be doing something a bit more elaborate behind the scenes. They could be redirecting older, more powerful sites, or they could have a link network set up, as Matthew said.
Long-term, though, it will eventually burn out. It's frustrating, because these tactics can work for a while, but Google is definitely taking a dimmer view of them over time, and it's a risky play.
-
RE: Webmaster Tools shows 0 visits, Analytics shows 1000s?
I take those GWT numbers with a grain of salt, but 0 is certainly odd. If you segment your GA numbers, is anything strange going on? Have you lost traffic for a particular URL, for example, or seen a shift in your origin countries?
In most cases, I'd trust the GA numbers much more than the GWT numbers, and this could just be a fluke, but I'd segment your GA data as much as possible. See if there's a story behind the total.
-
RE: Website Vulnerability Leading to Doorway Page Spam. Need Help.
Unfortunately, even across the broader community, specific technical issues with specific CMS platforms can be really hard to find an answer to. You need someone who's been in exactly your situation, in most cases. I'm seeing multiple mentions on the web for Plone security holes:
http://plone.org/products/plone/security/advisories/20121106-announcement
If you think this is primarily an issue of these bad links, then using the new disavow tool is your best (if imperfect) option right now, most likely. Otherwise, you're left contacting each website to let them know they have a hole. If you think this is a new vulnerability, you could try to work with Plone directly, but that would rely on all of these sites patching the hole. In other words, even if Plone releases a fix, everyone has to actually apply it, and that often doesn't happen. So, cutting off the links via Google is probably more effective.
Given that you switched platforms, though, I'd really dig deep and make sure you haven't run into other problems. For example, did the WordPress switch introduce new duplicate content? Did any of your TITLE tags, URLs, or other on-page factors change? Are the links you're "duplicating" starting to look like a network to Google? It's entirely possible for one site to get hit and not others, especially in a competitive vertical. I'd look long and hard at your whole portfolio and make sure this isn't a signal that something worse is about to happen.
That's conjecture, but I've just seen too many SEO companies jump to the conclusion of foul play, only to miss something they had control over. Make sure you're looking at the whole picture.
-
RE: Does the root domain hold more power then an inner page?
As Google starts to factor in user behavior, like CTR, this kind of thing may be even more important. I think it's a very small piece of the ranking puzzle right now, but I'd expect it to grow in the coming years. Google wants to rank the page that best answers the question, ultimately.
-
RE: Does the root domain hold more power then an inner page?
This is a very complex issue, but I think Jonathan's summed it up pretty well. Generally, home pages collect a lot of the "mass" of inbound links, and so they can overpower other pages. On the other hand, deep pages are easier to target to specific keywords and sometimes have targeted anchor text. I've seen cases where someone wanted the home-page to rank, but a deep page was ranking, and I've seen the opposite.
Rand wrote about that general problem here:
http://www.seomoz.org/blog/wrong-page-ranking-in-the-results-6-common-causes-5-solutions
While it's not exactly what you're asking, it covers the general logic of why one page might win over another.
-
RE: Page authority questions?
So, keep in mind that "cheating" Page Authority isn't really very useful - we're just creating a predictive metric. We have no control over whether you actually rank. PA is a machine-learning model designed to predict the ranking ability of a page, but it's far from complete or perfect (no one number can be, although we're trying to improve it all the time).
Sub-domains are tricky animals. Sometimes, they inherit a lot of power from the root domain. Sometimes, they get fragmented and essentially act like entirely separate domains. Our PA model doesn't really account for all of that complexity. It's a best guess, ultimately. There are certainly cases, though, where a sub-domain ranks well without many unique links, solely based on the power of the root domain. That's far from guaranteed, though.
-
RE: 301 redirect on yahoo hosting
I think Paul's got you on the right track, although I'll just add a couple of things:
(1) META Refresh can be a bit unpredictable, in terms of passing inbound linking power and SEO benefits. Google has generally recommended against using it the last couple of years.
(2) 301-redirects would be the preferred solution, but you could use rel=canonical in a pinch. It will generally help consolidate any duplicates and pass any inbound link "juice":
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
The disadvantage is what Paul discussed - canonical tags won't send the visitors to the canonical URL - they're only seen by search engines. The advantage is that you could create them within the HTML itself and don't need the Yahoo platform to support it.
I tend to agree with Paul, though - there are plenty of inexpensive hosting options that do support redirects. If it's important, it's worth considering a switch.
-
RE: Rel="canonical" questions?
Oh, ouch - yeah, that definitely has the potential to spin out of control. I think rel=canonical would actually be great there, because the product page really is 100% duplicated.