Best posts made by Dr-Pete
-
RE: Using canonical for duplicate contents outside of my domain
Do you want the .sg site to only rank regionally in Singapore? You could use rel=alternate hreflang to designate the language/region for the two sites and help Google more accurately decide when to display which site. This also acts as a soft canonicalization signal and tells Google that the pages are known duplicates.
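For illustration only (placeholder domains), the annotations in the <head> of each version might look like this, with every page listing all of the regional variants, including itself:
<link rel="alternate" hreflang="en-sg" href="http://www.example.sg/" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/" />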
-
RE: Is Zemanta Still Safe
Looking at some Zemanta links in the wild, it doesn't seem too aggressive. Take for example, the bottom of this blog:
It depends on your goals. From a traffic perspective, as long as the links are relevant, it's fine. I can't tell you whether Zemanta will drive traffic to your particular site (I haven't used it), but it's certainly plausible. It probably depends a lot on your niche and the population of sites using the system.
From an SEO standpoint, my gut reaction is that it's fairly low-risk but may also be low-gain. Google could easily recognize these plug-in links and simply devalue them. I doubt a block of four relevant links is going to get outright penalized - it's more likely Google will just see this as a link-exchange of sorts and simply dial down the volume on those links.
That's not to say it's risk-free (we've certainly seen many link networks get hit in the past year), but I suspect that John was speaking in generalities and not about Zemanta in particular. On the other hand, no tool like this should be seen as some kind of panacea for link-building. It's just one piece of the puzzle for getting exposure and traffic.
-
RE: Is my knowledge graph code wrong?
Keep in mind that well-formatted structured data is only step one of (X), where Google defines X and X is always changing. Ok, I exaggerate a little, but the frustrating key point is that structured data is no guarantee that you will get enhanced results or any kind of Knowledge Graph entity. It's just a suggestion, and Google has to factor in a lot of other things. If the testing tool is giving you the green light, that's usually about the best you can hope for.
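For reference only (generic names and URLs, not your actual markup), a minimal Organization snippet in JSON-LD would look something like this:
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "http://www.example.com/",
  "logo": "http://www.example.com/logo.png",
  "sameAs": ["https://twitter.com/example", "https://www.facebook.com/example"]
}
</script>
Even when the testing tool validates markup like that, Google still decides whether (and how) to use it.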
What kind of KG entity are you trying to produce, exactly (do you have a sample query with a competitor who has it)?
-
RE: Anchor text after the recent Panda update
So far (and this is based on very limited data), it appears that Penguin 2.0 targets many of the same tactics that Penguin 1.0 did, just on a deeper level, across more pages, and probably with a bit less forgiveness. So, I think it's safe to say that anchor text manipulation is still a factor in the new Penguin update.
I agree with Mark that it's dangerous to think in terms of pure percentages, as it's probably more complex than that, and may be dependent on a mix of factors, including the SERP "neighborhood" you're in. The problem with any aggressive manual link-building is that it's prone to looking artificial, so it's important to look for ways to both build and attract links, which naturally creates a variety of anchor text. As long as you're building by hand, it's going to be hard not to "over-optimize" (for lack of a better term).
I definitely think it's a good idea to diversify across landing pages. If you're going to build links manually, I'd just be careful not to use the same tactics on every landing page. That pattern is going to look more obvious when Google sees it across the whole site. Mix it up as much as you can, not just anchor text, but the actual link sources and types of links. It's really easy to carry one tactic too far that's ok in small quantities.
-
RE: Panda and Large Web Presence
Panda updates have hit microsites where content across the sites was either duplicated or "thin", although thin is often in the eye of the beholder. Keep in mind, and I mean this kindly, that "unique" is not always high-quality, and the quest for technical uniqueness can lead to practices where microsites are just spinning out versions of content with slightly different keyword concepts or ordering, etc. In other words, it's technically "unique", but most people wouldn't view it as valuable.
Early Panda updates did hit certain kinds of spun-off content hard, including geo-located content. In other words, if you spun out your plumbing services page for 5,000 cities and it only differed by city names and a few basic facts (even if technically unique), that's definitely something Panda came down hard on.
Truthfully, though, it's really tough to tell without specifics. I'm more on EGOL's side of the fence - my gut feeling is that 20 micro-sites is excessive and I'd strongly suspect quality issues.
Some questions that might help you pin things down:
(1) Has traffic dropped across the entire cluster of sites or just the main site?
(2) Can you pin traffic drops down to any given date, set of keywords, or pages? Drill down as far as you can - that's always the most important first step, IMO.
(3) Are some of your micro-sites essentially dead - no traffic or ROI? You might not have to go all-or-none here. Odds are that some small % of your micro-sites are creating a large % of your value (let's call it an 80/20 rule). It's likely you could kill 10-15 of them with very little harm - at least that's what I typically see. You don't have to drop all 20 cold-turkey.
-
RE: Canonical
I endorsed Ade's answer/comments, but I just want to point out something important that people frequently overlook. While rel=canonical and 301 redirects can have a similar impact on SEO, they're completely different for visitors. If a person comes to Site B and there's a canonical tag, they see Site B. Google credits that page to Site A and Site A shows up in search results. If a person comes to Site B and there's a 301-redirect, they go straight to Site A and never see Site B (done properly). These are two completely different experiences with completely different goals. So, set aside the SEO aspect for a minute and ask what you want to have happen to your customers.
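To make the contrast concrete (illustrative URLs only), the two options on Site B would look roughly like this:
<!-- Option 1: canonical tag in Site B's <head> - visitors still see Site B -->
<link rel="canonical" href="http://www.site-a.com/page/" />
# Option 2: 301 in Site B's .htaccess (Apache) - visitors go straight to Site A
Redirect 301 /page/ http://www.site-a.com/page/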
-
RE: Change in Meta Description - 320 to 160
I've got a blog post going up tomorrow morning, but the short answer is that it looks like Google has reverted to the previous limit (roughly 155 characters). There are a small % of display snippets with >300 characters, but those are exceptions to the rule and seem to be connected somewhat to Featured Snippets. In most cases, those >300 descriptions are pulled from page content.
-
RE: Do you need a place to ditch bad links?
It seems like you're trying to share something you believe is helpful, so I apologize if this comes off as overly critical, but that's really not a good tactic at all. First off, it's unnecessary. If you are fortunate enough to be able to isolate and redirect a page with bad links (as you said, assuming there aren't good links in the mix), then you'd do just as well to 404 that page entirely. There's no need to redirect it somewhere.
Second, it could actually look manipulative. Redirecting a page full of bad links to a 3rd-party site would look to me like negative SEO. I have no proof Google penalizes this particular behavior, but it seems like a red flag that could potentially cause risks for the site setting up the redirects.
Even if the risk of that happening is <5%, it's a risk on top of doing something completely unnecessary. Just kill the page (404 or Meta-Noindex if it still has user value) - it's a clear signal to Google. If you start getting weird with 301-redirects, you could raise alarms.
-
RE: Do you need a place to ditch bad links?
As best we know, 404s should kill the page as a link target, which essentially severs the links. I don't think Google views the links on a domain-wide level at that point. If they did, then honestly it's likely the same rules would apply to other HTTP responses, including 301s. If the page is dead, you're pretty safe at that point. I don't think 301-redirecting the bad page is going to have any additional positive impact.
Re: "breaking rules", the problem is that it's very subjective. Let's say that a bunch of SEOs realize that Penguin and other link-based penalties created an opportunity, and they start taking their own pages with bad links and 301-redirecting those to competitors (maliciously). If Google sees that pattern and then they see you 301-redirecting your links to a 3rd-party site, they may not be able to separate you from the pattern. In other words, they're going to assume bad behavior.
That's speculation, of course (in this specific case - I've definitely seen them mistake bad intent in other areas). I just don't see that you'd be gaining anything by taking on that risk, even if it's small.
-
RE: URL best practices, use folders or not ?
It's a trade-off, for both SEO and users, and I don't think there's one answer that fits every situation. The category level can add information, but it also makes URLs longer, which can be bad for both bots and people. If you have short, descriptive categories that aren't repeated in the product/page names, and those categories mimic your site structure, then I think it can be positive.
My argument was mostly against people adding categories just for SEO benefit (it's probably minimal, at best) or repeating every category, sub-category, etc. to the point of absurdity, causing keyword cannibalization and massive URLs. For example:
www.bobscamerashop.com/cameras/digital-cameras/canon-cameras/eos-cameras/camera-canon-eos-rebel-t3
Of course, that's also keyword stuffed, but I'm exaggerating to prove a point. You can go too far in either direction.
In general, though, I don't think categories in the URL are necessarily bad. In some cases, as Woj said, they could be a positive for users and possibly even SEO.
-
RE: Honest thoughts needed about link building / removal.
Some of the risks of using the disavow tool (especially the more conspiratorial ones) are overblown, IMO. There are two main risks:
(1) You might disavow the wrong links or links that are actually helping you. Considering that you've already requested removal and decided to get rid of these links one way or another, this is a bit of a moot point.
(2) It might not work. The reality is that Google might not honor the disavow if you haven't shown progress in link removal, disavowal may not help if the problem wasn't what you thought it was, and/or disavowal may not help if you don't get the problem links.
I think (2) is where things get tricky. If you disavow a ton of low-quality links, but it turns out that Google really was targeting just a handful of paid links, then the disavowal may do nothing (that's just an example).
I'm concerned from your comments that this doesn't sound like a clear link-based penalty situation, and so you may end up spending a lot of money to solve the wrong problem and take a hatchet to your links in the process. I think you'd be better off putting some cash toward a second opinion than paying someone to send emails and submit a disavow (both of which take time but you can pretty easily do yourself).
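For what it's worth, the disavow file itself is just a plain text file you upload in Webmaster Tools, so this isn't rocket science - something like this (example domains only):
# Removal requested by email, no response
domain:spammy-directory.example.com
http://cheap-articles.example.net/post-with-paid-link.html
Lines starting with "#" are comments, "domain:" disavows an entire domain, and bare URLs disavow individual pages.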
-
RE: Implementing rel=canonical in a CMS
Ideally, you'd fix the crawl path, but that may be tricky (unless they've patched the CMS). You could add the canonical to just the "page=1" version, but admittedly that's a bit code-intensive.
An alternative idea that's fairly Google-friendly: you could add a "View All" version and then point the canonical on all search pages to that version. Especially since the full set is only 2 pages, that could work well in your case, and you wouldn't have to worry about all the variants of search results not getting crawled.
-
RE: How to determinate current site situation?
Ultimately, you have to pick something and just get started. I can't fully audit a site in Q&A, but here are a few starting points:
(1) Your home-page title is over 200 characters long. It's just too much, and it may look keyword stuffed. I have the feeling you're trying to target everything, and that just doesn't work. Pick a focus and target a couple of keyphrases with each major page at most.
(2) Many of your online store pages have the same title tag, and all your product pages start with the same long phrase. The "Online Store" in the title is almost useless, and I'd move the brand to the end. Put the most unique keywords up front - it's better for SEO AND users. You probably want to control or even canonicalize some of these pages, but that's a complicated topic.
(3) You seem to have a bunch of pages doing a 302 to a 404 page, but the 404 returns a 200 code. In other words, your 404 isn't actually 404'ing. So, Google is never clearing out these old pages. If that's a common occurrence on the site, you need to fix it, or you're just filling the index with junk and hurting your ranking ability.
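You can verify that last point from the command line with something like this (placeholder URL) - if the first line comes back "HTTP/1.1 200 OK" instead of "404 Not Found", your error page is a soft 404:
curl -I http://www.example.com/some-page-that-should-be-gone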
Obviously, that's not everything, but hopefully a few useful starting points.
-
RE: Custom Landing Page URLs
First off, a slight word of warning. When you spin out the custom landing pages, make sure they have unique content and don't do it in large numbers. It used to be that these kind of long-tail, keyword-targeted pages could help SEO (or, at worst, not hurt it). Since Panda and Google's attack on thin content over the past couple of years, these pages can actually cause you SEO harm. It depends a lot on the quantity and quality, of course. If you spin out 500 pages on a 50 page site just to target a bunch of keywords, and those pages only differ by a sentence or a few words, you're going to do more harm than good.
I doubt the two URLs you list would be much different. Theoretically, the shorter URL will focus more keyword power on "silver fish", but URL keywords are just one, relatively weak ranking factor, and you're talking about 4 characters.
You could use a hash-tag style URL, like:
www.domain.com/silver-fish.html#clp
I think those characters would be ignored by Google. Unfortunately, you'd have to modify your analytics to read them (as they'll be ignored by most analytics packages, too). Here's an article on how to do it in GA:
http://www.searchenginepeople.com/blog/how-to-track-clicks-on-anchors-in-google-analytics.html
That's a pretty technical feat for something that I doubt would have much impact, though.
-
RE: Optimize a Classifieds Site
Unfortunately, the painful reality, especially if you've been hit by Panda, is that you probably can't support that scale - it looks thin to Google. 500 cities X 50 categories = 25,000 "category" pages, so to speak, all of which are basically just search results. For most sites, that's just too much.
I'd definitely keep the cities as sub-folders. If you go the sub-domain route, you could fracture your internal link-juice even more. It depends a bit on the authority and marketing budget of the site. If each city is a separate property with its own sales force, budget, etc., there may be a logic to sub-domains. Unless you're Groupon or someone like that, though, it's probably a bad idea.
You may have to prune down the indexed content, to be frank. I'd look for other Panda factors, too, like aggressive ad density (too many ads to too little content) or very thin pages. If you have tons of cities or categories with no listings, META NOINDEX them. You could even do it dynamically - only let Google index a page if it has 1+ listings, for example.
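The tag itself is simple - the trick is emitting it conditionally from your templates (this is just the generic form):
<meta name="robots" content="noindex, follow">
The "follow" part lets link equity keep flowing through the page even though it's out of the index.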
I'd also take a look at other low-value content, like paginated search. If each city has 100s of pages and you're indexing page 2, page 3, etc., consider consolidating them. It's a tricky topic, but Adam Audette has a great write-up here:
http://searchengineland.com/five-step-strategy-for-solving-seo-pagination-problems-95494
These pages can look very low-value to Google. Add in search sorts and other variants, and your 25K categories could be exploding into hundreds of thousands of pages before Google even gets to the listings themselves. The listings (the classified ads) are the real meat of the site, and that's where you want Google to focus.
-
RE: Help I don't understand Rel Canonical
The basic problem, and it's actually confusing to a lot of people, is that what you think of as a "page" is the physical file on your server (like your home-page). What Google thinks of as a page is a URL. So, if multiple URLs go to the same place, they look like duplicate copies to Google (which treats each URL as a separate page).
It's especially common with home-pages, where you can have www vs. non-www, root ("www.example.com") vs. the filename ("www.example.com/index.php"), etc. If those variations get crawled and indexed by Google, the duplicates can dilute your index and weaken your ranking ability. So, rel-canonical helps tell Google that those variations are all the same page.
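For example, adding this to the <head> of the home-page (with your own preferred URL swapped in) consolidates all of those variations:
<link rel="canonical" href="http://www.example.com/" />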
-
RE: I try to apply best duplicate content practices, but my rankings drop!
Agreed with Alan (deeper in the comments) - you may have cut off links to these pages or internal link-juice flow. It would be much better to either 301-redirect the "/shop" pages or use the canonical tag on those pages. In Apache, the 301 is going to be a lot easier - if "/shop/product" always goes to "/product" you can set up a rewrite rule in .htaccess and you don't even need to modify the site code (which site-wide canonical tags would require).
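A minimal sketch of that rule, assuming Apache with mod_rewrite and that every "/shop/..." URL maps to the same path minus "/shop" (test it before rolling out site-wide):
RewriteEngine On
# 301 any /shop/... URL to the same path without the /shop prefix
RewriteRule ^shop/(.+)$ /$1 [R=301,L]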
The minor loss from the 301s should be much less than the problems that may have been created with Robots.txt. As Alan said, definitely re-point your internal links to the canonical (non-shop) version.
-
RE: Canonical URL tags help I am not sure what this is
Unfortunately, I don't know Etsy well, but I think Thomas is right on track - you don't necessarily want to pre-emptively add them, especially if you're not clear on how they work. Typically, it's more for preventing duplicate content issues. We're re-evaluating how we grade the canonical tag, since this is a bit confusing. It's a good practice on some pages (like your home-page) that naturally tend to have more than one version, but adding it sitewide can be tricky.
-
RE: Diagnosing duplicate content issues
I hate to advocate full-scale blocking, but if you really took a hit, and you know the timeline coincided with the new content, it's an option. It might be better to scale back and re-roll out the new content in chunks.
One warning - if this is a regular filter (you added a bunch of duplicates), Google should start re-ranking content as soon as the blocking kicks in (this may take weeks, not days). If this was Panda-related or more severe, though, it could take a month or more to see an impact. Not to be the bearer of bad news, but don't Robots.txt block the pages for 2 days, decide it didn't work, and unblock them.
A slightly less extreme approach would be to META NOINDEX all of the pages. That way, you could start to selectively lift the NOINDEX on content piece by piece. If you Robots.txt block all the new directories, it's going to be hard to re-introduce the content. You'll end up releasing the block all at once and potentially just having the same problem again.
-
RE: To Many Links On Page
Did you modify the site recently? I'm only seeing about 74 links on the home-page.
In general, I think the other commenters raise good points - 100 links isn't a hard limit, but there's a fundamental truth for both search spiders and human visitors: the more links you have, the less love each link gets. It's always a balancing act, but building a more hierarchical approach can focus attention and internal PR at the top. For 74 links, it's just tweaks (I think you could improve the left side-bar a bit). For 1,000 links, I'd definitely consider a new information architecture.
-
RE: Can PDF be seen as duplicate content? If so, how to prevent it?
Oh, sorry - so these PDFs aren't duplicates with your own web/HTML content so much as duplicates with the same PDFs on other websites?
That's more like a syndication situation. It is possible that, if enough people post these PDFs, you could run into trouble, but I've never seen that. More likely, your versions just wouldn't rank. Theoretically, you could use the header-level canonical tag cross-domain, but I've honestly never seen that tested.
If you're talking about a handful of PDFs, they're a small percentage of your overall indexed content, and that content is unique, I wouldn't worry too much. If you're talking about 100s of PDFs on a 50-page website, then I'd control it. Unfortunately, at that point, you'd probably have to put the PDFs in a folder and outright block it. You'd remove the risk, but you'd stop ranking on those PDFs as well.
-
RE: The importance of the home page and subdirectories
It's tricky for home pages. Google really tends to prefer the root URL, and it's best if you can target that. Unfortunately, .Net has a bad habit of forcing you to use a deeper page.
If you can't resolve to the root, then you need to be consistent with your internal links before you set the canonical. If you're resolving and linking internally to:
/keywords/default.aspx
...then canonical to that page. It's not quite as good as the root, but by setting the canonical to "/keywords" you could actually be creating a third URL that isn't represented in either your inbound or internal links.
In other words, the first step to a good canonical implementation is to actually use ONE URL, no matter what it is. The canonical tag itself is a bit of a band-aid. It's effective, but fixing the on-page structure is the first, best step.
-
RE: Can PDF be seen as duplicate content? If so, how to prevent it?
I think it's possible, but I've only seen it in cases that are a bit hard to disentangle. For example, I've seen a PDF outrank a duplicate piece of regular content when the regular content had other issues (including massive duplication with other, regular content). My gut feeling is that it's unusual.
If you're concerned about it, you can canonicalize PDFs with the header-level canonical directive. It's a bit more technically complex than the standard HTML canonical tag:
http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html
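The header itself looks like this (example URLs - it points the PDF at its HTML equivalent); on Apache you'd typically set it via mod_headers in .htaccess:
Link: <http://www.example.com/white-paper.html>; rel="canonical"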
I'm going to mark this as "Discussion", just in case anyone else has seen real-world examples.
-
RE: Good idea to use hidden text for SEO purposes due to picky clients not allowing additional content?
That's been specifically called out by Google and can definitely get you penalized. It's very dangerous. Google will not treat hiding text just because your client doesn't want visible content on the page as a legitimate use, for one simple reason: they have no way to know your intent. All they see is a heavily abused tactic in play, and they can and will penalize it.
-
RE: Problem with 404 and 500 Status code pages
Unfortunately, these things are really tough to diagnose from generalities (you have to be able to dig in and see what's happening on the actual site). Going from the non-www to "www" version should be ok, but any full-scale change can cause a short-term rankings bounce. A couple of questions:
(1) Did you only change from non-www to www or did you make other URL changes? Make sure you've mapped every major URL to a new URL - it's easy to miss some.
(2) Are the 301s working and are they single-hop? Test some individual pages with a header checker, like this one:
http://tools.seobook.com/server-header-checker/
I think our Firefox toolbar will do it, too.
(3) Are the pages we show as 404s returning 404s (again check the headers)? Should they be? If you have legit pages returning 404s and 50Xs, you've got a problem. Unfortunately, why you're getting 404s is almost impossible to tell from the outside - it's specific to your server/platform.
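For both of those checks, a quick curl from the command line also works - with -L it prints the headers for every hop in a redirect chain, so double 301s and bad status codes stand out (placeholder URL):
curl -I -L http://www.example.com/old-page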
-
RE: Reducing number crawl-able links?
I haven't seen this solution in play, but I'd be cautious. It's a bit gray, at best. Google is okay with AJAX-driven menus, for example, and often won't crawl them, but this adaptation of HTML5 is kind of cheating the attributes of a link, and that could get you into trouble. Google may just outright crawl them anyway.
Truthfully, the data on mega-menu usability is really mixed. People often find them much less useful than webmasters and business owners think they are. Many people never see them at all. I think you should really consider your information architecture and whether even having mega-menus is useful. When you make everything important, each thing becomes less important, both for people and for Google.
-
RE: Do you bother cleaning duplicate content from Googles Index?
I DO NOT believe in letting Google sort it out - they don't do it well, and, since Panda (and really even before), they basically penalize sites for their inability to sort out duplicates. I think it's very important to manage your index.
Unfortunately, how to do that can be very complex and depends a lot on the situation. Highland's covered the big ones, but the details can get messy. I wrote a mega-post about it:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
Without giving URLs, can you give us a sense of what kind of duplicates they are (or maybe some generic URL examples)?
-
RE: Question about url structure for large real estate website
Unfortunately, other than being 99% sure there was an algorithm update around May 9th (dubbed "Phantom" by some folks), and even having seen it hit a former client, we have very few clues about what it actually did. Some folks have suggested it was "Panda-like" in which case thin content could be a culprit.
It's really tough to tell without seeing the site and the scope of the problem, but doubling up all of your rental pages could absolutely create problems, especially when you pair that with geographic searches and drill-downs. A couple of things I'd dig into before you completely change your structure:
(1) What's the scope of the doubling up, relative to your entire index size?
(2) Are there other culprits, such as search sorts and filters in play?
(3) Have you managed pagination (most likely with rel=prev/next, but there are other options)? With all of these geographic folders, you might have a ton of paginated search.
I think reducing your index size could be beneficial, but I'd make sure that the rental pages are the primary culprit first. I don't think the property URL change would help that much. It's a nice-to-have, but it wouldn't impact Panda or cause you major problems with Google the way it is. It's just slightly less user-friendly and slightly less keyword-targeted. I'd deal with the thin content first.
-
RE: Indexing an e-commerce site
It may be tedious, but you need to do it, one way or another. Theoretically, these product duplicates could be severely harming your client's ranking ability.
Practically, I'm not seeing much evidence, though, of these duplicate paths or duplicate products in the Google index. I am seeing other duplicate pages, like search results and https: versions of your product pages. You have a few canonicalization issues going on.
Ideally, no matter what category path you take, you'll land on one URL. The very small usability consequences of the path change (in my experience, at least) are far outweighed by the risks of spinning off dozens of duplicates. As @activitysuper said, there should be a way to do this dynamically - you're changing a couple of templates, not individual product pages.
I would have to see the duplicate product URLs in action, though. I'm not finding that specific problem.
-
RE: Does this site have a duplicate content issue?
Each of these problems may have a unique solution, so it gets complicated. Regarding the "design your own" pages, I'm seeing over 5K of those URLs in the search index, and they do probably look very similar. Since these are not the core product pages, I'd strongly consider using META NOINDEX on them. I find that Robots.txt does not do a good job of blocking content that has already been indexed, in most cases. You can add the meta tag dynamically in your code, hopefully, so that just a few lines of code will serve all of these pages.
While these pages aren't "true" duplicates, they look similar enough that, at the scale of your site, they really are diluting your ability to rank. In extreme cases, if you're also serving up product variations, paginated search results, etc., you could even run into Panda issues. Whether or not this is your core problem, from an SEO perspective, cleaning it up can't hurt, and may make it easier to find other problems.
-
RE: Duplicate content issue
Sorry, I'm slightly confused. Are the wiki and forum duplicating each other, or are they each duplicating content on your root domain? Even in a sub-domain, they could be diluting content in your root domain, but it really depends a lot on the situation and extent of the duplication.
You could use the canonical tag to point them to the source of the content, or you could block them (probably META NOINDEX), but I'd like to understand the goals a bit better.
-
RE: 2 version of meta description and title
As others said, rewrites are common, but rewrites that include information that no longer exists are a bit odd. Typically, it means either Google is caching old content somewhere (including a duplicate copy of the page), or they're pulling from a directory somewhere, like the Open Directory.
You could try the NOODP tag, since it's harmless, although the hit rate is low (i.e. don't get your hopes up).
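The standard form is just a robots meta tag in the <head>:
<meta name="robots" content="noodp">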
If it were me, I'd put a unique snippet of the undesired meta description in quotes and do some exact searches on Google. Try to find out where Google thinks that text lives.
-
RE: How can I tell which website pages are hosted on the root domain vs the www subdomain?
Did something change recently? I'm currently seeing your non-www pages 301-redirect to the "www." version, at least in a handful of cases. There are a couple of oddities where a page double-301s, but that looks isolated. I'm not seeing any clear signs of a problem.
You don't need both canonicals and 301s for this particular issue, although a canonical tag can still have value on other parts of the site (including the home-page).
-
RE: What to do about all of the other domains we own?
I think Robert's right - it's a matter of moderation. Sites change domains, for example, and 301-redirects are perfectly valid. Sometimes, sites consolidate and, again, that's natural. The problem is that people have also bought tons of domains and redirected them to game the system, so Google is watching.
The gradual approach is very sensible. You don't want to lose this link equity - absolutely agreed on that point. So, start with the most powerful sites and redirect one by one. Measure what happens and adapt along the way based on the data.
When you get to the weaker sites, it may be time to let them go (especially if they look like duplicates). This isn't all or none. I'm definitely not saying to NEVER 301 or to always 301 - it's a balancing act. My fear is that if you do this with dozens of domains in one day, you'll get smacked down. So, ease into it.
-
RE: Do links in the nav bar help SEO?
One thing I'd keep in mind is that a lot of your main nav pages aren't always great landing pages for search users. "About Us" is a decent landing page for finding out about your company (and that or the home-page should rank fine), but it and "Contact Us" aren't usually good bets for your non-brand keywords. It's often better to have a dedicated page targeting separate services.
I think it's fine to use keywords for the "Services" page, or you could split that page into specific services. Then, each service would have a keyword-targeted internal link and content. In that sense, think of your services like products - you branch from a main "store" page to categories to individual products. Done well, it serves both users and SEO.
-
RE: What is considered "Over Optimization" for SEO?
In terms of the current algorithm change, we really don't know yet. I suspect that Donnie's right - it's really the same sort of stuff that we've referred to as over-optimization all along:
- Keyword-stuffed copy, titles, and URLs
- Aggressive exact-match anchor text
- SEO-targeted copy (like 1000s of thin pages that only differ by a keyword)
- Suspicious code-order changes
- Hidden or partially hidden content
I suspect that, unlike the past, Google may start treating over-optimization more like Panda. In other words, dozens of over-optimization variables will get fed into a model and spit out one unified factor. I also suspect the threshold will be pretty high - you're going to have to be knowingly manipulating your site in an aggressive way. I don't think anyone is going to get penalized for an extra keyword repetition or a TITLE that's a bit too long.
-
RE: Guest posts/article marketing can be considered as paid posts by SEs?
Guest posts/articles wouldn't generally be considered paid, but they can create problems if they're obviously low quality or if you're spinning the same articles across dozens of sites. It really depends on a lot of factors:
(1) How much you use this tactic. No single tactic like this, especially if low-quality, should be the bulk of your link-building. Diversity is very important ("natural" link profiles tend to be diverse).
(2) If the sites are part of a link network. There's been a big crackdown lately on networks, and many article marketing services use them. If you're buying into a network or service, it's a lot more likely to be a problem. If you're finding places to guest post manually, it's probably not a big risk.
(3) If the post/articles are clearly spammy. Use your judgment - if you look at the blogs your articles are posted on, and there are 20 other articles all on unrelated topics in spammy verticals (mortgages or pay-day loans, for example), it's going to be easy for Google to spot your quality issues. You may not get penalized, but the links will be devalued.
-
RE: SEO best practice: Use tags for SEO purpose? To add or not to add to Sitemap?
It's possible that these percentages may hold true for some site in some vertical, but "on-page" covers issues like site architecture that are incredibly important for larger sites. I've seen sites made or broken by on-page. All 22 Panda updates are essentially focused on on-page.
There's no one formula, and I think it can be very dangerous to suggest a one-size-fits-all approach. I wrote a whole post on this debate:
http://www.seomoz.org/blog/whats-better-on-page-seo-or-link-building
I also think the role of social in 2012 has been seriously overstated. Social mentions can be great for getting a site noticed and indexed, especially in a low-competition vertical, but the impact often isn't lasting, without sustained activity. Sustained activity requires content and solid on-page, so it's really tough to separate the two.
-
RE: Can Location Information Decrease National Search Volume ?
Miriam has a much better head for local than I do, but my sense of recent local updates (like "Venice") is the same as hers. If Google starts to treat a SERP as local, then being seen as local should help you and not being seen as local may hurt you. On average, I think we're seeing more SERPs being treated as having local intent, so generally getting Google to locate you should be a positive. Of course, for any given keyword, there could be exceptions (especially if you get a huge amount of traffic from a few broad, "head" terms).
Hosting issues can always cause short-term problems, and there have been a lot of shake-ups in the algorithm recently. Google is probably testing over-optimization updates (not full roll-outs, but we're seeing weird things happening), and they had a glitch on 4/17 that shook up some sites. They've also been hitting link networks hard, but I find it hard to believe that a handful of reputable local directories would be impacted by that. If you built up 100s of low-quality directory links and doubled your link profile in the process, that could be trouble. Adding a couple of directories to a solid profile should pose no risk.
My gut reaction is that there's more going on here than just local factors.
-
RE: Backlinks for the same IP address
I'd tend to agree - this used to be a bigger deal, but as IPv4 space runs low and more and more sites share IP addresses, Google has had to be more forgiving. I'm slightly concerned that the 20 domains are all duplicated, though - that could start to look like a link network. If it's just one site and you have a solid link profile, it's probably no big deal. In most cases, Google would just filter those links and they might even filter the domains (basically, they'd be ignored). If you're seeing this situation repeat itself, though, that's a different matter.
-
RE: Big Site Wide Link
Typically, "devalue" just means that the links don't count as much as they might under other conditions. Obviously 750K links from one site don't count nearly as much as 1 link from 750K different sites (by a huge amount), but that's just because site-wide links are relatively common and Google knows to weight them a bit differently. That shouldn't be confused with a penalty.
Agreed with Julie that, if this is one of the sponsor banners, it could be seen as a paid link. By itself, I don't think this poses a threat, but if you have a weak link profile otherwise or are getting a lot of similar sponsorships, you may want to nofollow some of these links down the road. If you're not seeing any danger signs, though, I suspect you're ok for now. There's nothing spammy about the site, and all of the sponsors seem relevant.
-
RE: Penalized by Penguin 2.0
I just want to add that this is a difficult and even potentially dangerous process. There's no way to know exactly what Google didn't like, so it's easy to cut too deep. On the other hand, if you don't cut deep enough, Google may actually take that as a negative sign (don't just keep removing 5 links and then submitting a new disavow).
We don't know much about how Penguin 2.0 differs from 1.0, except some hearsay that it's targeting the page level now (and isn't just a sitewide action). So, I think the first thing you need to do is really pin down which keywords lost ranking and which pages those keywords target. It's possible that you've been too aggressive with links to a specific page or with anchor text using specific terms. If you see a pattern, then you can focus on just those links.
Also, keep in mind that, if the vast majority of your link profile is low quality, there may not be much left after you disavow. In other words, you may remove the bad links and still not recover. So, as Takeshi said, it's important to both address the problem and build for the future at the same time.
-
RE: Duplicate content on ecommerce sites
I'm going to generally agree with (and thumb up) Mark, but a couple of additional comments:
(1) It really varies wildly. You can, with enough duplication, make your pages look thin enough to get filtered out. I don't think there's a fixed word-count or percentage, because it depends on the nature of the duplicate content, the non-duplicate content, the structure/code of the page, etc. Generally speaking, I would not add a long chunk of "Why Buy With Us" text - not only is it going to increase duplicate-content risks, but most people won't read it. Consider something short and punchy - maybe even an image or link that goes to a page with a full description. That way, most people will get the short message and people who are worried can get more details on a stand-alone page. You could even A/B test it - I suspect the long-form content may not be as powerful as you think.
(2) While duplicate content is not "penalized" in the traditional sense, the impact of it can approach penalty-like levels since the Panda updates.
(3) Definitely agreed with Mark that you have to watch both internal and external duplication. If you're a product reseller, for example, and you have a duplicate block in your own site AND you duplicate the manufacturer's product description, then you're at even more risk.
-
RE: When to NOT USE the disavow link tool
So, I absolutely agree with your first point, but have to disagree a bit with the second (and that one, sadly, isn't entirely clear, even talking to Google reps). Re: the first point, it is a terrible mistake to take a reactionary glance at your links and just start hacking at them and hoping for the best. That's a good way to cause more harm than good - you could remove links helping you and still have no impact on Penguin, adding insult to injury.
In terms of GWT notifications, though, the situation isn't at all clear. Penguin is algorithmic, and GWT notifications have traditionally been focused on manual penalties. Over time, Google has used them to signal other kinds of bad links, but we've definitely seen confirmed Penguin hits where the site owner never received a warning.
That does not mean that disavow is inappropriate. It appears disavow has two primary paths:
(1) If hit with an algorithmic link penalty, like Penguin, then disavow as needed and wait for recrawl, and, most likely, a Penguin data refresh.
(2) If hit with a manual link penalty, then disavow as needed and file a reconsideration request (disavow by itself won't help you, in most cases).
I've talked to a handful of people who have had direct contact with Google reps, and so far, that's about the best picture we can piece together. The answers have been inconsistent.
-
RE: Canonical links apparently not used by google
I wouldn't both canonical to the "View All" AND use rel=prev/next - that could be sending mixed signals to Google. I'd let one do its work, if possible. There's another issue, though - you're canonicalizing to:
http://www.soundcreation.ro/chitare-chitari-electroacustice-cid10-pageall/
...but the "View All" link goes to...
http://www.soundcreation.ro/chitare-chitari-electroacustice-cid10_-pageall/
...with an "_" (hard to see, since it's linked above). These are two different URLs and could be causing you some serious problems. You're basically sending 3 potentially conflicting signals to Google.
-
RE: Meta Description Length is Doubling (Like Twitter)
This started a couple of years ago, but it still only happens in isolated cases. See this post:
https://moz.com/blog/i-cant-drive-155-meta-descriptions-in-2015
We believe it's tied to Featured Snippets and Google parsing answers from sites (they share a core engine, even though you may see long snippets on SERPs with no Featured Snippets). In cases where the snippet is deemed highly relevant, Google may present more information. It's not an across-the-board length increase, though. Most snippets are still restricted to the traditional length limits.
-
RE: Is a "Critical Acclaim" considered duplicate content on an eCommerce site?
I think it's all a matter of degree, which is why these questions are tricky. Generally, I agree with @Crimson - it's like a testimonial. If you use them sparingly to supplement your own, unique content, they're fine. If you build a site out of a line of text and 20 "Acclaims" that are plastered across 500 other websites, then your site is going to look thin. It won't rank for much, and it could even be filtered out or penalized.
So, are they bad? Not necessarily - they can even be good. They should only be a piece of the puzzle, though. Any content re-use should be done sparingly, to enrich your site experience.
-
RE: Google also indexed trailing slash version - PLEASE HELP
Seems like you got the 301-redirect resolved below - if you've got that in place and fixed the canonical tag, it should be ok. It'll just take some time (usually longer than you'd like) for Google to clear out the pages, especially the deeper ones. If you see gradual de-indexation, though, you'll probably be fine.
-
RE: E Commerce product page canonical and indexing + URL parameters
If you're not experiencing any serious problems and just want to prevent future issues, I'd probably use rel=prev/next here:
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
It was designed specifically for paginated content and won't block your link-juice flow to deeper pages. Google has in the past said you can canonical to the "View All", but don't canonical back to page one. I've heard mixed results on the "View All" technique.
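As a quick illustration (generic URLs), page 2 of a paginated series would carry something like this in its <head> - page 1 gets only a "next" and the last page gets only a "prev":
<link rel="prev" href="http://www.example.com/products?page=1" />
<link rel="next" href="http://www.example.com/products?page=3" />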
One thing, though - you currently have all these pages NOINDEX,FOLLOW'ed, so it's kind of a moot point. What you could do is just lift the NOINDEX on page 1 of results and keep it for pages 2+. That may be your least risky move at this point.