Best posts made by Dr-Pete
-
RE: Tactics to Influence Keywords in Google's "Search Suggest" / Autocomplete in Instant?
Someone who will not be named (it rhymes with "Bomb Itch Slow") told me to Mechanical Turk the crap out of it.
-
Google's Mobile Update: What We Know So Far (Updated 3/25)
We're getting a lot of questions about the upcoming Google mobile algorithm update, and so I wanted to start a discussion that covers what we know at this point (or, at least, what we think we know). If you have information that contradicts this or expands on it, please feel free to share it in the comments. This is a developing situation.
1. What is the mobile update?
On February 26th, Google announced that they would start factoring in mobile-friendliness as a ranking signal. The official announcement is here. Of note, "This change will affect mobile searches in all languages worldwide and will have a significant impact in our search results."
2. When will the update happen?
In an unprecedented move, Google announced that the algorithm update will begin on April 21st. Keep in mind that the roll-out could take days or weeks.
3. Will this affect my desktop rankings?
As best we know - no. Mobile-friendliness will only impact mobile rankings. This is important, because it suggests that desktop and mobile rankings, which are currently similar, will diverge. In other words, even though desktop and mobile SERPs look very different, if a site is #1 on desktop, it's currently likely to be #1 on mobile. After April 21st, this may no longer be the case.
4. Is this a boost or a demotion?
This isn't clear, but practically it doesn't matter that much and the difference can be very difficult to measure. If everyone gets moved to the front of the line except you, you're still at the back of the line. Google has implied that this isn't a Capital-P Penalty in the sense we usually mean it. Most likely, the mobile update is coded as a ranking boost.
5. Is this a domain- or page-based update?
At SMX West, Google's Gary Illyes clarified that the update would operate on the page level. Any mobile-friendly page can benefit from the update, and an entire site won't be demoted simply because a few pages aren't mobile-friendly.
6. Is mobile-friendly on a scale or is it all-or-none?
For now, Google seems to be suggesting that a page is either mobile-friendly or not. Either you make the cut or you don't. Over time, this may evolve, but expect the April 21st launch to be all-or-none.
7. How can I tell if my site/page is mobile-friendly?
Google has provided a mobile-friendly testing tool, and pages that are mobile-friendly should currently show the "Mobile-friendly" label on mobile searches (this does not appear on desktop searches). Some SEOs are saying that different tools/tests are showing different results, and it appears that the mobile-friendly designation has a number of moving parts.
8. How often will mobile data refresh?
Gary also suggested (and my apologies for potentially confusing people on Twitter) that this data will be updated in real-time. Hopefully, that means we won't have to worry about Penguin-style updates that take months to happen. If a page or site becomes mobile-friendly, it should benefit fairly quickly.
We're actively working to re-engineer the MozCast Project for mobile rankings and have begun collecting data. We will publish that data as soon as possible after April 21st (assuming it's useful and that Google sticks to this date). We're also tracking the presence of the "Mobile-friendly" tag. Currently (as of 3/25), across 10,000 page-1 mobile results, about 63% of URLs are labeled as "Mobile-friendly". This is a surprisingly large number (to me, at least) - we'll see how it changes over time.
-
"Update" in Search Console is NOT an Algo Update
We've had a few questions about the line labeled "Update" in Google Search Console on the Search Analytics timeline graph (see attached image). Asking around the industry, there seems to be a fair amount of confusion about whether this indicates a Google algorithm update.
This is not an algorithm update - it indicates an internal update in how Google is measuring search traffic. Your numbers before and after the update may look different, but this is because Google has essentially changed how they calculate your search traffic for reporting purposes. Your actual ranking and traffic have not changed due to these updates.
The latest update happened on April 27th and is described by Google on this page:
Data anomalies in Search Console
Given the historical connotations of "update" in reference to Google search, this is a poor choice of words and I've contacted the Webmaster Team about it.
-
RE: Hoping someone could take some time to give me some feedback / advice. Thanks!
Thanks for sharing your story, Rick. My wife and I lost our first pregnancy due to Turner's Syndrome, so I'm painfully familiar with how random the genetic lottery can be. I'm happy to say we have a healthy, happy 17-month-old girl now. I'm glad to hear Noah is doing well, and I'm heartened to hear how proactive the doctors are being.
First off, I'd just like to say that you're doing a lot right. You have a well-designed site with great content, a good core structure, and many of the important features of a modern site/blog. The wide world of SEO can be overwhelming, but it's rare that you need to tackle it all at once.
I think it's great to be thinking proactively about categorizing your content, and it's ok to let that evolve organically as your needs become clear. Categorizing the videos certainly makes sense.
At this point, though, given that your basic structure is good and you've got a lot of content, the social and link-building aspects are probably equally or more important. You have one tremendous tool at your disposal - sincere passion that can connect you to an audience. Your own outreach efforts, interactions with other parents, discussion boards, communities, etc. will go a LONG way. As you build relationships, links will start building themselves.
One thing that wasn't clear to me until I fully read your post and dug into the site was that your wife is a pediatrician. The "Mom MD" just read like a cute category name to me (no offense intended - that was just my first impression). This fact, IMO, adds a lot of credibility to what you're doing, and makes this more than a personal blog. I'd make this clear, especially on the About page and at the top of the Mom MD section.
-
RE: Are press release sites useful?
I have some smaller clients who have had limited luck with it, but I think it's best to stick to one of the sites and do periodic releases (maybe every couple of months). If nothing else, it'll give you a sense of what's working, and you can take some of the popular releases and push them to a broader market.
What I wouldn't do is go after multiple low-value PR services and plaster the same releases everywhere you can. At best, it's diminishing returns - at worst, they'll be devalued. I'm with Peter G. - the best press release opportunities come through relationships with the media. Obviously, though, that takes a lot more time.
-
RE: Edu links service
Rand has a great post on link valuation:
http://www.seomoz.org/blog/10-illustrations-on-search-engines-valuation-of-links
There's no magic to .edu links, frankly - the data over the past couple of years doesn't really support that they're inherently better than .com's, etc. It's true that many .edu sites are high-authority sites, of course, but that's just a correlation (it's not that Google prefers .edu or .gov inherently).
Within any site, though, you have to look at the Page Authority, the number of links on that page, the placement of the links, the anchor text, the relevance (to some degree), and a lot of other factors. Let's take a non-edu example - DMOZ. People kill themselves for DMOZ links, but lately I'm seeing DMOZ listings where the entire page isn't indexed because it's so deep. No indexation means ZERO link juice. So, even though it's DMOZ, the link is worthless.
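A quick way to check the indexation of any listing page is the site: operator with the page's path (hypothetical path here - swap in the real one):
site:dmoz.org/Business/Some_Deep_Category/
If Google returns nothing, the page isn't indexed, and any link on it is passing nothing.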
-
RE: Are Meta-keywords coming back?
The short answer is: No. They're not coming back, in the sense that anything has changed or that they carry any more weight than they did last year. All signs point to their continued decline. Google has publicly stated that the tag carries no positive ranking value.
Technically, Alan is correct - evidence suggests that Yahoo/Bing used Meta keywords as a ranking signal more recently than Google. Most of that evidence is 2+ years old, though, and I've seen no compelling reason to think that it will tip the balance in any competitive situation on Bing. Even that 2009 article basically says: "Sure, use it, but don't expect much", IMO.
Here's the other problem - Meta keywords has been used as a negative ranking signal, and probably still is to some degree. In other words, you might not gain much or anything from using it, but if you spam it, you could get devalued. My gut feeling is that the negative signal is much, much stronger than the positive one, and even Google may still use it as a negative signal. I'm certain that Yahoo/Bing has used it as a negative signal (not sure if they still do).
I tend to agree that the competitive fears are overblown. Any decent site's keyword targets should be pretty clear - otherwise, it's not a very well SEO'd site.
Personally, if you want to use them, use them - but keep them short, sweet, and relevant. Once you do, get on with your life.
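For reference, the tag itself is a single line in the <head> - in keeping with the short-and-relevant advice, something like this (hypothetical values):
<meta name="keywords" content="blue widgets, widget repair, widget parts">
Anything much longer than that starts to look like the spammy usage that can hurt you.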
-
Crawl Test is now On-Demand Crawl!
If you've been with Moz a while, you may have used our old Crawl Test tool. A year ago we launched an all-new, campaign-based Site Crawl (with an entirely rebuilt crawl engine), but Crawl Test fell into disrepair, and we haven't had a solid tool for crawling non-campaign domains.
I'm happy to announce that we've just launched an all-new On-Demand Crawl, built on the new Site Crawl engine, with a UI that's focused on quick insights. Moz Pro Standard tier customers can run up to 5 crawls per month at 3,000 pages per crawl (crawls are saved for 90 days), with per-month limits increasing at higher levels.
Most On-Demand Crawls should run in a few minutes, making the tool perfect for quick insights ahead of sales meetings, vetting prospects, or analyzing competitors. We've written up a sample case study, or logged-in customers can go directly to On-Demand Crawl.
Try it out -- we'd love to hear your use cases (either here or in the blog post comments).
-
RE: How highly do you value a link from the BBB?
This gets into the realm of opinion pretty fast - it can be shockingly difficult to measure the value of one link. Here are a few of my opinions:
(1) One link is one link. It's rarely the magic pill people want it to be, even from a very authoritative site. I've seen people get a link like this and then sit on their hands waiting for a sudden change in rankings, and it almost never comes. If you're just starting out and you have little or no link profile, a strong link can kick-start you, but I wouldn't pay $750 just to get a link if your site is established (I'm not sure I'd pay it even if your site is new).
(2) DA and PA both matter, and how much each matters can really vary with the situation. Your profile on a deep page of BBB is not an authority=96 link. It will carry weight, but the weight of any given profile could vary a lot.
(3) BBB has gotten a bit more aggressive, IMO, and I suspect Google will devalue these links over time. People tell me that they haven't yet, in this case, but it is, in essence, a paid link. Any day, Google could say "These BBB links are counting too much" and just lower the volume. So, don't put all your eggs in one basket, no matter what you do.
Now, to be fair, your BBB listing does have other value, like using it as a trust signal. The business case for spending the money goes beyond SEO, and that's a decision you have to make for yourself. If 100% of your interest in the listing is for a followed link, though, I personally would spend the money elsewhere.
-
RE: Looking for services to publish articles or blog posts with everlasting links.
We don't generally condone paid links here on SEOmoz, because we feel the risks often outweigh the rewards. However, Eppie fairly notes that they do (too often) work. The problem is that, even putting ethics aside, most people just don't do it very well.
I actually think Shane makes a good point - some of these links aren't really "everlasting" in the full sense of the word. Article marketing and paid blog posts often get archived quickly, and while the links continue to exist, they get rapidly devalued simply by moving in the internal structure. These paid networks have to continue to sell new links, and selling new links often means archiving old links and diluting existing content. So, if you pay once, expect your link to be treated like a 2nd-class citizen down the road. That's just the nature of that business, IMO. With a monthly fee, they can at least afford to keep your link active.
There are "paid" options that Google tends to not view as critically, such as:
(1) Editorially-reviewed directories
(2) Sponsorships and membership organizations
(3) Paid press-release services (although not really "everlasting")
People tend to only think of the big ones for (1) and (2) and often overlook niche directories, smaller organizations, local organizations, etc. The nice thing about the smaller sites is that you may be one of a half-dozen paid listings/sponsors, as opposed to one of 10,000 articles in an article-marketing network.
I'll leave this open as a discussion in case others have constructive suggestions.
-
RE: Canonicalization - Some advice needed :)
Seems like Matt and Marcus have you on the right track. With a real-estate site, duplicates and near-duplicates are very common, since you're adding and removing properties all the time and there are many search options and categories. I do agree that search-friendly URLs, long-term, where each property has a fixed URL, are definitely the best bet. In the meantime, though, a solid canonical structure helps a lot.
Ease into it - don't go sitewide in one fell swoop without a plan, unless you're having clear ranking problems. Start with your biggest problem areas, monitor/measure, and work from there. You can always check for indexed duplicates by running a Google search like:
site:daft.ie intitle:"176 Rathgar Road"
In this case, I'm not seeing any index issues, although I think Matt's concerns are valid.
I'd also consider rel=prev/next for search results pages, as that can help focus Google, too. Again, take it one step at a time and start with the biggest problems. It'll mitigate your risk all around.
-
RE: Removing Content 301 vs 410 question
Let me jump in and clarify one small detail. If you delete a page, which would naturally result in a 404, but then 301-redirect that page/URL, there is no 404. I understand the confusion, but ultimately you can only have one HTTP status code. So, if the page properly 301s, it will never return a 404, even if it's technically deleted.
If the page 301s to a page that looks like a "not found" sort of page (content-wise), Google could consider that a "soft 404". Typically, though, once the 301 is in place, the 404 is moot.
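To make the one-status-code point concrete, here's roughly what a crawler sees once the 301 is in place (hypothetical URLs):
GET /old-page/ → HTTP/1.1 301 Moved Permanently, Location: /new-page/
GET /new-page/ → HTTP/1.1 200 OK
The 404 never enters the picture - the old URL answers with the 301 before any "not found" logic could fire.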
For any change in status, the removal of crawl paths could slow Google re-processing those pages. Even if you delete a page, Google has to re-crawl it to see the 404. Now, if it's a high-authority page or has inbound (external) links, it could get re-crawled even if you cut the internal links. If it's a deep, low-value page, though, it may take Google a long time to get back and see those new signals. So, sometimes we recommend keeping the paths open.
There are other ways to kick Google to re-crawl, such as having an XML sitemap open with those pages in them (but removing the internal links). These signals aren't as powerful, but they can help the process along.
As to your specific questions:
(1) It's very tricky, in practice, especially at large-scale. I think step 1 is to dig into your index/cache (slice and dice with the site: operator) and see if Google has removed these pages. There are cases where massive 301s, etc. can look fishy to Google, but usually, once a page is gone, it's gone. If Google has redirected/removed these pages, and you're still penalized, then you may be fixing the wrong problem or possibly haven't gone far enough.
(2) It really depends on the issue. If you cut too deep and somehow cut off crawl paths or stranded inbound links, then you may need to re-establish some links/pages. If you 301'ed a lot of low-value content (and possibly bad links), you may actually need to cut some of those 301s and let those pages die off. I agree with @mememax that sometimes a healthy combination of 301s/404s is a better bet - pages go away, and 404s are normal if there's really no good alternative to the page that's gone.
-
RE: Edu links service
I appreciate your transparency, but to me that looks like low-quality article spinning. It's ok to a point, and it may get you a short term boost, but those pages are going to be devalued over time. Plus, they have no other value (those links won't drive traffic).
As for the argument that Google can do whatever they want so that makes anything ok, I strongly disagree. There are link-building tactics that can create long-term problems. Should a client risk a full-on penalty for a low-quality link-building tactic that might get them a 5% boost for 3 months? For me to suggest that as an SEO would be grossly irresponsible. There are smart risks and there are bad risks.
-
RE: Do I need to add canonical link tags to pages that I promote & track w/ UTM tags?
I find Google is usually good about UTM parameters, but not always - for use in AdWords, they're almost never a problem, but when you use them for custom tracking, they can start to cause duplicates. Bing/Yahoo also don't handle them very well.
I'm not sure on the scope of your site/usage right now, so it's hard to give a definitive solution, but my gut reaction is that I would use canonical tags on the affected pages. If you want to double-check, you can test for the URLs in the Google index. Use something like:
site:example.com inurl:utm=
If they're not being indexed, you're probably ok, and can just keep an eye on it. If it's just a few landing pages, though (and not a massive, site-wide issue), I'd be proactive and put a canonical tag in place, if it were me.
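For reference, the canonical tag on the landing page is a single line in the <head>, pointing at the clean (parameter-free) URL - a sketch with a hypothetical URL:
<link rel="canonical" href="https://www.example.com/landing-page/">
Every UTM-tagged variant of that page would serve the same tag, so Google consolidates them all to the clean URL.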
-
RE: How far can I push rel=canonical?
I tend to agree - you always run the risk with cross-domain canonical that Google might not honor it, and then you've got a major duplicate content problem on your hands.
I think there's a simpler reason, in most cases, though. Three unique sites/brands take 3X (or more, in practice) the time and energy to promote, build links to, build social accounts for, etc. That split effort, especially on the SEO side, can far outweigh the brand benefits, unless you have solid resources to invest (read that "$$$").
To be fair, I don't know your strategy/niche, but I've just found that to be true 95% of the time in these cases. Most of the time, I think building sub-brands on sub-folders within the main site and only having one of each product page is a better bet. The other advantage is that users can see the larger brand (it lends credibility) and can move between brands if one isn't a good match.
The exception would be if there's some clear legal or competitive reason the brands can't be publicly associated. In most cases, though, that's going to come with a lot of headaches.
-
RE: Sudden disappearance from visibility on Google
Local is a tricky game, and on-page is mattering less and less over time, at least in the simple sense. In other words, it's not enough to just have "Santa Barbara" on your page - Google needs to see that you're a local business with reviews, citations, etc. They've definitely pushed harder in that direction this year.
You're hitting some of the on-page really hard, too - take your home-page title, for example. It's too long, you mention "Santa Barbara" twice, you have "Web Design" before and after it. Unfortunately, to a human, it just looks keyword-stuffed and borderline spammy.
At the same time, you have an internal link called "Santa Barbara Web Design and Development", which really looks over the top to an end-user of the site. It also means you've got two pages that are essentially in competition - the home-page and a deeper page targeted to Santa Barbara. The deep page is fine, but then ease off on the home-page. Honestly, ease off all around. You're pushing into dangerous territory where your on-page could have tipped from helpful to harmful.
I think you need to look at some of your local-specific factors. Even a few solid (legitimate) reviews could really tip the balance, and easing off the keyword stuffing could help get you out of any filters. I'm not seeing signs of a severe penalty, but I do think your home-page has been devalued or filtered for certain non-brand terms.
-
RE: How to prevent duplicate content at a calendar page
Sadly, the short answer is that you can't have it all. Either you index the separate calendar pages, get more pages/content out there and risk some "thinning" of your index, or you focus on one page, maximize the SEO value, but then lose the individual pages.
I would not 301 or 302 to the individual calendar URLs - that kind of daily URL shifting is going to look suspicious, Google will not re-cache consistently, and you're going to end up with a long-term mess, I strongly suspect.
I actually tend to agree with Muhammed and Paragon that a viable option would be to let the individual days have their own content, but then canonical to the main calendar page to focus the search results. That way, users can still cycle through each individual day, but Google will focus on the core content. In a way, that's how a blog home-page works - the content changes daily, but you're still keeping the bots focused on one URL.
Think of it in terms of usability, too. How valuable is old/outdated content to search users? They might find something relevant on an old page, but they still probably want to see the main calendar and view recent content.
Where are the links to the individual days, if "/calendar" always has today's content? I'm wondering if there's a hybrid approach, like letting the most recent 30 days all have their own URLs, but then redirecting or using rel-canonical to point to the main page after 30 days.
-
RE: Are pages with a canonical tag indexed?
I have to disagree on this one. If Google honors a canonical tag, the non-canonical page will generally disappear from the index, at least inasmuch as we can measure it (with "site:", getting it to rank, etc.). It's a strong signal in many cases.
This is part of the reason Google introduced rel=prev/next for paginated content. With canonical, pages in the series aren't usually able to rank. Rel=prev/next allows them to rank without clogging up the index (theoretically). For search pagination, it's generally a better solution.
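For illustration, page 2 of a series would carry something like this in the <head> (hypothetical URLs):
<link rel="prev" href="https://www.example.com/results?page=1">
<link rel="next" href="https://www.example.com/results?page=3">
The first page gets only a rel=next, and the last page gets only a rel=prev.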
If your paginated content is still showing in large quantities in the index, Google may not be honoring the canonical tag properly, and they could be causing duplicate content issues. It depends on the implementation, but they recommend these days that you don't canonical to the first page of search results. Google may choose to ignore the tag in some cases.
-
RE: Duplicate title-tags with pagination and canonical
Unfortunately, it can be really tough to tell if Google is honoring the rel=prev/next tags, but I've had gradually better luck with those tags this year. I honestly think the GWT issue is a mistake on Google's part, and probably isn't a big deal. They do technically index all of the pages in the series, but the rel=prev/next tags should mitigate any ranking issues that could occur from near-duplicate content. You could add the page # to the title, but I doubt it would have any noticeable impact (other than possibly killing the GWT warning).
I would not canonical to the top page - that's specifically not recommended by Google and has fallen in disfavor over the past couple of years. Technically, you can canonical to a "View All" page, but that has its own issues (practically speaking - such as speed and usability).
Do you have any search/sort filters that may be spinning out other copies, beyond just the paginated series? That could be clouding the issue, and these things do get complicated.
I've had luck in the past with using META NOINDEX, FOLLOW on pages 2+ of pagination, but I've gradually switched to rel=prev/next. Google seems to be getting pickier about NOINDEX, and doesn't always follow the cues consistently. Unfortunately, this is true for all of the cues/tags these days.
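For anyone unfamiliar with that tag, it's a single line in the <head> of pages 2+ (just a sketch - I'm not recommending it over rel=prev/next here):
<meta name="robots" content="noindex, follow">
That asks Google to keep the page out of the index while still following (and passing equity through) the links on it.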
Sorry, that's a very long way of saying that I suspect you're ok in this case, as long as the tags are properly implemented. You could tell GWT to ignore the page= parameter in parameter handling, but I'm honestly not sure what impact that has in conjunction with rel=prev/next. It might kill the warning, but the warning's just a warning.
-
RE: Canonical to the page itself?
I think it's good for some pages, especially the home-page, because you can naturally have so many variants ("www" vs. non-www, for example). It's a lot easier to pre-emptively canonicalize them than 301-redirect every URL variant that might pop up over time.
While Alan's concerns are technically correct, I've never seen evidence that either Google or Bing actually devalue a page for a self-referencing canonical. For Google, the risks of duplicates are much worse than the risk of an unnecessary canonical tag, IMO. For Bing, I don't really have good data either way. More and more people use canonical proactively, so I suspect Bing doesn't take action.
I don't generally use it site-wide, unless needed, but I almost always recommend a canonical on the home-page, at least for now. Technical SEO is always changing.
-
RE: Is SEO Certification.org Worth Having?
Unfortunately, I haven't heard of many certifications that the industry respects. I think the MarketMotive program is a good one, but that's more for the training than the piece of paper, IMO.
It really depends on your goal. If it's for clients or your employer, get the certifications they value. It's all about perception, in that case. If it's for selling your services directly, I'm not sure I'd bother. The training can be good, but you still have to pound the pavement.
I'm Google AdWords certified, for example. It's a decent program, and some of my existing clients like that I have it, but when I first got it, it did little or nothing to bring in new clients. The training program itself is good, but you can do that without ever paying them a dime or taking the test.
Is there a specific aspect of SEO you're trying to learn?
-
RE: Organic search traffic down 60% since 8/1/18. What now?
I've studied the August 1st update as much or more than most, and let me first be brutally honest and say that we're still at a pretty early stage of speculation. Until we have clear recovery cases, we're all just trying to tell a story from the anecdotal data. There are a lot of plausible theories, but we can't give you clear answers other than to confirm that the update was large and the health and fitness vertical definitely seemed to be disproportionately impacted.
The E-A-T theory is plausible, from what I've seen, but it leaves a lot left to be explained. We don't know what signals Google uses for Expertise/Authority/Trust -- are they on-page signals, link-based signals, citation-based signals... ? Probably a combination, and some of those are a lot easier to control than others.
Looking briefly at your site, here's what I might try if I were you. Again, this is educated guesswork at best:
(1) Your author name links go to pages that show all articles by that author. I know this is a common practice, but I would consider linking those to a full bio page for each author, with credentials. The on-page E-A-T signals probably aren't the whole picture Google uses, but they're the easiest to control. Currently, these pages also lead to more, similar pages (page 2, 3, etc.), which could look thin, but your rel=prev/next tags seem to be properly formatted. I'm more concerned that Google isn't seeing any information about the authors on those immediately linked author pages (other than a very short blurb).
(2) I know this can be a dicey issue online (especially if you're trying to protect anonymity to some degree), but I'd strongly recommend using your full name, properly capitalized. The use of just "sharon," "victor," etc. could make it harder for Google to connect you to third-party mentions and citations, unless those sites link directly to your bio pages. I strongly suspect Google is using some link/citation signals to establish E-A-T. It's a bit like having a consistent business name and address in local search -- you need to make it easy for Google to connect your name to mentions of you.
You do have some odd links to your site that almost look like they're tied to a link network, but I'm having trouble finding some of them manually (they're coming up in Link Explorer reports). If you've got any link profile problems, I'd consider cleaning those up ASAP. If Google is pushing hard and requiring more proof of expertise in your link profile, that's going to be a much harder battle. There's no easy fix for that, other than continuing to try to build credibility. Again, I think the full-name issue might help here, but it's only one small piece of the puzzle.
Sorry you've been affected by this -- we're seeing a lot of small health/fitness blogs getting hit, and many seem to be decent quality, independent blogs by well-meaning people.
-
RE: How do you feel when Moz marks one of your questions as "answered?"
I think some of this is a legacy from Private Q&A days, when we tried to make sure that every question had a resolution, as opposed to being an open discussion. Even now, though, it's partly a "housekeeping" issue - if we see a question that's 80-90% resolved and mark it "answered", we have a better sense of where to prioritize our efforts and serve the questions that have no answers or bad answers.
Unfortunately, it's also complicated by 30-day trials and people who ask opportunistically and then disappear. We've always struggled with how to filter that kind of question while serving our long-term members better.
Even when doing it, there are times when I know that the original author may not feel the situation is fully resolved. So, it's a balancing act of empathy toward the author vs. empathy toward the pool of all authors. It's an imperfect solution and I think it's definitely something we have to revisit from time to time. I would only argue that leaving everything "open" has down-sides as well (as we've seen in the past).
Edit: Reading this back, it feels a bit defensive, and I don't mean it that way. Sorting out how to balance this all has been an ongoing discussion for us for years, and I think it's absolutely valuable when people tell us what they're feeling about the process out loud. I'm just not sure there's a simple answer.
-
RE: New server update + wrong robots.txt = lost SERP rankings
I hate to say it, but @inhouseninja is right - there's not a lot you can do, and over-reacting could be very dangerous. In other words - don't make a ton of changes just to offset this - Google will re-index.
A few minor cues that are safe:
(1) Re-submit your XML sitemap (a minimal sketch is below)
(2) Build a few new links (authoritative ones, especially)
(3) Hit social media with your new URLs
All 3 are at least nudges to re-index. They aren't magic bullets, but you need to get Google's attention.
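For (1), even a bare-bones XML sitemap gives Google a fresh cue - a minimal sketch with hypothetical URLs:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/about/</loc></url>
</urlset>
Make sure it includes the URLs the bad robots.txt blocked, then re-submit it in Webmaster Tools.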
-
RE: Should I delete a page that gets search traffic, that I don't care about?
There's no limit on search impressions - there are limits on the number of pages Google may be willing to crawl, and more indexed pages can drag down your ranking power for other pages, but you can have all the Google traffic you want, theoretically.
The biggest issue is that you could be losing decent visitors, and you could lose the strength of inbound links if you cut this page off. If there's a more relevant page, then you could 301-redirect (as Ennovation suggested), but it depends on the situation.
Why don't you like the page? Is it irrelevant to your site, or just not a great page (and one you'd like to update)? Are these visitors worthless, in a broader, conversion sense, or are they just not being driven to action well? There's a lot you could do to channel these visitors better, depending on the situation, but cutting them off cold is just throwing away a potentially valuable resource, IMO.
-
RE: Are multiple domains for my website hurting my Google ranking?
FYI, it looks like you're 302-redirecting the "shop." sub-domain to a deeper page on the root domain. This can be the worst of both worlds, in some cases. I'm not clear on the reasoning, but if "shop." is the e-commerce/store sub-domain, I'd rather see you 301-redirect it to "/shop" or something like that.
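In other words, a request to the sub-domain would ideally answer like this (hypothetical URLs, assuming the move is permanent):
GET https://shop.example.com/ → HTTP/1.1 301 Moved Permanently, Location: https://www.example.com/shop/
A 302 in that same spot signals a temporary move, which is why it risks passing less value to the target.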
I realize you may have technical constraints that require the e-commerce site to be on a subdomain. There are ways around that, like a reverse proxy.
From a purely SEO standpoint, you can potentially rank 2 sub-domains (Google keeps changing their mind on that, it seems), but you're also potentially splitting the link-juice between those sub-domains. In most cases, I don't find the sub-domain approach to be a good long-term strategy. On the other hand, if you're ranking #2 and #3 and aren't seeing a downside, you've got to consider that data.
-
RE: What Questions Should I Be Asking?
Yeah, I actually think that cheap/paid advice can be just as dangerous and absolutely agree that the "who" can be more important than the "where". The other thing I'd add is that so much advice isn't good or bad so much as contextual. When someone asks how they should handle duplicate content or a major site architecture issue, it's tough to give a quick answer. Even when I can and that answer is right for them, that doesn't mean it's right for everybody. Eventually, you really have to understand some of the fundamental principles behind the answers people give.
Let's step back from SEO. Look at generic, internet health advice. Should you drink milk, for example? If you're malnourished, yes, absolutely. If you need more Vitamin D, sure. If you're lactose intolerant, probably not. If you're allergic, you could die. No matter how smart anyone is, there's no one-size-fits-all answer to that question, IMO. That's true for a lot of complex SEO issues.
-
RE: Did Google just give away how Penguin works?
I suspect that's just Penguin 1.1 - you could look to the previous month's highlights for Penguin 1.0 - two stand out, IMO:
Keyword stuffing classifier improvement. [project codename "Spam"] We have classifiers designed to detect when a website is keyword stuffing. This change made the keyword stuffing classifier better.
Improvements to how search terms are scored in ranking. [launch codename "Bi02sw41"] One of the most fundamental signals used in search is whether and how your search terms appear on the pages you’re searching. This change improves the way those terms are scored.
The first one is obviously related to Penguin; it's hard to say on the second. Codename "Spam" isn't exactly telling us much we don't already know.
-
RE: Competitor seo
I think the reason that it doesn't happen more often isn't so much that it can't work, but that:
(1) Doing it right takes a lot of time, money, and skill. If you don't want to leave a trail, it takes even more. Usually, the money is better spent elsewhere.
(2) It usually doesn't work. So, you're betting a lot on a small chance.
(3) Whether or not it works long-term, building links to your competitor will almost always give them a short-term ranking boost. So, it's not just time and money - it's likely to backfire.
Here's an analogy I just made up - let's say I don't like you, and I want to get you into trouble (for example - I don't actually dislike you). I craft a plan to stuff your pockets full of counterfeit bills at the airport. IF TSA checks your pockets and IF they notice the money is counterfeit (two big ifs), you could go to jail. In the 99%-likely case that they don't notice, though, I just gave my sworn enemy a few hundred bucks. That's basically (3).
-
RE: Can PDF be seen as duplicate content? If so, how to prevent it?
If they duplicate your main content, I think the header-level canonical may be a good way to go. For the syndication scenario, it's tough, because then you're knocking those PDFs out of the rankings, potentially, in favor of someone else's content.
Honestly, I've seen very few people deal with canonicalization for PDFs, and even those cases were small or obvious (like a page with the exact same content being outranked by the duplicate PDF). It's kind of uncharted territory.
-
RE: Should We Switch from Several Exact Match URLs to Subdomains Instead?
I have to disagree with Bryan, I'm afraid - I think you carry substantial risk here, and this is a tricky decision. While EMD influence is declining, it still can carry a lot of weight (and quite a bit more than sub-domain keywords). If most of your traffic is coming from those "head" terms, you may see a serious loss by moving from EMDs to sub-domains.
Sub-domains have other issues, too, like fragmentation. Since the verticals are very different, Google could treat each sub-domain more like a separate domain. Then, your link equity won't consolidate AND you'll lose the EMD advantage. So, there's actually a risk of a worst-of-both-worlds scenario.
Now, to be fair - consolidation can have benefits, like unifying your link profiles, simplifying your other marketing efforts (one site to promote on social media), etc. Also, since your niches are really just different marketing perspectives on the same product, it's possible that your current sites might look a little thin to Google. In that case, consolidation could help, but "consolidation" would mean thinning out the separate pages, not just moving to one domain with a bunch of sub-domains.
Whether it's better for users really depends on your customer base. Do they tend to look for chat products as a general product, and then decide how it fits their industry, or do they look for products targeted to their industry? If the latter, then the separate domains might actually be more user-friendly.
Sorry, I know this is clear as mud, but I just want you to be aware of the complexity and possible issues. I would not make this decision lightly. Please note, too, that I'm generally in favor of consolidation and am not a big fan of an EMD-based strategy. We have to be realistic about what works now, though, vs. what may work in a couple of years, and I'm just concerned about the short-term impact for you.
My gut reaction, long-term, is that you could build a more product-focused site that has solid landing pages for each vertical, and that each vertical may not need a sub-site. This could create a stronger single site over time. It really depends how much unique content you've got within each vertical, and how your visitors find you. Even if that's a good long-term strategy, it could still have short-term negative impact, so you have to be aware of that and able to weather it.
-
RE: Outsourcing Link Building
The reality is that they might very well be able to get you PR4 links, but as Gareth said - PR is one tiny piece of the puzzle (and can be completely unreliable). Rand has a good post on link valuation here, and how complex it is:
http://www.seomoz.org/blog/10-illustrations-on-search-engines-valuation-of-links
These services also often use their own link networks, which have been hit hard by Google the last couple of weeks - expect that assault to continue. They won't get everyone, and some do it better than others, but the people charging $5/link are a lot more likely to get hit.
Keep in mind, too, that there's a lot of bait-and-switch in that industry. Someone might get you a bunch of links, wait a month, then sell those same links to someone else (and cut your links). That's the problem with a lot of these one-time deals. They also pull tricks to temporarily inflate the PR of the pages the links are on (like 301-redirecting other domains). By the time you actually get the link, that PR4+ could be 3 months out of date (and may actually be a PR1).
I'm not saying this as a white-hat - some people buy links effectively, I'll admit. You do get what you pay for, though, and your risk isn't just wasting money - it's potential penalization and long-term problems. If you're going to buy links, make it part of a larger, more diverse strategy, and do it right.
-
RE: To "Rel canon" or not to "Rel canon" that is the question
I'm afraid there's no perfect solution. The canonical tag probably is the best bet here - the risk of letting thousands of near-duplicates into the index is much greater than the cost of not landing people on specific colors.
Keep in mind that, once Google removes the color variants, only the "master" product page will appear in search. So, users won't really come into the site with a color intent (except in their heads). Whether that's good or bad for usability isn't clear. On the one hand, it would be nice to rank for every color and have users with a color in mind land on that specific product. On the other hand, some users don't have a color in mind (they know what they like when they see it), and landing on the main product pages shows them all available options. It really depends on your customers, but there are pros and cons, in terms of usability and conversion.
There's no magic Option #3, though - I'm 99% confident saying that. The risks of indexing all color variants post-Panda are relatively high, and I think you'll gain more from consolidating than you'd lose by giving up the individual variant pages.
-
RE: Google Algorithm change this month - theories ?
Posted some preliminary findings on Google+:
https://plus.google.com/b/112544075040456048636/+SEOmoz/posts/Tj2H3QDwX56
TLDR - definitely seeing unusual activity, and it doesn't seem to be a glitch. The week before, we saw historic lows, so the contrast is very telling. It might be that Google kept changes light before rolling out something major.
I can confirm EGOL's observation that eBay took a big hit (also mentioned in the G+ post). This is unprecedented for a "Big 10" site, but it could well be temporary. An eBay-sized site moving in the SERPs that far could move the overall needle a bit, but I don't think it's the whole situation.
This feels more Penguin-like than Panda-like, but that's just my gut, and I think we have to be careful putting everything into those two buckets. With something in the ballpark of 600 updates/year, there's a lot more out there than Panda and Penguin. If it's Penguin, we may yet hear confirmation from Google. If it's Panda, we probably won't.
-
RE: Unrealistic White Hat philosophy
I won't argue with your general point - you're right that "great" content isn't enough by itself. If you build the most beautiful house that ever existed on an island no one visits, you'll never win any architectural awards.
That said, I don't think there are many verticals where paid links are essential. In fact, I think chasing your competitor's tactics is often a good way to shoot for 2nd. In many cases, people are ranking in spite of low-value tactics, and finding the tactics the competition isn't targeting can give you a lot more leverage.
It's absolutely true, though, that even content marketing has to be marketed. I think you have to look outside of SEO. When I had content marketing successes on new sites, it came from relationship building. I pounded the virtual pavement (whether it be blogs, forums, social, etc.) - NOT for links, but to build relationships. I brought the eyeballs in, and when that hit critical mass, the links started to come. Even better - the links KEPT coming with little or no effort. Some posts generate new links 2 years after I wrote them.
The worst part about low-value SEO tactics isn't the risk of a penalty - the worst part is that you have to keep doing it every day. You haven't built anything but what you scrounged for that day. Strong content marketing takes a lot more up-front - no question - but it lasts and it builds on itself.
-
RE: I'm pulling my hair out trying to figure out why google stopped crawling.. any help is appreciated
Unfortunately, it can be very difficult to separate a bad history from a penalty from a large-scale technical problem, especially on large sites. I've seen many people assume they got hit by Panda when it was really a link-based penalty, and vice-versa. The site's history makes this go from difficult to nearly impossible, at least without a very deep dive, but I'll see what I can see.
Alan's right on one thing - Google Webmaster Tools has huge gaps in what they warn you about, and it's typically only manual penalties. Many sites have massive problems that never trigger a warning from Google.
I notice that you're NOINDEX'ing even high-level pages (in the navigation), such as:
http://dajaz1.com/music/alternative/
That seems like a bad message to Google - if it's important enough to appear in navigation, it's important enough to index. That's a pretty extreme culling of pages.
The paginated content is a bit of a mess, such as:
In some cases, these don't even seem to return any results, so I'm not sure how they got crawled in the first place. The trick with META NOINDEX here is that, until Google re-crawls, they won't process the tag. This gets tricky, but I'd recommend a couple of possibilities:
(1) If the page returns no results, 301-redirect to the last page of search that has results.
(2) If none of these pages have search value, you could block "/page" as a folder in Google Webmaster Tools. This is a bit dangerous, so I'd want to make sure none of these pages had search value.
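If you went the robots.txt route for (2) instead, it's only a couple of lines (a sketch, assuming the pagination all lives under /page/ - and, again, only if you're sure none of these pages have search value):
User-agent: *
Disallow: /page/
Just remember that robots.txt stops future crawling but won't remove URLs already in the index - that's where the removal tools come in.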
Are you getting any page-load (speed) warnings? Your site's initial load is massive - about 4MB, by my count, with a ton of JavaScript, most of which just fuels the top, rotating images (which are loading very slowly on my machine). It seems like overkill, both from an SEO and usability standpoint, and is probably standing in the way of your recovery. I'd seriously consider stripping down the size of the code and pruning back some of the active elements for a while.
If you can re-open the important paths, get rid of the thin content (this is going to be complicated and probably involves multiple steps), and speed up the site, you'll know enough to see if this is a technical issue (such as Panda).
There is certainly weak ranking even on your indexed pages, which could indicate a penalty, but it's really tough to tell. Too much of your content is competitive or uses shared phrases or videos, so it's hard to see whether a search for:
"Dwayne Wade 70-Foot Buzzer Beater" ...has you in 6th place for competitive reasons or because your site has been devalued. I don't think it's a penalty, at least in this case. It's a YouTube video and there are other, similar videos for a fairly recent, competitive term, so this may be an accurate ranking (in Google's POV).
The history is a lot tougher, and Q&A just isn't adequate to comment on a situation that complicated, as there are not only SEO but legal ramifications. Honestly, I'd have to know a lot more details on that. If you suspect the history has hit you permanently, there may come a time when you have to completely re-brand and re-launch under a new domain.
I suspect, though, that cleaning up the crawl problems, removing the thinnest content, speeding up the site, and generally fixing some technical issues could help quite a bit. It's going to be a difficult process, though. The thing about changes like the Panda update is that it's not just one factor. I can't point to one thing and say "fix this" - you have to aggressively attack multiple factors, since Google is wrapping multiple signals into Panda and won't tell you which one is the problem.
I should say that I'm not saying this is Panda, but that it's a Panda-like situation - you've got a lot of crawl/index issues that are going to cause you problems. The question is whether those are compounded by your history (and, unfortunately, they probably are). The combination means that you have to be even more aggressive with the clean-up.
-
RE: Google's Mobile Update: What We Know So Far (Updated 3/25)
Unfortunately, Google tends not to communicate these things directly on social media (or, at least, not consistently) - and, when they do, it's usually Google+. As I have data, I'll share it personally on my account (@dr_pete) and/or the MozCast account (@mozcast).
Barry Schwartz (@rustybrick) is a good bet, too.
-
RE: Footer Links And Link Juice
Just to add to the consensus (although credit goes to multiple people on the thread) - PR-sculpting with nofollow on internal links no longer works, and it can be counter-productive. If these links are needed for users, don't worry about them, and don't disrupt PR flow through your site. Ultimately, you're only talking about a few pages, and @sprynewmedia is right - Google probably discounts footer links even internally (although we may have no good way to measure this).
Be careful with links like "register", though, because sometimes they spin off URL variations, and you don't want those all indexed. In that case, you'd probably want to NOINDEX the target page - it just doesn't have any search value. I'm not seeing that link in your footer, though, so I'm not clear on what it does. I see this a lot with "login" links.
-
RE: Block search engines from URLs created by internal search engine?
It can be a complicated question on a very large site, but in most cases I'd META NOINDEX those pages. Robots.txt isn't great at removing content that's already been indexed. Admittedly, NOINDEX will take a while to work (virtually any solution will), as Google probably doesn't crawl these pages very often.
Generally, though, the risk of having your index explode with custom search pages is too high for a site like yours (especially post-Panda). I do think blocking those pages somehow is a good bet.
The only exception I would add is if some of the more popular custom searches are getting traffic and/or links. I assume you have a solid internal link structure and other paths to these listings, but if it looks like a few searches (or a few dozen) have attracted traffic and back-links, you'll want to preserve those somehow.
-
RE: Where has Google found the £1.00 value for the penny black? Is it Google moving beyond the mark-ups too?
Yes, Google is going well beyond mark-up with the new v2.0 answer boxes. They're trying to extrapolate answers directly from indexed content. This is essential if they're going to expand the Knowledge Graph, but it's also an aggressive move, and they're not all that good at it yet. Unfortunately, there's no great way to control when/how/what they show.
So far, all of these answer boxes seem to come from page 1, so you have to have enough authority to rank on page 1. After that, though, it's a pretty crude matching process to on-page keywords. The matching is contextual (since the Hummingbird update powered more of that), but it's still pretty basic keyword/concept matching.
-
RE: DA vs PA when building links
EGOL is essentially correct - DA and PA are measures of ranking power (they both factor in multiple variables), but we don't currently model things like the likelihood of a link being spammy - although we're working on that. So, it is definitely possible for a site to have high authority in theory but be devalued by Google in practice.
It also depends on whether you mean "trust" in a broad sense or specifically something like TrustRank. Our MozTrust metric was intended to approximate TrustRank, which essentially measures how far a site is from a seed set of trusted sites. That's a way we believe Google has quantified "trust" in the past. I don't believe that PA/DA factor in MozTrust, but I'm not entirely sure on that one.
In terms of link value, DA and PA can both matter, and it depends a bit on the situation - even then, both metrics are only a small piece of the puzzle. If the numbers are similar and low (like 20/34 or 34/20), I wouldn't obsess about it. It's when they differ quite a bit that you might want to consider both. A weak page on a very strong domain or a very strong page on a weak domain both have potential value as link sources.
-
RE: Can SEO increase a page's Authority? Or can Authority only be earned via #RCS?
Tend to agree with EGOL, but I'll add some specific thoughts of my own:
(1) Try to come up with a tangible metric - like average ranking across a set of keywords, total organic search traffic, or total queries - something you can show the boss is more connected to actual traffic and sales. If you can get him to accept a metric that's more on your terms, it'll be better for both of you. I completely agree with EGOL that chasing PR or "authority" is a bit of an affectation.
(2) If he's stuck on something like authority, make sure to define it, and stack the deck. Our metrics - DA and PA - are tricky, and are based on machine learning. In reality, we're trying to predict how well you'll rank. So, we can't really say that authority comes from any one source. Traditionally, authority is things like high-trust, high-PR links, social mentions, and other "brand" signals, but that's very difficult to measure.
(3) I'd also say that whether on-page or links are more valuable really depends on your situation. I've seen a client's organic traffic triple in a few months because we fixed some on-page messes. Likewise, if you've got a Panda penalty due to thin content, then increasing your "authority" 50% could do nothing until you fix the on-page mess. If you've got a beautiful site, architecturally, but have no inbound links, building authority could be like magic. It's a mistake to focus on only one side of the equation.
I think (1) and (3) are intertwined. You really need to make the case: "Do you want a vanity metric, or do you want results?"
-
RE: EXPERT CHALLENGE: What link building strategies do YOU think will work after the latest 3/29/2012 Google algorithm change?
Are you talking about the recent crack-down on link networks? I'm a little confused, because you mention a 3/29 algorithm change and the question went up on 3/28 (at least here in the US).
I'm actually working on a post about (1), because I think it's almost completely unanswerable without specifics. I've seen people obsess over on-page or build links like crazy and let their on-page turn into a mess, and often those people would be well served to completely switch gears. Take a site that's an absolute mess on-page but has a solid link profile, and fixing on-page issues could work magic for them (for example).
Let's say we're talking about a brand new site, though. It still varies with the goals and budget, but I'd probably say:
- 40% Content
- 30% Link-building
- 20% On-page
- 10% Social
Without some base of solid content, you've got nothing to build links to or promote socially. I'm not saying content is magical - you have to pound the pavement and build those links - but you've got to at least have enough of a site that someone would want to link to it. So, in the beginning, content is still the mainstay. On-page has to start pretty strong - do your keyword research and build a decent, SEO-friendly structure, but then it can level off a little.
-
RE: Should I use rel=canonical on similar product pages.
So, here's the problem - if you follow the official uses of our options, then there is no answer. You can't have thin content or Google will slap you with Panda (or, at the very least, devalue your rankings), you can't use rel=canonical on pages that aren't 100% duplicates, and you're not supposed to (according to Google) just NOINDEX content. The official advice is: "Let us sort it out, but if we don't sort it out, we'll smack you down."
I don't mean that to be critical of your comment, but I'm very frustrated with the official party line from Google. Practically speaking, I've found index control to be extremely effective even before Panda, and critical for big sites post-Panda. Sometimes, that means embracing imperfect solutions. The right tool for any situation can be complex (and it may be a combination of tools), but rel=canonical is powerful and often effective, in my experience.
-
RE: Secretly back-linking from whitelabel product
I'm with Alan - in theory, the canonical would pass the link-juice to the version with the link, but you're not only misleading the client - you're one step away from cloaking the link. You could actually get your own clients penalized for this, and that seems very short-sighted.
Add the NOINDEX on top of this, and I'd be willing to bet that the value of these links would be very low. Even if the client approved followed white-label pages with footer links, for example, we're seeing those types of links get devalued - they're just too easy to get. Now, you add these links all at once, NOINDEX the page, and canonical to a weird variant, and you've painted a very suspicious picture for Google. It might work for a while, but you're taking a significant risk for potentially a very small gain.
-
RE: SEO Benefit to SSL Certificate
I'm afraid the short answer is that we don't really know. Google is pushing that direction, but sites that have taken the plunge aren't reporting much in the way of verifiable gains. As you said, there is a real risk, too, and it can complicate some things.
As Patrick said, there are many legitimate business cases for having SSL, and if you're dealing with commerce or private data, you really should look beyond the SEO aspects. If you're only doing this for SEO benefits, though, I think there are better places to spend your time and money. Google may turn up the volume on this factor over time, but the evidence for major benefits is still lacking.