Posts made by Dr-Pete
-
RE: Multiple Domains on 1 IP Address
I think that situation's a bit different - if you aren't interlinking and the sites are very different (your site vs. customer sites), there's no harm in shared hosting. If you share the IP and one site is hit with a severe penalty, there's a small chance of bleedover, but we don't even see that much these days. Now that we're running out of IPv4 addresses, shared IPs are a lot more common (by necessity).
-
RE: 90% of our sites that are designed are in wordpress and the report brings up "duplicate" content errors. I presume this is down to a canonical error?
Yeah, I'm not a WP expert at all, but we generally hear good things about Joost's plug-in (http://yoast.com/wordpress/seo/). All in One SEO Pack is the other popular one (http://wordpress.org/extend/plugins/all-in-one-seo-pack/). I think they both serve slightly different niches, and you do have to know how to use them. WordPress can be great, but the default installation can have some SEO issues.
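If you want a quick way to confirm the plug-in is actually emitting canonical tags (and that your duplicate URLs point back at the right version), a rough spot-check along these lines can help. This is just a sketch, assuming Python with the requests library installed - the URLs are hypothetical:

```python
# Rough spot-check for rel="canonical" tags - a sketch, not a full HTML parser.
# Assumes the "requests" library is installed; the URLs below are hypothetical.
import re
import requests

urls = [
    "http://www.example.com/",
    "http://www.example.com/sample-post/",
    "http://www.example.com/sample-post/?replytocom=5",  # a common WP duplicate URL
]

# Rough regex - assumes rel comes before href, which most plug-ins output.
canonical_re = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']', re.I)

for url in urls:
    html = requests.get(url, timeout=10).text
    match = canonical_re.search(html)
    print(url, "->", match.group(1) if match else "NO CANONICAL FOUND")
```

If a URL comes back with no canonical, or with a canonical pointing at itself when it shouldn't, that's usually where the "duplicate content" flags are coming from.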
-
RE: How will a sites ranking be affected??
I think Matt's absolutely right (although Alan's warnings are definitely worth noting). It really depends a lot on what "change" means. Even "just a template" could mean that your navigation options move, change, increase/decrease, and that can change your internal PR flow. IF the navigation is the same, and IF you aren't adding a lot of ad space, and IF your content, titles, URLs stay roughly the same, and IF you 301-redirect properly (or don't need to, because URLs aren't changing), then you may see very little long-term impact.
You may see a bounce/shuffle as Google re-evaluates the site - any change can trigger a short-term bounce. Other things to keep in mind:
(1) Make sure the new template doesn't radically alter load-times.
(2) If you're adding new content, that should be fine, but if you add a lot of content, you could dilute your index, create duplicates, etc. Plus, you'll be linking to that new content, which may draw internal PR from other pages. It's always a balancing act.
Not to make it sound grim. Plenty of people change their sites with no harm and even for the better (especially if it's better for users). Just go in with your eyes open and plan carefully.
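If it helps, here's a very rough way to compare server response times before you flip the switch. It's only a sketch, assuming Python with the requests library - the staging and live URLs below are hypothetical, and this measures server response time, not full page-render time:

```python
# Rough before/after load-time comparison for a template change - a sketch only.
# Assumes the "requests" library; the staging and live URLs are hypothetical.
import requests

pairs = [
    ("http://staging.example.com/", "http://www.example.com/"),
    ("http://staging.example.com/category/widgets/", "http://www.example.com/category/widgets/"),
]

for new_url, old_url in pairs:
    # resp.elapsed is the time from sending the request to receiving the response.
    old_time = requests.get(old_url, timeout=10).elapsed.total_seconds()
    new_time = requests.get(new_url, timeout=10).elapsed.total_seconds()
    print(f"{old_url}: {old_time:.2f}s (current) vs {new_time:.2f}s (new template)")
```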
-
RE: What do you think of Theme pyramids for SEO?
Yes, every site potentially has a logical hierarchy to it (more than one, in most cases) that could make sense for both visitors and SEO. It's really the basis of all information architecture, in a sense.
In SEO, we usually refer to a "flat" architecture as an ideal where the home-page would link to every page on the site and every page would only be one step away. Of course, in practice, this can lead to unusable sites and massive dilution of internal PR. It's great for a 10-page site, but not for a 10,000-page site.
-
RE: What do you think of Theme pyramids for SEO?
Absolutely (I actually thumbed up your comments). It's good to be aware of internal PR flow, and it IS important. It's just easy to go crazy.
-
RE: What do you think of Theme pyramids for SEO?
I think this is really just an extension of site/information architecture in general - to some degree, a logical structure is good for people and bots. I also think there's no "right" answer when it comes to this kind of structure vs. a "flat" architecture. As Alan said, a flat architecture isn't usually practical on big sites, but I think it goes deeper. A flat architecture implies that all the pages on your site have equal weight. That's rarely true. Driving internal link-juice to major categories and drilling down focuses the most weight on the top.
Now, you can overdo it. I think the article you cite goes a little too far these days, because if you apply that to any situation, you're going to end up with a ton of thin content. Post-Panda, creating 100s of deep pages just to target 3-4 word phrases could backfire. Eventually, you're going to run out of content for those pages. So, I wouldn't create a pyramid frame and then start looking for bricks. Start with your pile of bricks and see what kind of pyramid you can make out of it. Good information architecture starts with the information you have.
I also tend to lean toward hybrid approaches. For example, you can set up a pyramid but then also link to your Top 10 Products from your home-page. That flattens your architecture for those key products and sends link-juice deep into your structure. There are a lot of useful variations on that theme.
-
RE: Homepage canonicalized with trailing slash
Technically, the trailing slash version is the "correct" version. Almost all modern browsers automatically add it, so the practical implications are pretty small, but I think your consultant's essentially correct. As @walrus said, you're probably talking about something in the ballpark of 1%, so I wouldn't obsess about it.
-
RE: Secretly back-linking from whitelabel product
I'm with Alan - in theory, the canonical would pass the link-juice to the version with the link, but you're not only misleading the client - you're one step away from cloaking the link. You could actually get your own clients penalized for this, and that seems very short-sighted.
Add the NOINDEX on top of this, and I'd be willing to bet that the value of these links would be very low. Even with client-approved, followed white-label pages with footer links, for example, we're seeing those types of links get devalued - they're just too easy to get. Now, you add these links all at once, NOINDEX the page, and canonical to a weird variant, and you've painted a very suspicious picture for Google. It might work for a while, but you're taking a significant risk for potentially a very small gain.
-
RE: Is Google able to determine duplicate content every day/ month?
Sorting out Google's timelines is tricky these days, because they aren't the same for every process and every site. In the early days, the "Google dance" happened about once a month, and that was the whole mess (index, algo updates, etc.). Over time, index updates have gotten a lot faster, and ranking and indexation are more real-time (especially since the "Caffeine" update), but that varies wildly across sites and pages.
I think you also have to separate a couple of impacts of duplicate content. When it comes to filtering - Google excluding a piece of duplicate content from rankings (but not necessarily penalizing the site) - I don't see any evidence that this takes a couple of months. It can take Google days or weeks to re-cache any given page, and to detect a duplicate they would have to re-cache both copies, so that may realistically take a month in some cases. I strongly suspect, though, that the filter itself happens in real-time. There's no good way to store a filter for every scenario, and some filters are query-specific. Computationally, some filters almost have to happen on the fly.
On the other hand, you have updates like Panda, where duplicate content can cause something close to a penalty. Panda data was originally updated outside of the main algorithm, to the best of our knowledge, and probably about once/month. In the year-plus since Panda 1.0 rolled out, though, that timeline seems to have accelerated. I don't think it's real-time, but it may be closer to 2 weeks (that's speculation, I admit).
So, the short answer is "It's complicated." I don't have any evidence to suggest that filtering duplicates takes Google months (and, actually, have anecdotal evidence that it can happen much faster). It is possible that it could take weeks or months to see the impact of duplicates on some sites and in some situations, though.
-
RE: Blocking Pages Via Robots, Can Images On Those Pages Be Included In Image Search
Are you seeing the images getting indexed, though? Even if GWT recognizes the Robots.txt directives, blocking the pages may essentially keep the images from having any ranking value. Like Matt, I'm not sure this will work in practice.
Another option would be to create an alternate path to just the images, like an HTML sitemap with just links to those images and decent anchor text. The ranking power still wouldn't be great (you'd have a lot of links on this page, most likely), but it would at least kick the crawlers a bit.
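One quick way to sanity-check what your Robots.txt is actually blocking for different crawlers is Python's built-in robotparser. This is only a sketch - the domain, user-agents, and paths below are hypothetical:

```python
# Quick check of what Robots.txt actually allows - standard library only.
# The domain, user-agents, and paths here are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()

tests = [
    ("Googlebot",       "http://www.example.com/blocked-page/"),
    ("Googlebot-Image", "http://www.example.com/images/photo.jpg"),
]

for agent, url in tests:
    # can_fetch() returns True if the rules allow that user-agent to crawl the URL.
    print(f"{agent} may fetch {url}: {rp.can_fetch(agent, url)}")
```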
-
RE: Severe Health issue on my site through Webmaster tools
Thanks for letting us know, and glad you found a work-around. A 0-second META REFRESH sometimes acts like a 301 - it's not ideal, as you said, but it's something.
-
RE: Block search engines from URLs created by internal search engine?
That sounds perfect - if the user-generated URLs are getting enough traffic, make them permanent pages and 301-redirect or canonical. If not, weed them out of the index.
-
RE: Internal search : rel=canonical vs noindex vs robots.txt
Yeah, normally I'd say to NOINDEX those user-generated search URLs, but since they're collecting traffic, I'd have to side with Alan - a canonical may be your best bet here. Technically, they aren't "true" duplicates, but you don't want the 1K pages in the index, you don't want to lose the traffic (which NOINDEX would do), and you don't want to kill those pages for users (which a 301 would do).
Only thing I'd add is that, if some of these pages are generating most of the traffic (e.g. 10 pages = 90% of the traffic for these internal searches), you might want to make those permanent pages, like categories in your site architecture, and then 301 the custom URLs to those permanent pages.
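As a rough way to find that handful of pages, something like this works. It's only a sketch, assuming you've exported a hypothetical search_urls.csv from your analytics with url and visits columns:

```python
# Sketch: find the handful of internal-search URLs that drive most of the traffic.
# Assumes a hypothetical "search_urls.csv" export with "url" and "visits" columns.
import csv

with open("search_urls.csv", newline="") as f:
    rows = sorted(csv.DictReader(f), key=lambda r: int(r["visits"]), reverse=True)

total = sum(int(r["visits"]) for r in rows)
running = 0
for rank, row in enumerate(rows, start=1):
    running += int(row["visits"])
    print(f"{rank}. {row['url']} ({row['visits']} visits)")
    if running >= 0.9 * total:  # these top URLs cover ~90% of the traffic
        print(f"-- top {rank} URLs cover ~90% of traffic; consider making them permanent pages --")
        break
```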
-
RE: Is using a platform to automatically cross post on the social bookmarking websites good or bad for SEO?
I tend to agree with Valery - it depends a lot on the audience and your engagement. If you're active on a site and have a strong profile, then auto-posting your new content often makes sense. If you're just submitting content to 100s of social bookmarking sites, with no real profile and very low authority, you're not going to accomplish much. Google will likely just ignore it, for the most part.
-
RE: Internal search : rel=canonical vs noindex vs robots.txt
Alan's absolutely right about how canonical works, but I just want to clarify something - what about these pages is duplicated? In other words, are these regular searches (like product searches) with duplicate URLs, are these paginated searches (with page 2, 3, etc. that appear thin), or are these user-generated searches spinning out into new search pages (not exact duplicates but overlapping)? The solutions can vary a bit with the problem, and internal search is tricky.
-
RE: Do links in the nav bar help SEO?
There's nothing wrong with doing this, as long as the "title" attribute is accurate (DON'T spam it with non-relevant keywords), but I haven't seen compelling evidence that it acts as a ranking signal.
-
RE: Do links in the nav bar help SEO?
One thing I'd keep in mind is that a lot of your main nav pages aren't always great landing pages for search users. "About Us" is a decent landing page for finding out about your company (and that or the home-page should rank fine), but it and "Contact Us" aren't usually good bets for your non-brand keywords. It's often better to have a dedicated page targeting separate services.
I think it's fine to use keywords for the "Services" page, or you could split that page into specific services. Then, each service would have a keyword-targeted internal link and content. In that sense, think of your services like products - you branch from a main "store" page to categories to individual products. Done well, it serves both users and SEO.
-
RE: Multiple domains for one site / satellite domains
When you say "park on top of", what do you mean, exactly? Typically, the host would be redirecting somehow (or using a CNAME).
For the old, active domains, you should keep the 301-redirects in place for the foreseeable future (at least a year) - the benefits far outweigh the costs. After that, you can probably just keep the domains and redirect them from the registrar directly (in other words, you won't need them hosted and redirected page-by-page).
With the several other domains, do those have active sites with content or are they just registered domains? There's no harm in pointing those domains to your main site, but it doesn't really accomplish much if they were never set up. If they're active sites, it really does depend on the scope and focus. I wouldn't 301-redirect dozens of domains all at once, as that can look suspicious.
Personally, I'm not nearly as fond of microsites as I once was. The benefits are declining, and the costs are increasing. The biggest cost, practically, is just splitting your efforts. At one time, that basically just meant content. Now, it means splitting link profiles, social efforts, etc. It's rarely worth the time and money. There are exceptions, and well-targeted microsites can work. Creating them just to have a few long-tail domains, though, isn't usually worth the trouble, and can create duplication issues.
-
RE: Block search engines from URLs created by internal search engine?
It can be a complicated question on a very large site, but in most cases I'd META NOINDEX those pages. Robots.txt isn't great at removing content that's already been indexed. Admittedly, NOINDEX will take a while to work (virtually any solution will), as Google probably doesn't crawl these pages very often.
Generally, though, the risk of having your index explode with custom search pages is too high for a site like yours (especially post-Panda). I do think blocking those pages somehow is a good bet.
The only exception I would add is if some of the more popular custom searches are getting traffic and/or links. I assume you have a solid internal link structure and other paths to these listings, but if it looks like a few searches (or a few dozen) have attracted traffic and back-links, you'll want to preserve those somehow.
-
RE: On Page SEO Tool
A desktop crawler will do most of that, too - Screaming Frog is a great option, but it's a paid tool over 500 pages (I think). I wrote a post last year comparing it and Xenu, another crawler:
http://www.seomoz.org/blog/crawler-faceoff-xenu-vs-screaming-frog
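If you just want to see what a basic crawl collects (status codes, titles, link discovery) without buying anything, here's a bare-bones sketch. It's nowhere near what Screaming Frog or Xenu do, it ignores robots.txt, and it assumes Python with the requests library and a hypothetical start URL:

```python
# A very rough same-domain crawler - just a sketch of what a basic crawl collects.
# Assumes the "requests" library; the start URL is hypothetical. Ignores robots.txt.
import re
import requests
from urllib.parse import urljoin, urlparse

start = "http://www.example.com/"
domain = urlparse(start).netloc
to_visit, seen = [start], set()

while to_visit and len(seen) < 50:  # small cap for the sketch
    url = to_visit.pop(0)
    if url in seen:
        continue
    seen.add(url)
    resp = requests.get(url, timeout=10)
    title = re.search(r"<title[^>]*>(.*?)</title>", resp.text, re.I | re.S)
    print(resp.status_code, url, "|", title.group(1).strip() if title else "(no title)")
    # Queue up any same-domain links found on the page (rough href extraction).
    for href in re.findall(r'href=["\'](.*?)["\']', resp.text, re.I):
        link = urljoin(url, href)
        if urlparse(link).netloc == domain and link not in seen:
            to_visit.append(link)
```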
-
RE: Google Rankings Jumping Around
Hmmm... that is strange, especially if you haven't pushed any link-building to that section. Unfortunately, the duplicate content aspects are really tough to speak to without seeing the site/page. One easy spot-check I do is to take the page title of the section, let's say it's "Section Title" and search the index within your site, such as:
site:example.com intitle:"Section Title"
That will show you if copies are being indexed, and it's a really easy place to start.
-
RE: Can somebody explain Canonical tags and the technical elements of SEO?
It's a bit of a read, but I discuss a lot of on-page tags/tactics in this post, inspired by Panda:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
If you're launching a very large site (like an e-commerce site) with 1000s of products, then a deep knowledge of on-page SEO can be critical. For most sites that grow organically, though, you can learn as you go. As you start to track your own content and rankings, you'll begin to see what works and what doesn't.
Early on in a site's life, a lot of on-page really just comes down to solid keyword research, a sensible site architecture/structure (navigation and internal links), controlling duplicate URLs, and writing decent TITLE tags. That'll take you a long way in the beginning.
-
RE: Long meta description
Putting nothing actually isn't always bad these days. If the pages are clearly unique, Google can create a snippet with no trouble. In fact, they often do this anyway (regardless of your META description). Most people prefer some control over the snippet (you never have total control), but I've seen cases where leaving a META description off worked fine.
There really isn't much benefit to going beyond the length limit - it's not a ranking signal and Google will only display up to the limit. If you had a long META description, it's possible Google would display a middle section of it if that matched the query, but in most cases I wouldn't bother. You're just using up load-time for something very low value. Presumably, that text is also on the page somewhere.
All of this is to say that, while I'd lean toward the truncated version, I don't think it's cut-and-dry. I'd actually say the long version is my last pick in most cases. As @Boomajoom said, it could be a spam signal (although probably only if it's keyword-stuffed).
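For what it's worth, here's a rough way to spot-check description length across a few pages. It's a sketch only, assuming Python with the requests library; the URLs are hypothetical and the ~155-character cutoff is approximate (Google's display limit shifts over time):

```python
# Rough check of META description presence and length - a sketch only.
# Assumes the "requests" library; URLs and the ~155-character cutoff are approximate.
import re
import requests

# Rough regex - assumes name comes before content, which is the usual markup order.
desc_re = re.compile(
    r'<meta[^>]+name=["\']description["\'][^>]+content=["\']([^"\']*)["\']', re.I)

for url in ["http://www.example.com/", "http://www.example.com/some-page"]:
    html = requests.get(url, timeout=10).text
    match = desc_re.search(html)
    if not match:
        print(f"{url}: no META description (Google will build its own snippet)")
        continue
    desc = match.group(1)
    flag = "likely truncated in SERPs" if len(desc) > 155 else "within the usual limit"
    print(f"{url}: {len(desc)} chars - {flag}")
```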
-
RE: Why am I ranking for this
Did you 301-redirect or canonical from the old site that was optimized previously for this term? You could be seeing an indirect impact of back-links to that old site (which wouldn't show as links to the current site, depending on how you redirected them).
-
RE: How do you incorporate a Wordpress blog onto an ecommerce website?
It's probably true that the subdomain approach is easier, but I lean toward the subfolder these days - it's possible for subdomains to fragment in some cases and not pass all authority to the root domain. The subfolder can help preserve that inbound link value.
Ben and Andrea's comments about the difficulty of subfolders and potential risk of integrating WordPress onto your main servers are certainly valid and worth considering. I'm definitely not an expert on WP migration, and there's more than one way to achieve it. It's possible to actually keep the WP installation on a separate server and then make it act as if it "lives" under the "/blog" subfolder with a reverse proxy, but that's pretty complex:
http://www.apachetutor.org/admin/reverseproxies
No matter which route you go, keep in mind that you'd need to 301-redirect all of the old URLs to either the subdomain or subfolder version. Simply moving the WP installation won't migrate the inbound link-juice or traffic. Both visitors and spiders need to be redirected to the new URLs - that's absolutely critical.
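A quick way to verify those redirects after the move is something like this. It's just a sketch, assuming Python with the requests library - the old/new URL pairs are hypothetical:

```python
# Sketch: verify that the old blog URLs 301 to their new subfolder homes.
# Assumes the "requests" library; the old/new URL pairs are hypothetical.
import requests

redirects = {
    "http://blog.example.com/some-post/": "http://www.example.com/blog/some-post/",
    "http://blog.example.com/another-post/": "http://www.example.com/blog/another-post/",
}

for old_url, expected in redirects.items():
    # allow_redirects=False lets us inspect the first hop's status and Location header.
    resp = requests.get(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "(none)")
    status = "OK" if resp.status_code == 301 and location == expected else "CHECK THIS"
    print(f"{old_url}: {resp.status_code} -> {location} [{status}]")
```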
-
RE: Does Google pick up on words such as "in", "the", "and" etc?
There was a time when Google simply ignored so-called "stop words", for processing efficiency, and so the two queries in your example were essentially the same. It looks like that has changed over time, though. See this post from 2008 by Bill Slawski (an expert on Google patents and technology):
http://www.seobythesea.com/2008/01/new-google-approach-to-indexing-and-stopwords/
...and a quick experiment someone did in 2010 that seems to confirm that:
http://www.dougwilliams.com/blog/seo/stop-words-does-google-ignore-these-anymore.php
In my experience, it's a bit specific to the query and competition. In many cases, the addition or subtraction of a stop word may not make much of a difference, but in your case it probably does. If the term you want to target is "Holidays in Ireland" and the Top 10 for that term seems different from the shorter term, I'd say to use "in". I'm seeing some differences between those two sets of Top 10 results (not huge, but some).
-
RE: Canonical
Did you make a change? I'm seeing the canonical tag in the header and not in the content/body (as Boomajoom mentioned). In my experiments, Google won't honor a canonical tag in the body.
I do see that the tag is reversed a bit, with the "href" attribute first and "rel" second. Although Google will probably honor this, I think it might be confusing our system, which can be a bit more literal.
-
RE: Having trouble removing homepage from google
Unfortunately, GWT removal is one of the fastest ways I know of. If Google isn't honoring that, they're probably trying (in their minds) to protect you from de-indexing your home-page. I think the next most extreme step would be to rel-canonical your home-page to another page (possibly even another site) - that's going to be hard to undo later, though.
-
RE: Big Site Wide Link
The recent crackdown on link networks has been pretty harsh in some cases. Unfortunately, there's not a ton you can do about bad links like that, especially if the sites have been de-indexed. Cutting your links from pages that aren't indexed probably won't have much impact (and often isn't even feasible). In that case, you're going to just have to focus on positive link-building tactics for a while and hope to turn it around.
If you do suspect a link-based problem, then switching your paid links to nofollow might be a good bet. I would especially suggest this if you're going to file for reconsideration with Google (otherwise, they'll probably see those links and ignore the request). It's tough, though, since it's possible those links are also helping you right now. At the level of any one link, it's almost impossible to tell.
I think this recent interview with Jim Boykin has some good advice. He's definitely dabbled on the black-hat side, so I think it's an honest appraisal of the situation:
-
RE: Having trouble removing homepage from google
I'm not a legal expert, but are they still using the trademark on the home-page? If they're not using it at all, but are only ranking on it from past efforts, there's not much of a case. They can't control the algorithm - they can only remove references on the site. Of course, it depends on how pervasive those references are and how much you want to fight it.
I just worry that de-indexing the home-page is going to have much broader, long-term consequences.
If you want to make the party threatening action happy at any cost to your site, you could rel-canonical your home-page to their home-page. I wouldn't do it, but it's another possibility.
Ultimately, though, all these things take time to process. If you, in good faith, have removed all trademark references and have requested removal with Google, you can't control the rest and I doubt you're liable for it.
-
RE: Having trouble removing homepage from google
Typically, for a regular NOINDEX situation, you wouldn't want to use Robots.txt (you're right, it could prevent the page-level signal), but in this extreme of a situation and to really get the GWT signal to work, you may need the Robots.txt directive in place.
I'm confused, though - they want the home-page out of Google but NOT the rest of the site? By blocking the home-page, they'll kill the link-juice to other pages, and do massive SEO damage. I have to think there's another alternative.
For example, could they canonical the home-page to another, Google-safe page (let users see the original but send SERPs elsewhere)? Another alternative would be to move the current home-page content to a deeper page, 302-redirect to that page for visitors, and leave Google on the Google-safe home-page.
My gut reaction is that this sounds like a very dangerous maneuver, but it's really tough to say without understanding the logic behind it.
-
RE: Having trouble removing homepage from google
Just the home-page, or the entire site? GWT is usually the fastest and most reliable way. I'd block the entire site in Robots.txt, too - sometimes, Google wants to see that prior to a GWT removal (although, usually, NOINDEX is enough).
I assume they still want the page active for visitors (and just want it off of Google)?
-
RE: Splitting a Site into Two Sites for SEO Purposes
I don't think there's a "right" answer here, but my observation is that microsites aren't doing as well as they once did. It used to be that, just by having more sites, you did better. Now, as Google seems to be turning down the volume knob on exact-match domains, devaluing cross-linking, and getting harsher on duplicate and thin content, it's a lot harder to support separate sites. Factor in that you're splitting your links, social signals, and offline marketing/branding, and promoting two properties can really make you lose focus.
That's not to say it's all-or-none, though. Exact- and partial-match domains do still carry weight, and if the niche is unique and separable enough, it is possible to build a strong identity for it. I'd really look at the business side, though, for guidance. Is this a division of the business that really stands alone as a brand? If so, separation could provide broader benefit. If you're just separating for SEO, I'd generally side with keeping the unified site.
The issue with the redirects is that the weight of those pages only gets to exist in one place. So, if some of those pages have inbound links, a 301-redirect will kick start the new domain, but it will also take away from the authority of the old domain. In other words, you may not just lose the traffic itself - you may lose some of the main domain's ranking ability. That depends a LOT on the situation, though (it's hard to speak in generalities).
-
RE: Big Site Wide Link
Typically, "devalue" just means that the links don't count as much as they might under other conditions. Obviously 750K links from one site don't count nearly as much as 1 link from 750K different sites (by a huge amount), but that's just because site-wide links are relatively common and Google knows to weight them a bit differently. That shouldn't be confused with a penalty.
Agreed with Julie that, if this is one of the sponsor banners, it could be seen as a paid link. By itself, I don't think this poses a threat, but if you have a weak link profile otherwise or are getting a lot of similar sponsorships, you may want to nofollow some of these links down the road. If you're not seeing any danger signs, though, I suspect you're ok for now. There's nothing spammy about the site, and all of the sponsors seem relevant.
-
RE: Google Rankings Jumping Around
Are the other rankings for that keyword(s) bouncing as well, or is it just you? If everyone is bouncing, Google could be tweaking the algorithm or the keyword could be evaluated in different ways. For example, I'm seeing Google treat more keywords as having local implications (especially since "Venice"), but they seem to be fine-tuning that week to week.
If it's just you, it's probably one of a few things:
(1) You're experiencing a technical issue that's impacting the speed or availability of your site.
(2) You've got a very large index or some issues with duplicate content. Sometimes, I see a page get re-crawled (say, from an XML sitemap), rank well for a few days, and then drop as Google re-crawls more of the site.
(3) You're hovering on the border of a link-based penalty.
Fortunately, these are different enough that you can probably tell which one is most likely by digging into your own index and link profile, as well as Google Webmaster Tools.