Posts made by Dr-Pete
-
RE: Does City In Title Tag Inhibit Broader Reach?
I'd also say that you have to consider the SERP CTR and CRO aspects. By prominently displaying "Oklahoma City" in your title, you're probably scaring off customers outside of that general area, especially organic search visitors. So, the small boost you get for SEO may actually be hurting you in final sales.
-
RE: Does City In Title Tag Inhibit Broader Reach?
I'm afraid there's no one "right" answer, and people have covered the issues pretty well:
(1) Local SEO in 2012 goes far beyond just keyword targeting, and is incredibly complex. Your TITLE tags are probably only a small part of your ability to rank locally.
(2) Any keyword you target, generally speaking, means not targeting some other keyword. If you put your city in the TITLE, you do send a signal to Google. In addition, the tag gets longer, which impacts the other keywords. That said, though, it's only one small part of the puzzle.
For traditional, organic SEO, I don't think signaling a city in the keywords automatically penalizes you for areas outside of the city. For local SEO, though, it's a different matter. If you strongly establish yourself in one area, it does imply that you're less relevant for other areas. The trick is whether that's a 2% impact or a 20% impact. Honestly, I don't think any of us could tell you in the scope of a Q&A.
-
RE: Can Affiliate Links Harm Your Rank?
It depends on how the affiliate links are set up. Let's say your affiliates are using a tracking parameter (like "affiliate=") and you link them to a product page. You could end up with a bunch of indexed pages, such as:
www.example.com/product1.php?affiliate=1
www.example.com/product1.php?affiliate=2
...etc. Those would all be seen as duplicates. Again, I'm only speaking in generalities. I don't know how your links are currently set up.
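If you do see those parameter-based duplicates getting indexed, a canonical tag in the HEAD of the product page (pointing at the clean URL) is usually the cleanest fix - a rough sketch, using the example URLs above:

<link rel="canonical" href="http://www.example.com/product1.php" />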
-
RE: Rel="canonical" on home page?
There are mixed opinions on using it on every page, but I think it's very useful on the home-page, for exactly the reasons that @donford suggests. It's easy for the home-page to get a bunch of variants indexed, including tracking parameters.
Originally, Google said that canonical wasn't proactive, but they've eased up on that. Worst case, they may just ignore it, but the All-In-One SEO approach on a blog isn't a bad bet. It's just so easy for dynamic sites to spin off duplicate URLs that it's better to be proactive.
I've never seen a penalty or devaluation due to using canonical when it's not necessary. I think Bing implied that they may ignore it if they see it too often, but I've never even seen a concrete example of that happening. It's so commonplace now that you'd hear about it if sites were being penalized.
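For anyone implementing it by hand, the tag itself is just one line in the HEAD of the home-page - a sketch, assuming "www" is your preferred version:

<link rel="canonical" href="http://www.example.com/" />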
-
RE: Links to Product pages
As far as we know, Shane is right - multiple links to any Page B from Page A basically get ignored. Personally, I don't think that the link from the product # is very obvious to users, but it shouldn't harm SEO.
The only minor issue is that the second set of anchor text also probably gets ignored. So, Google sees this link as being on the product name (since that comes first). That should be fine, but it's just something to consider.
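To make that concrete, the situation is roughly this (hypothetical markup - I haven't seen your actual template):

<a href="/product-123">Blue Widget</a>
<a href="/product-123">#123</a>

Google will most likely count the anchor text of the first link ("Blue Widget") and ignore the second.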
-
RE: Pagination question
Rel=prev/next is still pretty new, so we don't have a lot of data, but it seems to work like a canonical tag. It should pass link-juice up the chain. That said, it's pretty rare for "page X" of search results (where X > 1) to have inbound links or much in the way of search value. I think cleaning up pagination can help a lot, if it's a big chunk of your search index.
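For reference, "page 2" of a paginated series would carry both tags in its HEAD - a sketch with placeholder URLs:

<link rel="prev" href="http://www.example.com/results?page=1" />
<link rel="next" href="http://www.example.com/results?page=3" />

The first page in the series only gets rel=next, and the last page only gets rel=prev.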
-
RE: Site drop in SERPS after domain down for three plus weeks
I've seen more of this post-Caffeine. Now that Google is crawling/indexing faster, site outages can do a lot more damage. Being down for 3 weeks is definitely bad, and it's very likely you'll lose rankings.
It's rare to see any kind of manual penalty, and you should recover - it usually just takes time. You've got to get Google back in action - XML sitemaps (if you don't have them), building up some new links, etc.
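If you don't have an XML sitemap yet, even a minimal one can help nudge Google to re-crawl - a bare-bones sketch (swap in your own URLs):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/products/</loc></url>
</urlset>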
This is a quick confirmation from 2006 - I'd say the problem is much worse now:
http://www.searchenginejournal.com/server-outages-lead-to-drop-in-search-engine-rankings/3946/
The bigger argument is simply that being down for 3 weeks potentially means a lot of lost sales. Not noticing you're down, and not knowing where you're hosted, is unacceptable for any serious online business, IMO. I'd be wary of any client who cares that little about their business, to be perfectly blunt.
-
RE: Can Affiliate Links Harm Your Rank?
It's less common than when you're using obvious paid links, but there is some amount of risk. It really depends on how strong and diverse your link profile is. If you're talking a relatively small percentage, it's probably ok. If the majority of your links are affiliate links, it could look shady to Google. It depends on the nature of the links, the industry, etc. too.
The other issue is that affiliate links can often create duplicate content issues, so it can make sense to consolidate them (usually, with canonical tags or 301-redirects). That's a separate issue, though.
You might find this post from Joost de Valk interesting:
http://yoast.com/affiliate-links-and-seo/
Edit: A couple more posts - it's a complex topic:
http://www.darrinward.com/blog/seo/google-penalty-nofollow-affiliate-links
http://www.wolf-howl.com/affiliate-marketing/how-to-mask-affiliate-links/
-
RE: I'm getting a Duplicate Content error in my Pro Dashboard for 2 versions of my Homepage. What is the best way to handle this issue?
The 301 and canonical can be used to solve similar issues, so it gets confusing. For home pages, I think the canonical is a good route, because it "sweeps" up other variants as well. For example, someone might hit your home-page without the "www" with an affiliate ID, etc. One canonical tag on the home-page prevents all of that.
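In other words, one tag on the home-page can consolidate variants like these (hypothetical URLs):

http://example.com/
http://www.example.com/index.php
http://www.example.com/?affiliate=7

<link rel="canonical" href="http://www.example.com/" />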
The "alerts" in our system can be a bit hyperactive. Usually the All-In-One canonicals are solid. We're probably just giving you a general warning, but it's tough to tell without a specific page.
-
RE: Should I delete a page that gets search traffic, that I don't care about?
Interesting - yeah, it's possible that if you're getting tons of visitors who immediately bounce, and that's a large chunk of your traffic, it could actually cause harm. Unfortunately, it's really tough to tell if a 301 or similar solution would be better than a 404 without understanding the specific situation.
I'm also not clear on how the ads factor in or on what attracted this traffic in the first place. If it really is bad traffic all around and the page has no inbound links, a 404 should be ok. I just hesitate to give definitive advice, because this stuff can get very situational.
-
RE: Help! Is rel canonical impacting me?
I'm not seeing any issues with your canonical tags - the implementation with the All-In-One SEO Pack seems ok. We may be spitting out a warning because your home-page is at the "/wp" folder level and not at the root. That's not always a great idea, but it shouldn't cause major problems here. Fixing it does require some technical expertise.
Like Alan, though, I'm getting some connection issues. I can get to the pages, but there are serious latency issues. This could definitely be causing you some ranking problems. That's not something our tools directly measure at this point.
I notice you're embedding a font that's over 400KB and have some very large images. I have a feeling you've also got some hosting issues, but the size of the template isn't helping, either.
-
RE: Cyrillic letter in URL - Encoding
If you're targeting Russian queries on Google.ru and your target audience is primarily entering queries with Cyrillic characters, then Cyrillic URLs should be ok. It used to be that non-Latin character support was poor, but I think that's changed a lot over the past couple of years.
Here's a relevant Google support thread where John Mu chimes in:
http://www.google.com.ag/support/forum/p/Webmasters/thread?tid=489ece0479e0d33d&hl=en
Technically, Google can crawl/index these pages. For example, the Russian version of Wikipedia seems to be using Cyrillic URLs:
http://ru.wikipedia.org/wiki/%D0%9A%D0%BE%D0%BC%D0%BF%D1%8C%D1%8E%D1%82%D0%B5%D1%80
(unfortunately, that URL does get broken when I cut/paste - it's the percent-encoded form of the Cyrillic word "Компьютер", i.e. "Computer")
The big question to me would be whether searchers are in the habit of using Latin characters in searches, and whether those searches draw more volume than Cyrillic. Unfortunately, we don't have any Russian speakers here on staff, so I can't comment on that one. I do speak a little Mandarin Chinese, and I've seen a mix in that market, too. Some URLs use simplified characters and some use Pinyin (the Romanized version). Technically, either should work, but there are still some legacy effects of the times when only Latin characters were supported.
-
RE: TLD Conflicts in WMT
Google can be very stubborn about wanting to see content at the root level. The other common problem, though, is that the crawler just may not see the other language variants. Are you auto-redirecting by IP (i.e. geo-location)? One major issue is that Google crawls from the US, so you'll need crawl paths to all of the variants.
The first thing to check is that Google is actually reading/honoring the 301. You'll need to see what the crawler is seeing.
Google has added some new tags for sites in multiple languages (especially if the same language is shared across regions/countries). I don't have a lot of good data on them, but Google reps are encouraging their use:
http://googlewebmastercentral.blogspot.com/2011/12/new-markup-for-multilingual-content.html
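The markup itself is just one LINK tag per language/region variant in the HEAD of each version - a sketch with made-up URLs (not from your site):

<link rel="alternate" hreflang="en" href="http://www.example.com/" />
<link rel="alternate" hreflang="de" href="http://www.example.com/de/" />
<link rel="alternate" hreflang="de-AT" href="http://www.example.com/at/" />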
You may find life is easier, long-term, if you put the dominant language at the root level, along with navigation options for other languages, and then use geo-location to send visitors to other pages. It also saves you a 301-hop for the primary market and will strengthen your SEO a bit for that language.
-
RE: Google Rel="Next" & Rel="Prev"
I wouldn't use rel=next/prev AND a canonical to "View All". The canonical will almost certainly overwhelm the weaker rel=next/prev, and you'll end up showing all search visitors the view-all page. There's not a lot of data on rel=next/prev yet, but my testing with clients is going well. If you aren't currently facing problems, I think I'd give it time to do its job.
Pagination is a tough problem, and Google is not consistent about their preferred methods (some of their reps have gotten in pretty heated arguments on opposite sides). It's more than a little annoying, given how common of a problem it is. The only down side to rel=next/prev is that it's new and Bing currently doesn't support it.
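Just to spell out the two options: the "View All" approach is a canonical on each paginated page pointing at the full list, e.g. (placeholder URL):

<link rel="canonical" href="http://www.example.com/products/view-all" />

...while rel=next/prev keeps the series indexed as a connected chain. Pick one signal - mixing them tends to let the canonical win.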
-
RE: Duplicate content via dynamic URLs where difference is only parameter order?
To be fair to Highland, I do think canonical is a good bet here, but I just have to comment that I don't think Google handles these kinds of URLs very well. They should, in theory, but in my experience they rarely do. The problem with order variants is that you can easily spin 100s or 1000s of them and create serious indexation and ranking problems.
For this particular example, the canonical tag is probably best, but there may be cases where certain parameters have no particular value (like a "sort by" parameter). Those are sometimes better off blocked.
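To illustrate the order-variant problem, these two URLs are the same page but can both get indexed (hypothetical parameters):

www.example.com/results?color=red&size=10
www.example.com/results?size=10&color=red

A canonical tag on both, pointing at one preferred version, collapses them:

<link rel="canonical" href="http://www.example.com/results?color=red&size=10" />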
I cover a bunch of examples in my mega-post on duplicate content:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
-
RE: Changing the URL structure will it help me or hurt me?
Some good comments here, and I'll have to come in somewhere in the middle. I think Vahe is right that there can be meaningful benefits, both for SEOs and visitors. It's also true, though, that a site-wide URL change can carry risks. Solid planning and well-implemented 301s can mitigate most of that risk, though.
If it were only to get keywords in the URL and the site is ranking well, I'd probably hesitate. Since these dynamic URLs are creating duplicates, though, I think it's a different situation. Those duplicates could create very real risk to your rankings. If the URL change can solve both problems, I'd be much more inclined to do it.
There are other ways to deal with the duplicates - the canonical tag is probably a good bet here (although I'm not sure how tough it is to implement in Joomla). Blocking duplicate-causing parameters in Google and Bing Webmaster Tools is another option. For example, you could block "Itemid" if it had no unique value (I'm not clear on that from the example).
-
RE: Fading Text Links Look Like Spammy Hidden Links to a g-bot?
I should add that there are sites that hide links without trouble, but it's usually in very standardized places (dropdown menus, for example).
The ticker probably is a bit safer. I wouldn't load it up with links, but at least the behavior is more like tabbed content - you're showing chunks of content at a time. I'm worried that the combo of hidden text and color change on the other approach is just going to set off alarms.
-
RE: Fading Text Links Look Like Spammy Hidden Links to a g-bot?
Using display:none on content can be a bit hard to predict. When it's something clear, like tabbed navigation, it's usually ok (and Google is getting a lot better about it). When it's something unusual, Google's guesses about what you're trying to do aren't always great. The actual color-change JS is probably fine, but this could look like hidden text, given the initial setting. For 4-5 links, it may not matter, but given that the benefits are dubious at best (I agree about it being pretty awful for usability), I wouldn't personally take the risk.
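For clarity, the risky pattern boils down to something like this (a simplified sketch, not your actual code):

<div style="display:none">
  <a href="/some-page">keyword-rich link</a>
</div>

...with JavaScript revealing the block later. Google primarily sees the initial hidden state, which is where the hidden-text alarm can go off.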
-
RE: Can you see the 'indexing rules' that are in place for your own site?
Unfortunately, that would be specific to your own platform and server-side code. When you look at the SEOmoz source code, you're either going to see a nofollow or you're not. The code that drives that is on our servers and is unique to our build (PHP/Cake, I think).
You'd have to dig into the source code generating the Robots.txt file. I don't think you can have a fully dynamic Robots.txt (it has to have a .txt extension), so there must be a piece of code that generates a new Robots.txt file, probably on a timer. It could be called something similar, like Robots.php, Robots.aspx, etc. Just a guess.
FYI, dynamic Robots.txt could be a little dicey - it might be better to do this with a META NOINDEX in the header of the user profile pages. That would also avoid the timer approach. The pages would dynamically NOINDEX themselves as they're created.
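The tag version is simple - each profile page would output this in its HEAD (a sketch; "follow" keeps link-juice flowing even though the page itself stays out of the index):

<meta name="robots" content="noindex, follow" />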
-
RE: To "Rel canon" or not to "Rel canon" that is the question
Unfortunately, there are still a lot of gaps in how Google handles even the typical e-commerce site. Even issues like search pagination are incredibly complicated on large sites, and Google's answers are inconsistent at best. The only thing I'd say for sure is that I no longer believe the "let us handle it" advice. I've seen it go wrong too many times. I've become a big believer in controlling your own indexation.