Best posts made by Dr-Pete
-
RE: Infinite Scrolling: how to index all pictures
That's a trick that used to occasionally work, but there's been no evidence for it in the past couple of years. Google has gotten pretty good at understanding how pages are rendered and is no longer completely dependent on source-code order. In some cases, they may even view it as manipulative.
-
RE: Ranking Internationally
I'm afraid there's no one "right" answer. Country-specific TLDs do have some extra power, but the problem is that then you're splitting everything - content, links, social mentions, etc. If you have the budget to really build up a site and market it in each country, then ccTLDs are great. If you aren't ready to go all-in, though, I'd probably recommend against it.
I generally would not rely on machine translations. They tend to be a poor user experience and Google can often spot them and may consider them to be thin content. A good translation done by a native speaker is perfectly fine. In this case, I'd also use the hreflang tags to let Google know that it's a language/region-specific piece of content:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077
Finally, in most cases I'd recommend sub-folders or sub-domains. Sub-domains may split authority and can act more like separate domains (but without the power of the ccTLD). There are standard practices for sub-folders, like "http://www.sepndbitcoins.com/au" and "/nz" - and this can help Google more easily understand the site structure. If the pages are all in English then I'd definitely recommend hreflang tags - they'll help Google sort out the region-specific content. You can target sub-folders in GWT. See this page for more information:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=182192#2
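To illustrate, here's a minimal sketch of those hreflang tags, generated for a sub-folder setup like the one above (the domain and region list are placeholders - adjust to your actual structure):

```php
<?php
// Hypothetical map of regions to sub-folder URLs.
$regions = array(
    'en-au'     => 'http://www.example.com/au/',
    'en-nz'     => 'http://www.example.com/nz/',
    'x-default' => 'http://www.example.com/',
);

// Every regional page should carry the full set of alternates, including itself.
foreach ($regions as $lang => $url) {
    echo '<link rel="alternate" hreflang="' . $lang . '" href="' . $url . '" />' . "\n";
}
?>
```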
-
RE: NEw domain extensions, are they worth it seo wise?
Unfortunately, no - still pretty limited on travel - if I take a trip, it's usually to the Moz office. Speaking at MozCon in July and then in the Czech Republic in November.
-
RE: I need to know more clearance on rel=canonical usage than 301 redirects ?
One thing that I almost always see overlooked in these discussions - 301s and canonicals have totally different impacts on the visitors to your site. A 301 will take the visitor to the new site, whereas a canonical won't. If you're really trying to phase out the old domain, canonicals could be self-defeating, because people won't know the site has moved and they'll still bookmark, tweet, link to, etc. the old URLs.
Keep in mind, too, that cross-domain canonicals are at Google's discretion. While they often work, and can pass PageRank, they're sometimes ignored. There are cases where canonicals may be safer, such as if you suspect the old domain carries a penalty. For a full site move, though, I'd almost always go with 301s.
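To make the mechanics concrete, here's a rough sketch of each option in PHP (the URLs are placeholders):

```php
<?php
// Option 1: a 301 - the visitor (and the search engine) actually moves.
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://www.new-domain.com/page/');
exit;

// Option 2: a cross-domain canonical - the visitor stays on the old URL;
// you're only hinting to Google about which version to index. This tag
// would go in the <head> of the old page INSTEAD of the redirect above:
// <link rel="canonical" href="http://www.new-domain.com/page/" />
?>
```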
-
RE: Infinite Scrolling: how to index all pictures
There should be no real difference, in terms of Google's infinite scroll solution. If you can chunk the content into pages with corresponding URLs, you can put any source code on those pages - text and/or images, along with corresponding alt text, etc. Once you've got one solution implemented, it should work for any kind of HTML. Not sure why images would be different in this case.
There are also ways to create photo galleries that can be crawled, mostly using AJAX. It's complex, but here's one example/discussion:
-
RE: Duplicate content by php id,page=... problem
You can use 301s or canonicals even if it's driven by one template. You'll have to set up the 301 rules based on the URLs themselves or create dynamic canonical tags in the code. If the CMS can drive multiple URLs, it can drive multiple canonicals.
If you can't sort that out in the code, you can't use NOINDEX either. You'd end up no-indexing every version.
Your other option may be to have Google ignore the ID= parameter in Google Webmaster Tools. Personally, I consider that the worst of the three options, but it is the easiest and it should help a bit.
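For what it's worth, the dynamic canonical approach can be just a few lines of PHP - a rough sketch, assuming URLs like product.php?id=123&page=2 where only "id" defines the real page (adjust the file and parameter names to your actual setup):

```php
<?php
// Sketch: build the canonical from the "id" parameter alone, dropping
// the page/sort/etc. parameters that create the duplicate URLs.
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

$canonical = 'http://www.example.com/product.php?id=' . $id;
echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '" />';
?>
```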
-
RE: Infinite Scrolling: how to index all pictures
Done correctly, assigning a URL to each virtual "page" allows Google to crawl the images. What Google is suggesting is that you then set up rel=prev/next between those pages. This tells them to treat all of the image URLs as a paginated series (like a multi-page article or search results).
My enterprise SEO friends have mixed feelings about rel=prev/next. The evidence of its effectiveness is limited, but what it's supposed to do is allow the individual pages (images, in this case) to rank while not looking like duplicate or near-duplicate content. The other option would be to rel=canonical these virtual pages, but then you'd essentially take the additional images out of ranking contention.
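For reference, the tags themselves are simple once each virtual page has its own URL - a minimal sketch, assuming URLs like /gallery?page=N (the hard part is the infinite-scroll plumbing, not this mark-up):

```php
<?php
// Sketch: output prev/next tags for one virtual "page" of the gallery.
$page     = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$lastPage = 10; // placeholder - derive this from your actual image count
$base     = 'http://www.example.com/gallery';

if ($page > 1) {
    echo '<link rel="prev" href="' . $base . '?page=' . ($page - 1) . '" />' . "\n";
}
if ($page < $lastPage) {
    echo '<link rel="next" href="' . $base . '?page=' . ($page + 1) . '" />' . "\n";
}
?>
```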
This infinite scroll + pagination approach is VERY technical and the implementation is well beyond Q&A's scope (it would take fairly in-depth knowledge of your site). Honestly, my gut reaction is that the time spent wouldn't be worth the gain. Most users won't know to scroll, and having 10-20 pictures vs. just a few may not add that much value. The SEO impact would be relatively small, I suspect. I think there may be easier solutions that would achieve 90% of your goals with a lot less complexity.
-
RE: Exact URL Match For Ranking
It's certainly true that EMDs can still have an impact (it's declining, but they still matter), but it's rare for a brand new domain that's redirected to rank well, because there's nothing to redirect. You can't redirect the name itself, just the strength of the link profile. I suspect they may be doing something a bit more elaborate behind the scenes. They could be redirecting older, more powerful sites, or they could have a link network set up, as Matthew said.
Long-term, though, it will eventually burn out. It's frustrating, because these tactics can work for a while, but Google is definitely taking a dimmer view of them over time, and it's a risky play.
-
RE: Infinite Scrolling: how to index all pictures
I don't think the risk of harm, done right, is high, but: (1) it's easy to do wrong, and (2) I suspect the benefits are small at best. I think your time/money is better spent elsewhere.
-
RE: Incorrect rel canonical , impacts ?
Yeah, I'm unclear as well - could you provide a sample URL, even if it's not the real URL (just something similar)?
If the canonical tag is appearing on both the original and duplicate and points to the original, that's fine. Google will essentially just ignore it on the original. If the original points to the duplicate, though, or they both point to each other, etc., that could be very dangerous.
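To put that in concrete terms, with placeholder URLs:

```php
<?php
// The safe pattern: both versions point at the original.
$original = 'http://www.example.com/original/';

// On the original page (self-referencing - Google just ignores it there):
echo '<link rel="canonical" href="' . $original . '" />';

// On the duplicate page, the tag is identical - it also points at $original.
// The dangerous patterns are the original pointing at the duplicate, or the
// two pages pointing at each other.
?>
```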
-
RE: Infinite Scrolling: how to index all pictures
That depends on a lot of factors. Consolidating those to one page has advantages, SEO-wise, but you're losing the benefits of the photo page. I lean toward consolidation, but it really depends on how the pages are structured in the navigation, what sort of content and meta-data they have, etc. I'm not clear on what's left on Page A currently, but the biggest issue is probably dilution from the extra pages. Since there are "guide" pages, though, I'm not sure how they fit your site architecture. To remove 200 of them, you may need to also rethink your internal link structure.
-
RE: 301 issue in IE9
Haven't heard of that with IE9, but from an SEO standpoint, 302s everywhere is much more risky than a few 301s mis-firing as 404s. I get why they're concerned, but this is the wrong solution. Is there a way to set up the redirects within the page headers and only return 302s for IE9, for the short term? That's not ideal, but it's at least a stop-gap solution. I'm sincerely afraid their short-term "fix" could cause you long-term problems.
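If they want to try that stop-gap, here's a rough sketch - user-agent sniffing is ugly, and the "MSIE 9.0" check is an assumption you'd want to verify against your actual IE9 traffic:

```php
<?php
// Stop-gap sketch: serve a 302 only to IE9 and a proper 301 to everyone else.
$ua   = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$code = (strpos($ua, 'MSIE 9.0') !== false) ? 302 : 301;

header('Location: http://www.example.com/new-page/', true, $code);
exit;
?>
```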
-
RE: Infinite Scrolling: how to index all pictures
Yeah, I don't think the picture- and video-heavy pages are going to rank all that well by themselves. It's just a question of whether those additional pages are diluting your MLS listing pages (by using similar regional keywords, etc.).
At the scale of a large site, it's hard to tell without understanding the data, including where your traffic is coming from. If it's producing value (traffic, links, etc.), great. If not, then you may want to revisit whether those pages are worth having and/or can be combined somehow. I don't think "combined" means everything on both pages gets put onto one mega-page - you could pick and choose at that point.
-
RE: WordPress - How to stop both http:// and https:// pages being indexed?
Just one adjustment to this (I think David's right that the canonical tag can be a good solution): although Google can index https: fine, the issue is whether you're creating duplicates. If you have duplicates, then it's possible that the https: version could be the one you want as canonical. In this case, it doesn't sound like it, but I just wanted to point that out.
Of course, long-term, you should sort out why these are being created. A desktop crawler like Xenu or Screaming Frog may be the best bet, but I'd hit the WordPress forums, too. Odds are it's a common issue. Typically, it happens when some deeper page (like a shopping cart) on a site is secure, and then the links are all relative ("/about.php", for example). Then, those links get crawled as both secure and non-secure.
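In generic PHP terms - this is a sketch of the logic, not WordPress-specific code, and it assumes the https: versions really are unwanted duplicates - the redirect fallback looks something like this:

```php
<?php
// Sketch: 301 any https: request back to the http: version of the same URL.
$isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';

if ($isHttps) {
    header('Location: http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
?>
```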
Unfortunately, I'm not a WordPress expert, so I can only speak in generalities.
-
RE: Previously owned domain & canonical
I'm confused about one thing - how did you request URL removal in Google Webmaster Tools if you no longer own the domain?
It looks like Google is caching some pages of your Southernafricatravel.com domain and showing them as appearing on Preferredsafaris.com. When I dig into the cache, it's actually showing your main/active site. This could be because of an old canonical relationship between them (whether actual canonical tags, 301-redirects, or something similar), or it could be because they contained duplicate content and Google chose to view them as canonical. Sometimes, that happens even if you don't specify it.
I don't love that the new owner has put up spam irrelevant to the domain, and it's possible that could bite you somehow, but I suspect it's unlikely. Once Google sees that these old pages don't resolve, I think you'll see them gradually disappear. There is no duplicate content at this point.
-
RE: Canonical Tag on Blog - Roger says it's incorrect?
In some cases, we return a warning if the canonical doesn't match the display URL. I realize this can be confusing, because often canonicals don't match the page, by necessity. It's essentially just a heads up, in that case, to make sure no one does anything dangerous. There are two canonical messages, though - one is an error or warning, and one is just a notice. I'm not sure which one you're seeing.
As Sean said, though, I'm not seeing any obvious issues with the canonical tag on your blog. This may just be a hyperactive warning on our part.
-
RE: Negative SEO penalty, new domain?
It's tough to speak in generalities, but in almost all of the cases where I suspect negative SEO was in play, there were inherent weaknesses or problems in the link profile to begin with. If you add those problems to a domain with a questionable history, your risk is going to be fairly high. If you were a new site in a completely different industry with no history (or a good history), then the history of that domain might not matter. In your case, though, I'm hearing some alarm bells.
Also keep in mind that unless you're going to start over cold-turkey, and not 301-redirect any of the old site, you'll carry any link-related problems with you. So, re-launching on a new domain is definitely a big decision and will probably take months of work to rebuild momentum. Granted, waiting for the next Penguin refresh could take months, too, so I understand your dilemma.
If you're going to take this step, though, I'd put the time and money into a domain with a clean history. You can't afford to do this twice.
-
RE: Do internal links from non-indexed pages matter?
I assume these are pretty deep in the site structure, so I don't think those "links" being reported are very powerful or important. Some people claim that, since PageRank is recursive, you don't want to cut off paths, but when the paths are deep I've rarely seen any evidence to support this. A big, bloated index full of thin content, especially content available on other sites, is a much bigger danger.
I would not recommend using both a NOINDEX and a rel=canonical on these pages. It's a mixed signal, and that can cause Google to ignore one or both signals (and at their choosing, not yours). I think NOINDEX is fine here. I've built structures like this for things like event websites (where we index the main event but NOINDEX all of the cities/dates, because they change so often) and have never seen any major issues. Actually, in one notable case, even before Panda came along, the site's rankings improved measurably.
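For reference, the tag I'd use on those deep pages is the NOINDEX/FOLLOW combination - a quick sketch, where the flag is just a placeholder for however your templates identify these pages:

```php
<?php
// Sketch: NOINDEX the deep city/date pages, but let Google follow their links.
$isDeepEventPage = true; // placeholder - derive from your template/routing logic

if ($isDeepEventPage) {
    echo '<meta name="robots" content="noindex, follow">';
}
?>
```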
-
RE: When to use canonical urls
Thumbing up both answers - I think they've got you covered. This is definitely a situation where you should try to sort out why the deeper page is ranking. It could be a positive that you should try to encourage (disrupting that could harm your ranking, ultimately) or it could signal something about your home-page that needs work.
Rand had a good post a while back on the subject:
http://www.seomoz.org/blog/wrong-page-ranking-in-the-results-6-common-causes-5-solutions
-
RE: What are the pros & cons of recycling an old domain name?
Yeah, I'm somewhere in the middle on this one - as Richard said, an off-topic domain with low authority isn't going to buy you much. If you want the domain for the name or something, great - but don't expect much SEO benefit. Google has gotten pretty savvy about ignoring this stuff, as buying and redirecting domains has been heavily abused. I doubt you'd be at much risk here, but you'd probably see very little benefit.
-
RE: What if I point my canonicals to a URL version that is not used in internal links
I think Andy's absolutely right - I've seen too many situations where mixed signals caused crawl/index and even ranking problems. Ultimately, the canonical URL should be canonical in practice and used consistently. Otherwise the canonical tag is just a band-aid.
The other problem is that you naturally end up attracting links to your non-canonical URLs, because those are what people can see. Long-term, that compounds the situation.
Now, is it catastrophic? Unfortunately, that's really tough to say. I've seen situations where Google honored the canonical tag even without internal links and the site was ok. I just think it's a significant, unnecessary risk. Unfortunately, like Andy, I don't know of any clear documentation on the subject.
-
RE: Google Search Listing With Feedback Link
Could you provide a sample query? A number of features have "Feedback" links now, including Knowledge Graph features (as Umar said).
-
RE: Impact of May 2015 quality update and July Panda update on specialty brands or niche retail
Unfortunately, this is an incredibly complex situation (in many cases) with no easy answer. Unlike a penalty or typical Panda update, this sounds more like a signal change favoring one type of site over another (one set of signals over another). I'm not going to say "big brands", because that carries a lot of assumptions and baggage, but there are certainly signals that tend to be correlated with more powerful brands.
If Google really just decided to change their preference, there's not a lot to be done. You may have done nothing wrong, per se, and it's hard to fix something that isn't broken. In that case, you've got a few options, SEO-wise:
(1) Hunt for greener pastures. You may have to find new, long-tail keywords where the bigger brands aren't playing. This is a big project beyond the scope of Q&A, but there are cases where you do need to go after new targets.
(2) Re-evaluate your keywords based on impact/traffic/conversion instead of ranking. It's possible, in some cases, that big brands could dominate the Top 5, but that, for some reason, you're still getting decent CTR on certain keywords. Do that analysis before you give up on these keywords.
(3) Hang in there. Sorry, it sounds like lame advice, but these kinds of updates often go back and forth, and you could see Google tweaking the mix over the next few months. In other words, whatever tactical shifts you make, don't completely cut off the pages/tactics that were ranking before (just in case).
All of that said, it's often the case that the situation is a bit grayer, and Google has made this shift because of quality issues it saw across a large number of sites. It's hard to speak in generalities, but Panda updates have gradually been harder on certain types of pages, like product categories, because these are often fairly thin (search results, etc.). If all of the smaller players took a similar approach, it's possible you all got devalued at once, and there may be a way to fix that.
Unfortunately, that kind of fix is really hard to advise on without at least some sense of the keywords/pages in question. I guess my main point is that it's easy to say "Google gave big brands all the rankings!" and see red, which can make you miss the few things you might have power to change.
-
RE: I was wondering, do you know when you see updated results for a sporting event in the google search.
Do you mean rich snippets for events? Check out this resource from Google:
https://developers.google.com/structured-data/rich-snippets/events?hl=en
Note: the mark-up alone doesn't guarantee you'll get those results. It depends on authority, relevance, and other factors. The mark-up is important, though.
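If it helps, here's a rough sketch of what that event mark-up can look like as JSON-LD (all of the values are placeholders - see Google's documentation above for the full property list):

```php
<?php
// Sketch: schema.org Event mark-up, output as JSON-LD.
$event = array(
    '@context'  => 'http://schema.org',
    '@type'     => 'SportsEvent',
    'name'      => 'Example FC vs. Sample United',
    'startDate' => '2015-09-01T19:30',
    'location'  => array(
        '@type'   => 'Place',
        'name'    => 'Example Stadium',
        'address' => '123 Placeholder Ave, Springfield',
    ),
);

echo '<script type="application/ld+json">' . json_encode($event) . '</script>';
?>
```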
If you mean something else, let me know - just describe to me where it lives on the SERP and what it looks like.
-
RE: I was wondering, do you know when you see updated results for a sporting event in the google search.
Ah, got it. Google calls those "Live Answers", I think. Short answer is: no, it's not structured data. Sports, weather, stock prices, and other highly specialized data comes from private partnerships with Google, generally, and each one is unique (and not particularly publicly discussed).
Unfortunately, I don't even know what team at Google handles that or who you might talk to about a partnership. They aren't very transparent about it. In some cases, they list sources, but not for sports (not sure why).
-
RE: Should I use rel=canonical on similar product pages.
To clarify, that's the official stance - rel=canonical should only be used on true duplicates (basically, URL variants of the same page). In practice, rel=canonical works perfectly well on near-duplicates, and sometimes even on wildly different pages, but the more different you get, the more caution you should exercise. If the pages are wildly different, it's likely there are more appropriate solutions.
-
RE: Should I use rel=canonical on similar product pages.
I haven't heard any SEO recommendations or benefits regarding rel="contents". Rel=prev/next has mixed results, but I'd generally only use it for its specific use case of paginated content.
I guess you could treat V2 as "pages" within V1. If you did that, what you'd need to do is treat the main page as a "View All" page and link to it from each author page. I'm not sure if that's the best approach, but it's more or less Google-approved.
If the site has decent authority and we're only talking 100s of pages, I might let them all live in the index and see what happens. Let Google sort it out, and then decide if you're ok with the outcome. If the site is low authority and/or we're talking 1000s of pages, I might be more cautious.
It's hard to speak in generalities - it depends a lot on the quality of the site and nature of the pages, including how much that content is available/duplicated across the web. One problem here is that author pages with lists of books probably exist on many sites, so you have to differentiate yourself.
-
RE: Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
GWT numbers sometimes ignore parameter handling, oddly, and can be hard to read. I'm only seeing about 40 indexed pages with "ref" in the URL, which hardly seems disastrous.
One note - once the pages get indexed, for whatever reason, de-indexing can take weeks, even if you do everything correctly. Don't change tactics every couple of days, or you're only going to make this worse, long-term. I think canonicals are fine for this, and they should be effective. It just may take Google some time to re-crawl and dislodge the pages.
You actually may want to create an XML sitemap (for Google only) that just contains the "ref=" pages Google has indexed. This can nudge them to re-crawl and honor the canonical. Otherwise, the pages could sit there forever. You could also 301-redirect - it would be perfectly valid in this case, since those URLs have no value to visitors. I wouldn't worry about the Bing sitemaps - just don't include the "ref=" URLs in the Bing maps, and you'll be fine.
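If you go the sitemap route, it can be a quick one-off script - a sketch, where the URL list is a placeholder (pull the real ones from a site: search):

```php
<?php
// Sketch: a one-off XML sitemap listing only the indexed "ref=" URLs,
// just to nudge Google to re-crawl them and see the canonical tags.
$refUrls = array(
    'http://www.example.com/page.html?ref=123',
    'http://www.example.com/other.html?ref=456',
);

header('Content-Type: application/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($refUrls as $url) {
    echo '  <url><loc>' . htmlspecialchars($url) . '</loc></url>' . "\n";
}
echo '</urlset>';
?>
```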
-
RE: Duplicate content due to parked domains
What was happening when they were parked - were they 302-redirected, or was it some kind of straight CNAME situation where, theoretically, Google shouldn't have even seen the parked domains? The trick, of course, is that Google is a registrar, so they can see a lot that isn't necessarily public or crawlable.
Did the additional domains get indexed while parked, or after you went to 301-redirects?
-
RE: Duplicate content due to parked domains
Oh, and how many domains are we talking (ballpark)?
-
RE: Tired of finding solution for duplicate contents.
As best I can tell, your canonical tags are properly implemented and Google doesn't seem to be indexing any URLs with "items_per_page" in them. Our crawler and desktop crawlers may be getting confused because there are internal paths to these variations.
Ideally, that pulldown probably shouldn't be crawlable, but I think your canonical implementation as it stands is ok. I don't see any evidence that Google is having problems with it. It may just be a false alarm on our part.
-
RE: Duplicate content due to parked domains
Ugh... 75 is a chunk. The problem is that Google isn't a huge fan of 301-redirecting a bunch of new domains, because it's been too often abused in the past by people buying up domains with history and trying to consolidate PageRank. So, it's possible that (1) they're suspicious of these domains, or (2) they're just not crawling/caching them in a timely manner, since they used to be parked.
Personally, unless there's any link value at all to these, I'd consider completely de-indexing the duplicate domains - at this point that probably does mean removal in Google Search Console and adding Robots.txt (which might be a prerequisite of removal, but I can't recall).
Otherwise, your only real option is just to give the 301-redirects time. It may be a non-issue, and Google is just taking its time. Ultimately, the question is whether these are somehow harming the parent site. If Google is just indexing a few pages but you're not being harmed, I might leave it alone and let the 301s do their work over time. I checked some headers, and they seem to be set up properly.
If you're seeing harm or the wrong domains being returned in search, and if no one is linking to those other domains, then I'd probably be more aggressive and go for all-out removal.
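If you do go the aggressive route, one practical wrinkle with 75 domains on one codebase is robots.txt. Here's a sketch of a dynamic approach - it assumes the duplicate domains resolve to the same server and that robots.txt requests are excluded from the 301 rules (both assumptions, so check your setup):

```php
<?php
// Sketch: one robots.txt script serving all of the domains.
header('Content-Type: text/plain');

$host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';

if ($host === 'www.parent-site.com') {   // placeholder for the real site
    echo "User-agent: *\nDisallow:\n";   // normal, open rules
} else {
    echo "User-agent: *\nDisallow: /\n"; // block the parked duplicates
}
?>
```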
-
RE: Duplicate Content for Multiple Instances of the Same Product?
Yeah, no argument there. I worry about it from an SEO standpoint, but sometimes there really isn't a lot you can do, from a business standpoint. I think it's occasionally worth a little fight, though - sometimes, when all the dealers want to have their cake and eat it, too, they all suffer (at least, post-Panda). Admittedly, that's a long, difficult argument, and you have to decide if it's worth the price.
-
RE: Rankings appear mixed up causing huge drop in organic
At this point, we don't have much insight into what's been going on over the past week, other than "It's not Penguin, probably" and that, as Peter N. said, multiple tools are showing rankings shake-ups.
If you're talking about a total loss of top pages, though, I think it can be premature to assume an update was in play. I'd definitely thoroughly check the technical aspects. Are these pages still being indexed? Are they being cached properly? Do they show up for longer-tail or exact-match terms (in quotes) - in other words, have they dropped in ranking or are they ranking for nothing at all? The more you can pin down, the better.
Unfortunately, it's very hard to speak in generalities and tell you what factors were involved in this week's updates. It really takes a deep dive into the site(s) in question.
-
RE: Why are the bots still picking up so many links on our page despite us adding nofollow?
The main issue with too many on-page links is just dilution - there's not a hard limit, but the more links you have, the less value each one has. It's an unavoidable reality of internal site architecture and SEO.
Nofollow has no impact on this problem - link equity is still used up, even if the links aren't follow'ed. Google changed this a couple of years back due to abuse of nofollow for PageRank sculpting.
Unfortunately, I'm having a lot of issues loading your site, even from Google's cache, so I'm not able to see the source code first-hand.
-
RE: High charts, anyone used them? SEO impact/things to take into consideration
Looking at what's cached and appearing in search snippets, it seems like Google is crawling at least some of the text in my HighCharts on MozCast.com. Those charts aren't text-heavy, so it's hard to tell how they weight it or if that text is eligible for ranking, but I strongly suspect that Google is at least aware of HighCharts and can parse some of the data.
-
RE: Should I rename URLs to use hyphens instead of underscores?
I'd tend to agree with Nakul on proceeding with caution - while Google doesn't necessarily treat "_" as a word separator, the URL is just one relatively small ranking factor. There are many risks in a site-wide 301-redirect, especially when you're redesigning. If the redesign runs into SEO trouble, you're not going to be able to separate the many changes, and that could delay fixing any problems.
The exception would be if you're planning to change a lot of the URLs anyway, as part of the redesign. Then, I'd go ahead and do it all at once. Hyphens are a nice-to-have - I'm just not sure that, practically, the rewards outweigh the risks. It does depend a lot on how you're currently ranking and whether the URLs are causing you any major headaches.
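If you do fold the change into the redesign, the redirect logic itself is only a few lines - a sketch, assuming underscores only appear in the URL path (re-append the query string if your URLs carry one):

```php
<?php
// Sketch: 301 any URL whose path contains underscores to the hyphenated version.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (strpos($path, '_') !== false) {
    header('Location: ' . str_replace('_', '-', $path), true, 301);
    exit;
}
?>
```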
-
RE: High charts, anyone used them? SEO impact/things to take into consideration
Yeah, the main MozCast graphs use them, and those are public. Google seems to index the pages fine and, as best I can tell, is even parsing a lot of the text content on the graph itself. I haven't seen any SEO issues at this point.
I think I'd be wary of putting any really critical content (to search) entirely in a HighChart. You might want to break it out. If a page is nothing but a chart, it's kind of like a page that's nothing but a video embed. It still has value, potentially, but if you had a lot of them it could start to look thin and bots might not see all of that rich content.
That said, the HighChart is a pretty heavy element of the MozCast home-page, and it seems to rank fine. I'd really just start to worry if you had 100s of pages that were each a different chart but had very little supporting content.
The other issues are technical. Load times are good with HighCharts, but mobile rendering can vary a bit. It's decent, but there are variations. If you have a big mobile audience, I'd definitely make sure things look the way you think and are accessible. If you have a less tech-savvy audience with JS turned off, obviously that's a consideration, too. I think that's rarely a fear these days, though.
-
RE: Will rel=canonical work here?
I agree with Tom that rel=canonical should work here, but it does depend a bit on the scope and structure of the site. I might actually META NOINDEX the printable versions, as these are usually dead-ends and have no value to search visitors. I'm not entirely sure I understand what (3) is or why it's a separate page.
In general, though, you should get these under control, as it sounds like every advert basically has 4 different URLs. This could dilute your ranking ability and even cause Panda problems.
-
RE: Google Answer Box For Old Forum Discussions
I'm pretty much 100% on board with Everett, but I'll just add a couple of things. I do think it's worth experimenting, and this is a good time to do that. That said, there's a lot we still don't know.
You do have to rank on page 1 for the queries in question. So, if you're not clearing that hurdle, this won't be time well spent. If you are clearing it for a solid % of pages, then excellent. Move forward.
We have a pretty good sense of how to change a "Featured Snippet" (Google's term for attributed-link answer boxes) and how to take one from someone else, but not how to get Google to show one where they don't currently. So, if you make your page more answer-like, it will increase your odds of getting an existing Featured Snippet, but we're not clear whether it increases the odds of the query being interpreted as a question. In other words, can well-organized content get you from no Featured Snippet at all to Google showing one? Initial experiments suggest "no" - or at least, that it's not easy. That said, the percentage of queries with Featured Snippets has been increasing steadily.
I think, done right, this could have user value and actually drive down bounce rates. Try to make the result something that site visitors would find useful as well. I think these kinds of answers can be win-win for both SEO and CRO.
If you want to see if your page would be eligible for a Featured Snippet (and what text Google is seeing), use "site:yoursite.com query". If you have pages eligible for a Featured Snippet for that term, you'll see which page Google is choosing and which text on the page they're selecting. That can be a big help, since this is otherwise a black box.
-
RE: Drupal infinite URL depth? SEOMOZ treating as duplicate content
I'm not a Drupal expert, but it sounds like you may have some kind of relative path that's getting perpetuated. Robots.txt could help as a patch, but I'd definitely want to solve the crawl problem, as this could spin out into other problems.
Have you tried a desktop crawler, like Xenu or Screaming Frog? Sorry, it's tough to diagnose without seeing the actual site, but it's almost got to be a relative path that's causing "/product" to keep being added to links.
-
RE: Google is showing the Wrong Date for our Holiday.
Yeah, unfortunately, that's core Knowledge Graph (not a Featured Snippet), so it's coming from a partnered data source. I'm not sure where they're pulling this from, though, as the Wikidata/Wikipedia entries seem correct:
https://en.wikipedia.org/wiki/Free_Shipping_Day
There's not even a "report" option on this one, so I'm afraid Alick300 is right - you're going to have to contact Google directly. I'll ask around on Twitter, but Google isn't transparent at all about these data sources.
-
RE: Should I migrate clients site to older established domain?
So, just to be sure: six months ago, they 301-redirected the old domain to the new domain, correct? A couple of questions:
(1) Have you verified that this is a single-hop 301-redirect using 3rd-party header checkers (see the sketch after this list)? In other words, make sure it's working the way they think it's working. I don't say that to be condescending, but because I've seen it screwed up many, many times.
(2) Did they redirect on a page-by-page basis, or did everything just go to the new home-page? Are there potentially pages that got left out?
(3) Is the new page topically relevant to the old page (sounds like they're the same company)?
(4) Is there any sign of bad links on the old domain? Sometimes, links that didn't impact a domain after a long time can cause problems after a redirect.
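On (1), any 3rd-party header checker will do, but you can also eyeball the redirect chain yourself - a quick sketch (the old-domain URL is a placeholder):

```php
<?php
// Sketch: dump the full header chain for a URL so you can confirm it's a
// single-hop 301. get_headers() follows redirects by default.
$headers = get_headers('http://www.old-domain.com/some-page/');

foreach ($headers as $line) {
    echo $line . "\n"; // you want to see exactly one "301 Moved Permanently"
}
?>
```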