Job Title: Director
Company: Bastion Insurance Services Ltd
Favorite Thing about SEO: Rand Fishkin
Hi Sean,
Never had a penalty and no robots.txt issues, but thanks for the response.
Joe
Hi all,
I launched a new website in Aug 2015 and have had some success with ranking organically on Google (positions 2-5 for all of my target terms). However, I'm still not getting any traction on Bing.
I know that they use completely different algorithms so it's not unusual to rank well on one but not the other, but the ranking behaviour that I see seems quite odd. We've been bouncing in and out of the top 50 for quite some time, with shifts of 30+ positions often on a daily basis (see attached). This seems to be the case for our full range of target terms, and not just the most competitive ones.
I'm hoping someone can advise on whether this is normal behaviour for a relatively young website, or if it more likely points to an issue with how Bing is crawling my site. I'm using Bing Webmaster Tools and there aren't any crawl or sitemap issues, or any significant SEO flags.
Thanks
Hi,
I'd been looking forward to seeing the latest index update for a Moz campaign set up in September, but it doesn't seem to be coming through. I'm still seeing that the next update is due on 14th Dec.
All of my other campaigns were updated on time, so I was wondering if it's normal to see different behaviour for relatively new sites/campaigns, or if it suggests that there's a problem somewhere (other than my impatience)?
Many thanks,
Monday morning, still the same, still no reset/add parameters buttons in GWMT any more, still not understanding why Google is being so stubborn about this.
3 identical pages in the index, Google ignoring both GWMT URL parameter and canonical meta tag.
Sigh.
Nope, nice clean site map that GWMT says provides the right number of URLs with no 404s and no ?ref= links.
It's like Google has always indexed these links separately but for some reason has decided to only show them now that they no longer exist.
Ask Matt Cutts!
I've read that a 5-year registration is probably better than a 2-year one: if you consider what Google is looking for (authority implies longevity) and what it doesn't want to see (short-termism), it's plausible that registration length is a signal. Higher domain registration costs are a barrier to a business that operates MFAs or farms, for instance.
Given the price difference it's a no-brainer as far as I'm concerned. If you want hard evidence, A/B testing would probably be your only option.
Good morning Moz...
This is a weird one. It seems to be a "bug" with Google, honest...
We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance
www.three-clearance.co.uk/apple-phones.html
..could be reached via
www.three-clearance.co.uk/apple-phones.html?ref=menu or
www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on.
GWMT was told about the ?ref= parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
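To make that setup concrete, here's a rough sketch (illustrative only, not our actual tracking code; standard library only) of how every ?ref= variant reduces to the same root URL, which is what the canonical tag pointed at:

```python
# Illustrative sketch only: the ?ref= tracking variants all collapse to the
# root URL that the canonical meta tag declared.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_url(url, tracking_params=("ref",)):
    """Return the URL with known tracking parameters stripped out."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in tracking_params]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

variants = [
    "http://www.three-clearance.co.uk/apple-phones.html?ref=menu",
    "http://www.three-clearance.co.uk/apple-phones.html?ref=sidebar",
]
# Both variants reduce to the same root URL.
print({canonical_url(u) for u in variants})
# {'http://www.three-clearance.co.uk/apple-phones.html'}
```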
This is the chain of events:
Site migrated to new platform following best practice, as far as I can attest to.
Only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
URL structure and URIs were maintained 100% (which may be a problem, now)
Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
Run, not walk, to Google and do some Fu:
http://is.gd/yJ3U24 (9 versions of the same page, in the index, the only variation being the ?ref= URI)
Checked Bing and it has indexed each root URL once, as it should.
Situation now:
Site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it's at over 1,000 (another wtf moment).
I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and HTML site-map page.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form) include:
A) robots.txt-ing anything with ?ref= in the URL, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct
B) Hand-removing the URLs from the index through a page removal request per indexed URL
C) Applying a 301 to each indexed ?ref= URL, pointing it back to its root (hello Bing dirty-sitemap penalty) - see the sketch after this list
D) Post on SEOMoz because I genuinely can't understand this.
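As flagged in option C, the mapping itself is trivial. Something like this rough sketch (hypothetical input list, standard library only, nothing we've actually deployed) would produce the 301 pairs, which would then be fed into whatever redirect mechanism the platform provides:

```python
# Sketch of option C only: map every indexed ?ref= variant back to its root
# URL so each one can be 301-redirected.
from urllib.parse import urlsplit, urlunsplit

def root_url(url):
    """Drop the query string entirely, leaving the root URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Hypothetical export of the indexed duplicates from the GWMT report.
indexed = [
    "http://www.three-clearance.co.uk/apple-phones.html?ref=menu",
    "http://www.three-clearance.co.uk/apple-phones.html?ref=sidebar",
    "http://www.three-clearance.co.uk/apple-phones.html?ref=footer",
]

for old in indexed:
    if "?ref=" in old:
        print(f"301: {old} -> {root_url(old)}")
```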
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting - I have no idea why and can't think of the best way to correct the situation.
Do you?
Edited To Add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Thanks Tela.
I think you might be on to something here. You're right that the worry is about looking needlessly spammy by having too many affiliate links on the page, and also about conserving link juice.
It's something I'll have to speak to our development team about because generating the tariff code dynamically might take a fair bit of work. It's definitely an idea I think we should investigate.
Regarding the interstitial URL/step after the user selects the phone they want - there is already a 'transfer page' that holds them for a few seconds before taking them to the network's basket/checkout. I fear that adding yet another step before that would have a negative impact on the customer journey, as we already see people dropping out in the post-transfer stage before completing the sale.
Cheers for the help.
Thanks Dr Pete.
The target page takes the customer to a dynamic 'transfer' page with affiliate tracking information that ensures the sale gets attributed to us. We have to do this because we don't have our own cart/checkout system. It's not an affiliate link swapping program or anything dubious - we don't actually get linked back to by the networks. I'd have thought Google was used to handling official affiliate programs.
I can totally see why having this many external affiliate links on the page would look bad to Google, but there is little we can do about the number of deals that the network offers. Our system of showing a restricted number of deals upon landing, with the option to see 10 more at a time, helps deal with UX issues.
It's reassuring to note that it is less of an issue because it is a deeper page than the home page.
Seeing as we are official affiliates of the major networks, can you recommend any practices or techniques to mitigate the impact of large numbers of affiliate links to their sites?
I get what you're saying. That's the general SEO best practice that I'm aware of. I was just looking for something a bit deeper than that general kind of guidance.
Our user navigation isn't ideal (sadly there's not much I can do about it as the SEO), but with the right filters and options it works OK. We can't really remove the links because they are the tariff options as they come through from the networks themselves. We do however show a tailored few when people land on the page, with the option to see all deals.
With that in mind, I'm essentially asking: is there a better way to mark up these links than with rel="external"? They are external links after all, but we don't want to risk having this many links on the page causing negative side effects.
The user experience is generally fine and the number of links is fixed. I wonder if we can't do better with what we currently have by improving our PR distribution somehow.
Here is an example of a product page:
It appears that core Drupal includes a CSS style that automatically generates an H2 heading reading "Main menu".
The CSS renders it as a 1px by 1px heading with that text, absolutely positioned in the top left-hand corner. Essentially, hidden and unreadable to humans, and presumably also useless even to screen readers.
There is some discussion of the reasoning for including this functionality as standard here:
http://drupal.org/node/1392510
I'm not convinced of its use/validity/helpfulness from an SEO perspective so there's a few questions that arise out of this.
1. Is there a valid non-SEO reason for leaving this as the default, rather than giving ourselves full control over our H2 tags?
2. Could this be seen as cloaking by creating hidden/invisible elements that are used by the search engines as ranking factors?
Update:
http://www.seobythesea.com/2013/03/google-invisible-text-hidden-links/
Google's latest patent appears to deal with this topic. The patent document even makes explicit reference to the practice of hiding text in H2 tags that are invisible to users and are not proper headings.
Anyone have any thoughts on what SEOs using Drupal should be doing about this?
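For anyone wanting to see what their own theme is emitting, here's a rough audit sketch (assumption: the heading carries Drupal's stock "element-invisible" class; adjust the class name if your theme uses something else):

```python
# Rough audit sketch: list headings that are hidden from sighted users via
# the "element-invisible" class but still present in the markup.
from html.parser import HTMLParser

class HiddenHeadingFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self._current = None   # tag name while inside a hidden heading
        self.found = []        # (tag, text) pairs

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if tag in ("h1", "h2", "h3") and "element-invisible" in classes:
            self._current = tag

    def handle_data(self, data):
        if self._current and data.strip():
            self.found.append((self._current, data.strip()))

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

# Example against a fragment like the one Drupal emits for the menu block.
finder = HiddenHeadingFinder()
finder.feed('<h2 class="element-invisible">Main menu</h2><ul><li>Home</li></ul>')
print(finder.found)  # [('h2', 'Main menu')]
```

Running that over the rendered page source at least tells you exactly which hidden headings the theme is emitting before you decide whether to override them.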
Director of Bastion Insurance Services Ltd, a mobile phone and gadget insurance company specialising in online marketing.