Hi Sean,
Never had a penalty and no robots.txt issues, but thanks for the response.
Joe
Hi all,
I launched a new website in Aug 2015, and have had some success with ranking organically on Google (position 2 - 5 for all of my target terms). However, I'm still not getting any traction on Bing.
I know that they use completely different algorithms so it's not unusual to rank well on one but not the other, but the ranking behaviour that I see seems quite odd. We've been bouncing in and out of the top 50 for quite some time, with shifts of 30+ positions often on a daily basis (see attached). This seems to be the case for our full range of target terms, and not just the most competitive ones.
I'm hoping someone can advise on whether this is normal behaviour for a relatively young website, or whether it more likely points to an issue with how Bing is crawling my site. I'm using Bing Webmaster Tools and there aren't any crawl or sitemap issues, or any significant SEO flags.
Thanks
Hi,
I'd been looking forward to seeing the latest index update for a Moz campaign set up in September, but it doesn't seem to be coming through. I'm still seeing that the next update is due on 14th Dec.
All of my other campaigns were updated on time, so I was wondering if it's normal to see different behaviour for relatively new sites/campaigns, or if it suggests that there's a problem somewhere (other than my impatience)?
Many thanks,
Monday morning, still the same, still no reset/add parameter buttons in GWMT, still not understanding why Google is being so stubborn about this.
3 identical pages in the index, Google ignoring both GWMT URL parameter and canonical meta tag.
Sigh.
Nope, it's a nice clean sitemap that GWMT says provides the right number of URLs, with no 404s and no ?ref= links.
It's like Google has always indexed these links separately but for some reason has decided to only show them now that they no longer exist.
Ask Matt Cutts!
I've read that a 5-year registration is probably better than a 2-year one: if you consider what Google is looking for (authority implies longevity) and what it doesn't want to see (short-termism), it's plausible that registration length is a signal. Higher domain registration costs are a barrier to a business that operates MFA sites or content farms, for instance.
Given the price difference it's a no-brainer as far as I'm concerned. If you want hard evidence, A/B testing would probably be your only option.
Good morning Moz...
This is a weird one. It seems to be a "bug" with Google, honest...
We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance
www.three-clearance.co.uk/apple-phones.html
...could be reached via
www.three-clearance.co.uk/apple-phones.html?ref=menu or
www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on.
GWMT was told about the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
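For context, this is roughly what the old setup looked like in the page source; the markup below is a reconstruction for illustration, not copied from the live site.

```html
<!-- Illustrative reconstruction of the old setup: the page reached via
     www.three-clearance.co.uk/apple-phones.html?ref=menu declares the clean
     URL as canonical, so all ?ref= tracking variants should consolidate. -->
<head>
  <title>Apple phones</title>
  <link rel="canonical" href="http://www.three-clearance.co.uk/apple-phones.html" />
</head>
```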
This is the chain of events:
Site migrated to the new platform following best practice, as far as I can tell.
The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
URL structure and URIs were maintained 100% (which may be a problem, now)
Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
Run, not walk, to Google and do some Fu:
http://is.gd/yJ3U24 (9 versions of the same page, in the index, the only variation being the ?ref= URI)
Checked Bing and it has indexed each root URL once, as it should.
Situation now:
Site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it's at over 1,000 (another wtf moment).
I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form) include:
A) robots.txt-ing the ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct
B) Hand-removing the URLs from the index through a page removal request per indexed URL
C) Applying a 301 to each indexed URL (hello, Bing dirty sitemap penalty)
D) Posting on SEOmoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use: the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL parameter setting; I have no idea why, and I can't think of the best way to correct the situation.
Do you?
Edited To Add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Thanks Tela.
I think you might be on to something here. You're right that the worry is about looking needlessly spammy by having too many affiliate links on the page, and also about conserving link juice.
It's something I'll have to speak to our development team about because generating the tariff code dynamically might take a fair bit of work. It's definitely an idea I think we should investigate.
Regarding the interstitial URL/step after the user selects the phone they want - there is already a 'transfer page' that holds them for a few seconds before taking them to the network's basket/checkout. I fear that adding yet another step before that would have a negative impact on the customer journey, as we already see people dropping out in the post-transfer stage before completing the sale.
Cheers for the help.
Thanks Dr Pete.
The target page takes the customer to a dynamic 'transfer' page with affiliate tracking information that ensures the sale gets attributed to us. We have to do this because we don't have our own cart/checkout system. It's not an affiliate link swapping program or anything dubious - we don't actually get linked back to by the networks. I'd have thought Google was used to handling official affiliate programs.
I can totally see why having this many external affiliate links on the page would look bad to Google, but there is little we can do about the number of deals that the networks offer. Our system of showing a restricted number of deals upon landing, with the option to see 10 more at a time, helps deal with the UX issues.
It's reassuring to note that it is less of an issue because it is a deeper page than the home page.
Seeing as we are official affiliates of the major networks, can you recommend any practices or techniques to mitigate the impact of large numbers of affiliate links to their sites?
I get what you're saying. That's the general SEO best practice that I'm aware of. I was just looking for something a bit deeper than general kind of guidance.
Our user navigation isn't ideal (sadly there's not much I can do about it as an SEO) but with the right filters and options it works OK. We can't really remove the links because they are the tariff options as they come through from the networks themselves. We do, however, show a tailored few when people land on the page, with the option to see all deals.
With that in mind, I'm essentially asking: is there a better way to mark up these links than with rel="external"? They are external links, after all, but we don't want to risk having this many links on the page cause negative side effects.
The user experience is generally fine and the number of links is fixed. I wonder if we can't do better with what we currently have by improving our PR distribution somehow.
Here is an example of a product page:
Hi,
Though similar to other questions on here I haven't found any other examples of sites in the same position as mine.
It's an e-commerce site for mobile phones that has product pages for each phone we sell. Each tariff that is available on each phone links through to the checkout/transfer page on the respective mobile phone network. Therefore, when the networks offer 62 different tariffs on a single phone, we automatically start with 62 on-page links, which quickly tips us over the 100-link threshold.
Currently, we mark these up as rel="external", but I'm wondering if there's a better way to handle the situation and prevent us from being penalised for having too many links on the page: is there a better way to mark these links up than with rel="external"?
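To make the markup concrete, here's a minimal sketch of the kind of tariff link described above, plus the rel="nofollow" variant that's often suggested for affiliate-style links; the URLs and deal details are made up, and nofollow is a general option rather than anything confirmed in this thread.

```html
<!-- Hypothetical example of a tariff link as currently marked up.
     rel="external" only signals that the link leaves the site; it has no
     recognised effect on how search engines treat the link. -->
<a rel="external" href="https://checkout.example-network.com/deal?id=12345">
  24-month contract, 1GB data, £23/month
</a>

<!-- A commonly suggested alternative for affiliate/commercial links
     (an assumption, not advice given in this thread): adding nofollow
     asks search engines not to pass link equity through the link. -->
<a rel="external nofollow" href="https://checkout.example-network.com/deal?id=12345">
  24-month contract, 1GB data, £23/month
</a>
```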
As always, any help or advice would be much appreciated
Thanks Andy.
We don't really get enough "images.google" traffic to justify the effort needed to rejig everything for that reason alone.
However, that does raise a secondary question of whether we should be improving our image search SEO. Do image search rankings contribute to regular search rankings?
Does anyone think it would be worth redirecting the old image file paths to the new ones instead? I can't think of many cases (if any) where it would be necessary to have these redirects, but I know that my predecessors have done this in past migrations.
Hi,
I'm wondering how important it is when relaunching a site on a new platform (switching to Drupal) to serve up images from the same file paths in order to ensure consistency during the changeover.
I've tried to keep the questions straightforward so that this post can be useful to people in a similar situation in the future.
Any help would be much appreciated
Thanks Mike.
We're going to run with it for a while on one of our sites and see how it performs. I'll try to post any meaningful results here at a later date.
Thanks Corey.
It's certainly something that had us a bit worried.
The maximum number of hidden H2s on our Drupal pages is something like 2-3, and in each case the H2 serves to provide a description for the following ul/ol HTML tags (which, it can be argued, is just good semantic markup). If this is the case, could it still be penalised as cloaking? Essentially, is cloaking seen as an absolute practice in the eyes of the search engines, or is it more subjective? Is a site penalised for appearing to use cloaking methods in a black-and-white sense, in line with certain criteria, or do they rate it by degrees?
(I realise they are questions we might not be in a position to know the answer to.)
I'm still in two minds about wasting 2-3 heading tags by having them wrap around "main menu" content on seemingly every page. As it stands, they are automatically generated around our breadcrumb and our main menu buttons at the top of the page, and are used simply to describe the menus on the page.
My worry is that even if this is not having a negative impact re: cloaking, it is still a waste of H2 tags. If we have these 2-3 just describing the (global) menus and a further 1-2 describing the actual content of the page, that's not really ideal from an SEO point of view.
In our case, I wonder if it might be worth sacrificing semantic structure for the SEO benefit?
Thanks.
It appears that core Drupal includes a template and CSS style that automatically generate an H2 reading "Main menu" around the menu block. The CSS collapses this header to a 1px x 1px element absolutely positioned in the top left-hand corner: essentially hidden and unreadable to humans, and presumably also useless even to screen readers.
There is some discussion of the reasoning for including this functionality as standard here:
[http://drupal.org/node/1392510](http://drupal.org/node/1392510)
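For readers who haven't seen it, the pattern looks roughly like the snippet below; the "element-invisible" class name and the exact CSS are from memory of Drupal's defaults and may differ between versions and themes.

```html
<!-- Approximate reconstruction of the default Drupal output described above;
     the class name and CSS properties are assumptions and may vary by version/theme. -->
<style>
  /* Collapses the heading to a 1px clipped box: still in the DOM (and in the
     heading outline crawlers see) but invisible to sighted users. */
  .element-invisible {
    position: absolute !important;
    height: 1px;
    width: 1px;
    overflow: hidden;
    clip: rect(1px, 1px, 1px, 1px);
  }
</style>

<h2 class="element-invisible">Main menu</h2>
<ul class="menu">
  <li><a href="/">Home</a></li>
  <li><a href="/contact">Contact</a></li>
</ul>
```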
I'm not convinced of its use/validity/helpfulness from an SEO perspective, so there are a few questions that arise out of this.
1. Is there a valid non-SEO reason for leaving this as the default, rather than giving ourselves full control over our H2 tags?
2. Could this be seen as cloaking by creating hidden/invisible elements that are used by the search engines as ranking factors?
Update:
http://www.seobythesea.com/2013/03/google-invisible-text-hidden-links/
Google's latest patent appears to deal with this topic. The patent document even makes explicit reference to the practice of hiding text in H2 tags that are invisible to users and are not proper headings.
Anyone have any thoughts on what SEOs using Drupal should be doing about this?
Within the <body> section of the source code of a site I work on, there are a number of distinct sections.
The first one, appearing first in the source code, contains the code for the primary site navigation tabs and links. The second contains the keyword-rich page content.
My question is this: if I could fix the layout so that the page still visually displayed the same way it does now, would it be advantageous for me to put the keyword-rich content section at the top of the <body>, above the navigation?
I want the search engines to be able to reach the keyword-rich content faster when they crawl pages on the site; however, I don't want to implement this fix if it won't have any appreciable benefit, nor if it will harm the search engines' access to my primary navigation links.
Does anyone have any experience of this working, or thoughts on whether it will make a difference?
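As a rough sketch of what's being asked (illustrative markup only, not the actual site): the keyword-rich block can sit first in the source while CSS restores the navigation to the top of the visual layout; flexbox ordering is used here, though floats or absolute positioning can achieve the same effect.

```html
<!-- Illustrative sketch: content first in source order, navigation first on screen. -->
<head>
  <style>
    .page { display: flex; flex-direction: column; }
    .page .site-nav { order: 1; } /* shown first visually */
    .page .content  { order: 2; } /* shown second visually, but first in the source */
  </style>
</head>
<body>
  <div class="page">
    <div class="content">
      <h1>Keyword-rich page content</h1>
      <p>Copy that crawlers now reach earlier in the source.</p>
    </div>
    <div class="site-nav">
      <a href="/">Home</a>
      <a href="/products">Products</a>
    </div>
  </div>
</body>
```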
Thanks,