Questions created by Tinhat
-
Is it normal for Bing rankings to fluctuate so much on a daily basis?
Hi all, I launched a new website in Aug 2015, and have had some success with ranking organically on Google (position 2 - 5 for all of my target terms). However, I'm still not getting any traction on Bing. I know that they use completely different algorithms, so it's not unusual to rank well on one but not the other, but the ranking behaviour that I see seems quite odd. We've been bouncing in and out of the top 50 for quite some time, with shifts of 30+ positions, often on a daily basis (see attached). This seems to be the case for our full range of target terms, not just the most competitive ones. I'm hoping someone can advise on whether this is normal behaviour for a relatively young website, or whether it more likely points to an issue with how Bing is crawling my site. I'm using Bing Webmaster Tools and there aren't any crawl or sitemap issues, or significant SEO flags. Thanks
Intermediate & Advanced SEO | | Tinhat0 -
Not seeing index update for one individual campaign
Hi, I'd been looking forward to seeing the latest index update for a Moz campaign set up in September, but it doesn't seem to be coming through. I'm still seeing that the next update is due on 14th Dec. All of my other campaigns were updated on time, so I was wondering whether it's normal to see different behaviour for relatively new sites/campaigns, or whether it suggests that there's a problem somewhere (other than my impatience)? Many thanks,
Link Explorer | | Tinhat0 -
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.

This is the chain of events:

- Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be a problem, now).
- Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out, and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
- Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI).
- Checked Bing and it has indexed each root URL once, as it should.

Situation now:

- The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and, of course, the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows. Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:

A) robots.txt-ing ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOmoz because I genuinely can't understand this.

Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why, and can't think of the best way to correct the situation. Do you? 🙂

Edited to add: as of this morning, the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
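For what it's worth, option C doesn't have to be one redirect per indexed URL: since every duplicate is just root-URL-plus-?ref=, a single pattern rule can 301 them all. A minimal sketch, assuming Apache with mod_rewrite and assuming ref= is the only query parameter those pages ever carry (neither of which I can confirm for the actual site):

```apache
# Hypothetical rule: 301 any request carrying a ref= tracking parameter
# to the same path with the query string stripped (the trailing "?" in
# the substitution drops it). Assumes ref= is the only parameter in use.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)ref=
RewriteRule ^(.*)$ /$1? [R=301,L]
```

That would also catch the external backlinks that still use ?ref=, funnelling them to the clean URLs rather than leaving them to resolve as duplicates.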
Technical SEO | | Tinhat0 -
Too Many On Page Links, rel="nofollow" and rel="external"
Hi, Though similar to other questions on here, I haven't found any other examples of sites in the same position as mine. It's an e-commerce site for mobile phones that has product pages for each phone we sell. Each tariff that is available on each phone links through to the checkout/transfer page on the respective mobile phone network. So when the networks offer 62 different tariffs on a single phone, we automatically start with 62 on-page links, which quickly tips us over the 100-link threshold. Currently we mark these up as rel="external", but I'm wondering if there isn't a better way to help the situation and prevent us being penalised for having too many links on the page, so:

- Can/should we mark these up as rel="nofollow" instead of, or as well as, rel="external"?
- Is it inherently a problem from a technical SEO point of view?
- Does anyone have any similar experiences or examples that might help myself or others?

As always, any help or advice would be much appreciated 🙂
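As a concrete illustration of the markup in question (the network URL and tariff name below are made up): the rel attribute takes a space-separated list of values, so "nofollow" and "external" aren't mutually exclusive and can sit on the same anchor:

```html
<!-- Hypothetical tariff link. "nofollow" asks engines not to pass
     equity through the link; "external" merely labels the destination
     as off-site and carries no crawling semantics. -->
<a href="https://checkout.example-network.com/transfer?tariff=123"
   rel="nofollow external">24-month 1GB tariff</a>
```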
Web Design | | Tinhat0 -
Redirecting Images
Hi, I'm wondering how important it is, when relaunching a site on a new platform (switching to Drupal), to serve up images from the same file paths in order to ensure consistency during the changeover. I've tried to keep the questions straightforward so that this post can be useful to people in a similar situation in future:

- How much difference do the file paths make to SEO? Does Google care, or even notice, if the image file paths change?
- Is it worth forcing Drupal to mimic our old file paths for the sake of consistency with the old site in order to maintain rankings, OR do we take the opportunity to redesign our file paths for better SEO?

Any help would be much appreciated 🙂
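If the paths do end up changing, one common middle ground (rather than forcing Drupal to mimic the old structure) is a pattern 301 from the old image directory to the new one, so existing image rankings and inbound hotlinks follow the move. A sketch assuming Apache, with purely illustrative paths — Drupal's default public-files location is /sites/default/files/, but the old directory name here is invented:

```apache
# Hypothetical redirect: map every file under the old image directory
# to the same filename in Drupal's public-files directory.
RedirectMatch 301 ^/images/products/(.*)$ /sites/default/files/$1
```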
Web Design | | Tinhat0 -
Drupal SEO - Concerns about cloaking
It appears that core Drupal automatically generates an `<h2>` tag for blocks such as the main menu, e.g. `<h2>Main menu</h2>`, and then uses CSS to render it as a 1px × 1px header, absolutely positioned in the top left-hand corner. Essentially, hidden and unreadable to humans, and presumably also useless to even screen readers. There is some discussion of the reasoning for including this functionality as standard here: [http://drupal.org/node/1392510](http://drupal.org/node/1392510)

I'm not convinced of its use/validity/helpfulness from an SEO perspective, so a few questions arise out of this:

1. Is there a valid non-SEO reason for leaving this as the default, rather than giving ourselves full control over our `<h2>` tags?
2. Could this be seen as cloaking, by creating hidden/invisible elements that are used by the search engines as ranking factors?

Update: http://www.seobythesea.com/2013/03/google-invisible-text-hidden-links/ Google's latest patent appears to deal with this topic. The patent document even makes explicit reference to the practice of hiding text in `<h2>` tags that are invisible to users and are not proper headings. Anyone have any thoughts on what SEOs using Drupal should be doing about this?
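For reference, the hiding is done purely in CSS, along these lines (a sketch from memory of Drupal's `element-invisible` rule, so treat the exact declarations as illustrative rather than a verbatim copy of core):

```css
/* Sketch of Drupal's "element-invisible" pattern: the heading stays
   in the DOM but is clipped to a 1px square positioned off in the
   corner, so sighted users never see it. */
.element-invisible {
  position: absolute !important;
  clip: rect(1px, 1px, 1px, 1px);
  overflow: hidden;
  height: 1px;
}
```

Worth noting on the screen-reader point: unlike display:none, clip-based hiding like this is generally still announced by assistive technology, which is the accessibility rationale given in the linked Drupal thread.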
Web Design | | Tinhat1 -
Source code structure: Position of content within the `<body>` tag
Within the `<body>` section of the source code of a site I work on, there are a number of distinct sections. The first one, appearing first in the source code, contains the code for the primary site navigation tabs and links. The second contains the keyword-rich page content. My question is this: if I could fix the layout so that the page still displayed visually the same way as it does now, would it be advantageous for me to put the keyword-rich content section at the top of the `<body>`, above the navigation? I want the search engines to be able to reach the keyword-rich content faster when they crawl pages on the site; however, I don't want to implement this fix if it won't have any appreciable benefit, nor if it will be harmful to the search engines' accessibility to my primary navigation links. Does anyone have any experience of this working, or thoughts on whether it will make a difference? Thanks,
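A minimal sketch of the idea, with invented class names and content: the content block comes first in the markup, and CSS reorders the rendering so the navigation still appears on top visually. (Flexbox `order` is one way to do the visual reordering; absolute positioning achieves the same thing.)

```html
<!-- Hypothetical content-first source order: <main> precedes <nav>
     in the markup, while the flexbox "order" property puts the
     navigation back above the content visually. -->
<body>
  <style>
    .page        { display: flex; flex-direction: column; }
    .primary-nav { order: 1; }  /* rendered first */
    .content     { order: 2; }  /* rendered second, though first in source */
  </style>
  <div class="page">
    <main class="content">Keyword-rich page content…</main>
    <nav class="primary-nav"><a href="/phones">Phones</a></nav>
  </div>
</body>
```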
Technical SEO | | Tinhat0