How long does it take to reverse the problems caused by a rel=canonical?
-
If this weren't such a serious issue, it would be funny...
Long story cut short: a client had a penalty on their website, so they decided to stop using the .com and use the .co.uk instead. They had the .com removed from Google using Webmaster Tools (it had to be done, as it was ranking for a trademark they didn't own and there are ongoing legal arguments about it).
They launched a brand new website and placed it on both domains, with all SEO being done on the .co.uk. The web developer was then meant to put a rel=canonical on the .com pointing to the .co.uk (maybe not needed at all, thinking about it, if the site had been deindexed anyway). However, he managed to rel=canonical from the good .co.uk to the .com domain!
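To illustrate (the domains here are placeholders, obviously), the tag he was given should have gone in the head of each .com page like this:

<!-- What was meant to go on each .com page -->
<link rel="canonical" href="http://www.example.co.uk/some-page/" />

<!-- What actually went live, on each .co.uk page -->
<link rel="canonical" href="http://www.example.com/some-page/" />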
Maybe I should have noticed it earlier, but you shouldn't have to double-check others' work! I noticed it today, after a good six weeks or so. We are having a nightmare trying to rank the .co.uk for terms that should be pretty easy to rank for, given it's a decent domain.
Would people say that the rel=canonical back to the .com has harmed the .co.uk, and is still harming it while the tag remains in place? I'm of the opinion that it's basically telling Google that the .co.uk domain is a copy of the .com, so go rank that instead.
If so, how quickly after removing this tag would people expect any issues caused by its placement to vanish?
Thanks for any views on this. I now have the fun job of double-checking all the coding done by that web developer on other sites!
-
Yeah, if the .com is blocked now, there's really no point in putting 301s or canonicals over there, because they won't do anything (theoretically, at least). You could put self-referencing canonicals on the .co.uk site. It would at least be a nudge to Google to ignore the old canonicals (to the .com). Other than that, you may have to wait and see.
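Just to illustrate, a self-referencing canonical is simply the page pointing at its own URL (example.co.uk here is a placeholder):

<!-- On http://www.example.co.uk/some-page/ itself -->
<link rel="canonical" href="http://www.example.co.uk/some-page/" />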
As Alan said, you could 301-redirect the .com and then stop blocking it. Properly redirected, no visitors should be able to view the old pages. In some ways, that's even more reliable than blocking.
Update: Sorry, I realized that was a bit confusing, as I sort of told you that a 301 was pointless and then told you to 301. What I'm saying is that you could stop blocking the .com and THEN 301-redirect it. If it really is fully blocked, 301-ing it probably won't have any impact (although it won't hurt anything).
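If you do go that route, a minimal .htaccess sketch for the .com (assuming Apache with mod_rewrite; the domains are placeholders) might look like:

# Minimal sketch - assumes Apache + mod_rewrite; example domains are placeholders
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.co.uk/$1 [R=301,L]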
-
If the .com is deindexed, then I would either get rid of it or 301 it to the .co.uk.
-
Dr Pete,
The whole thing has been one issue after another with this client. One of those helpful clients who change their website and page structure without telling you. The first you hear about it is when they call wanting to know why their rankings have dropped!
The idea was to move away from the .com site and use the .co.uk site; however, they had a lot of people visiting the .com and wanted to keep it as a live site. What should have been done (and what I advised them to do) was to canonical from the .com to the .co.uk site, telling Google that the UK domain is now the main domain. Helpfully, and rather impressively, their web developer managed to put the canonical tag on the .co.uk domain, telling Google the .com was the main domain.
Then the .com got involved in a trademark dispute, so they decided to remove it from the Google listings via Webmaster Tools (it is still removed, as it still ranks for the trademark keyword when it's unblocked). The long and short of it was that they ended up in a position where the site they wanted to rank was being ignored by Google in favour of the site they had blocked from Google!
I guess now it's a question of just waiting for Google to recrawl the .co.uk and see the tag has gone. It's a basic SEO error on my part, but I would have trusted an experienced web designer to copy and paste the code I gave him onto the correct site.
Don't you just love the clients who won't give you FTP access and insist all changes go through their freelance web developer!!
Thanks for the help on this everyone
Carl
-
I'm thinking the same thing - if possible, the 301 might help override the canonical. Sometimes, in my experience, if you reverse a signal (like rel=canonical) with that same signal, Google takes its time to re-evaluate, because the reversal just looks odd. The 301 here might be more insistent.
The link profile and other signals should help, but I've seen reversing a bad canonical take weeks. It's a tough signal to undo.
Is the .com site still blocked, though? If you canonical'ed to a blocked site and are now trying to reverse it, but the site is still blocked, Google won't crawl any new signal you put there (the same would be true for 301s). If the .com is blocked somehow and you remove the bad canonical, Google may act more quickly (since canonicalizing to a blocked site would seem strange).
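For reference, if the block is being done with robots.txt (rather than just the Webmaster Tools removal), it's probably something like the lines below - and that's what would have to come out before Google could see a 301 or canonical on the .com:

# robots.txt on the .com - a full block looks like this (hypothetical example)
User-agent: *
Disallow: /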
-
I would not do anything; it will sort itself out soon enough. I don't think it will happen on the first crawl - as I remember, Google has said that they don't honour redirects the first time, as you may be making changes when crawled - so it may take a few crawls. It is also not clear whether you get all the link juice back when your page is crawled or when the pages that link to you are crawled. To explain further: if I had a link pointing to you, would the link juice point back to your .co.uk when your page gets crawled, or when my page gets crawled?
My guess is that you will start seeing value return over a period, from day one (as most sites get a few pages crawled each day) up until a couple of months.
-
Would a 301 from .com to .co.uk work better?
-
The dreaded call to the client - I don't envy you, but be open and honest and all will work out for the best.
As SEOs, we're supposed to notice the finer details, but we're only human.
In the past I've put these 'issues' down to 'communication problems with the client's outsourced developers'; perhaps they should consider moving development to you guys.
-
Thanks for the reply.
Hopefully the large number of backlinks to it will mean it gets recrawled very quickly. I had spent weeks trying to work out why I couldn't get the .co.uk homepage indexed in Google - now I know why. Now comes a nice call to the client, eek! Thankfully the web designer works freelance and is employed by the client, not me.
-
Hi MisterG,
I feel your pain - I learnt the same lesson a few years back. Now I double-check everything our devs do.
I agree with you that the canonical is telling Google to go rank the .com, as if it's the authoritative owner of the .co.uk's content.
How long it's going to take to remedy after sorting out the canonical is anyone's guess. I suppose it depends on how often (and how deeply) you get crawled.
Be patient and cross your fingers! (Oh, and don't be too harsh on the devs - they are simple, logical creatures!)