Canonical tag on site with multiple URLs but only one set of pages
-
We have a site, www.mezfloor.com, which has a number of URLs pointing at one site. As the URLs have been in use for many years, there are links from many sources, including good old-fashioned hard-copy advertising. We have now decided it would be better to start porting all sources to the .co.uk version and get that listed as the prime/master site.
A couple of days ago I went through and added canonical tags on all the pages, thinking that would set the priority and also strengthen the pages in terms of trust due to the reduced duplication. However, when I scanned the site in Moz, a warning came up that the page redirects, and I am beginning to think I need to remove all these canonical tags so that search engines do not get into a confused spiral where we lose the little PageRank we have.
Is there a way that I can redirect everything except the target URL without setting up a separate master site just for all the other pages to point at?
-
Yes, it is good when there is a clear Google guideline to follow. I'm happy for your quick win!
-
Thanks
I am pleased I do not have to go through the whole site again, and even more pleased as I have a number of other sites to work on. These could certainly do with a bit of a boost, and this is a quick win.
-
So you want to put a canonical of www.b.co.uk/index.html on a page that can be reached via www.b.co.uk/index.html and you are worried that it will become a loop?
Don't worry. Google specifically thought about the possibility that people might use self-referential canonicals (SEO plugins do it all the time) and engineered it so that this does not cause a loop. (See Matt Cutts on the topic.)
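For clarity, a self-referential canonical is just a single line in the page's `<head>` — here using your example URL:

```html
<!-- A self-referential canonical: the page declares its own preferred URL.
     Google treats this as a confirmation, not a redirect, so no loop occurs. -->
<link rel="canonical" href="http://www.b.co.uk/index.html">
```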
I myself inherited some ugly URLs for which I made nice user-friendly aliases, and I tagged those pages with the friendly canonical. There were no problems, and the pages started doing much better. (In my case it was not cross-domain, but cross-domain canonicals are supported, and in fact I have successfully used them in other situations.)
-
Hi, thanks for the response.
The issue is we have one set of pages on a server which is addressed through several different URLs.
I never got involved in the server side of things, so I do not know if that was done with redirects at the root URL. Maybe I am trying to add canonical links that just are not required.
If I have www.a.co.uk/index.html, www.a.com/index.html and www.b.co.uk/index.html, I want them all to point to www.b.co.uk/index.html. As index.html is only on the server once, my thought was that the page should contain a canonical link to itself, with www.b.co.uk/index.html as the target. This may be right or wrong, but there is the risk that a spider stops when it reaches the link and goes back to the start of the same page, again and again in a loop.
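In other words, what I think the canonical tag should be declaring is a mapping like this (a toy sketch in Python using the example hostnames, purely to illustrate — the function is hypothetical, not something on our server):

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.b.co.uk"
ALIAS_HOSTS = {"www.a.co.uk", "www.a.com"}

def canonical_url(url: str) -> str:
    """Rewrite any alias-host URL to its canonical equivalent.

    The same page served under an alias host maps to the one
    preferred URL; the canonical host maps to itself (no loop).
    """
    parts = urlsplit(url)
    if parts.netloc in ALIAS_HOSTS:
        parts = parts._replace(netloc=CANONICAL_HOST)
    return urlunsplit(parts)
```

Note the canonical host simply maps to itself, which is why a self-referential tag is not a loop.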
You are of course right that Googlebot should be OK with this, but the Moz bot stopped in its tracks and asked if I wanted the page indexed, so I had to do this manually.
Gut feel says I should remove the links for now, but I need to understand what we did server-side. Gut feel may be wrong, and I would prefer to do the right thing!
-
Okay, you lost me a little, but let me see if I can help.
First off, the canonical tag: it's fantastic for duplicate content (even across other sites), but not so useful if you don't have duplicate content.
301s: very similar to the above; they work well with duplicate content but aren't essential. You can 301 a few pages into one page, so if a user types a URL in (or even has it bookmarked) they will land on the page you want. It's normally a good idea to 301 into similar pages so you don't get users thinking they are going to buy (e.g.) a pair of boots and landing on a page about t-shirts.
Google getting lost: don't worry about Google getting lost; if a user can get around, so can Google. Plan, plan and plan again: map it all out (you can even draw flow diagrams) so you know where everything redirects to and from until you are happy. You can also get someone who doesn't know your site to test it and see if they get lost.
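For example, on Apache a couple of page-level 301s into the closest equivalent pages might look like this (hypothetical paths, just to illustrate the idea):

```apache
# Send retired pages to their nearest equivalents with permanent (301)
# redirects, so old links and bookmarks still land somewhere relevant.
Redirect 301 /old-boots.html http://www.b.co.uk/boots.html
Redirect 301 /winter-boots.html http://www.b.co.uk/boots.html
```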
Hope that background helps a bit. You lost me here:
"Is there a way that I can redirect everything except the target URL without setting up a separate master site just for all the other pages to point at."
Why can't you redirect all your pages to the target URL?
One helpful tool I recommend is Screaming Frog; it can help you pick up redirects, 404s, etc.
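If the sites are served by Apache with mod_rewrite (an assumption on my part — check what your server actually runs), a single host-based rule in .htaccess can redirect every domain except the target while keeping the path intact. A sketch:

```apache
RewriteEngine On
# Any request whose host is not www.b.co.uk gets a permanent (301)
# redirect to the same path on www.b.co.uk.
RewriteCond %{HTTP_HOST} !^www\.b\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.b.co.uk/$1 [R=301,L]
```

This answers your "everything except the target" worry: requests already on www.b.co.uk fail the condition and are left alone, so there is no redirect loop.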