Is a Rel Canonical Sufficient or Should I 'NoIndex'
-
Hey everyone,
I know there is literature about this, but I'm always frustrated by technical questions and prefer a direct answer or opinion. Right now, we've got rel canonicals set up to deal with parameters caused by filters on our ticketing site. For example, this:
http://www.charged.fm/billy-joel-tickets?location=il&time=day rel-canonicals to...
http://www.charged.fm/billy-joel-tickets
My question is whether this is good enough to deal with the duplicate content, or whether these pages should be de-indexed. If they should, is the best way to do this with robots.txt, or do you have to individually 'noindex' these pages?
This site has 650k indexed pages, and I suspect the majority of them are caused by URL parameters. While they all canonical to the proper place, I'm thinking it would be best to have them de-indexed to clean things up a bit.
Thanks for any input.
-
I totally agree with EGOL on this. I would like to add my two cents, since I think I am one of the few SEO people who is also a developer.
This is what I would do (in pseudo code): put a <link rel="canonical" href="<?php echo strtok($_SERVER['REQUEST_URI'], '?'); ?>" /> in the <head> of your page template.
This is PHP (I don't know what platform you are on), but what it does is take the current URL and drop the '?' and everything after it, so the canonical ends up being the URL minus the query string. I use this technique a lot with my clients for canonical URLs on CMSs that use query strings, and it works great.
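To make that concrete, here is a minimal, self-contained sketch of the same idea that also builds an absolute URL. It assumes an Apache-style environment where $_SERVER['HTTP_HOST'] and $_SERVER['REQUEST_URI'] are populated, and plain HTTP as in the example URLs above (swap the scheme if the site runs on HTTPS):

    <?php
    // Strip the query string from the current request:
    // "/billy-joel-tickets?location=il&time=day" becomes "/billy-joel-tickets".
    $path = strtok($_SERVER['REQUEST_URI'], '?');

    // Prepend the scheme and host so the canonical is an absolute URL.
    $canonical = 'http://' . $_SERVER['HTTP_HOST'] . $path;
    ?>
    <link rel="canonical" href="<?php echo htmlspecialchars($canonical, ENT_QUOTES); ?>" />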
-
Hi - just to throw in my two cents - the canonicals should do it, as Moosa says, but if you really want to de-index, then a dynamic meta robots tag is the best way to get them out of the index in my experience.
That being said, having a quick look at your site, it doesn't look like those URL parameters are the issue. A query like site:charged.fm inurl:date= only shows a few thousand results, and location= and time= show even fewer, so it looks like the rel canonicals are doing the job and will continue to with a bit of patience. If you look at URLs with /event/ in them, however, you see a lot (300,000+), and I am guessing many of those are for past events. Google Webmaster Tools should help you identify what the bulk of those 600,000 URLs are, so it's worth verifying where the exact issue is before attempting to fix something that isn't a problem...
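If you do decide to de-index the filtered views, a minimal sketch of a dynamic meta robots tag in the same PHP style as the snippet above could look like this. It assumes the filter parameters are the ones from the question (location, time, date) and only emits the tag when one of them is present, so the clean URLs stay indexable:

    <?php
    // Hypothetical filter parameters; adjust to match the site's real ones.
    $filterParams = array('location', 'time', 'date');

    // Emit noindex,follow only when the request carries at least one filter parameter.
    if (array_intersect($filterParams, array_keys($_GET))) {
        echo '<meta name="robots" content="noindex, follow" />' . "\n";
    }
    ?>

Using "follow" rather than "nofollow" lets the links on the filtered pages keep passing signals to the event pages, while the filtered pages themselves drop out of the index.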
-
There are a few choices for managing parameters. I have used....
A) The URL parameter manager in the "crawl" options of Google Webmaster Tools. I have found it to be totally unreliable.
B) Rel=canonical. It is much more reliable than WMT but you still must rely on search engines to discover it and obey - which can be slow to take effect and is less than 100% effective.
I have not used robots.txt because I think that it would have similar performance to rel=canonical.
I believe that you should not trust search engines to do things for you that you can do for yourself with 100% reliability. So, I am doing...
C) Managing parameters on my server with .htaccess, so I have 100% control.
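To illustrate that last option, here is a minimal .htaccess sketch. It assumes Apache with mod_rewrite enabled and uses the parameter names from the question purely as an example; it 301s any request carrying a filter parameter back to the clean URL:

    RewriteEngine On
    # If the query string contains one of the filter parameters...
    RewriteCond %{QUERY_STRING} (^|&)(location|time|date)= [NC]
    # ...redirect to the same path with the query string stripped.
    # The trailing "?" clears the query string; on Apache 2.4+ the QSD flag does the same.
    RewriteRule ^(.*)$ /$1? [R=301,L]

Bear in mind that a hard redirect like this also removes the filtered views for human visitors, so it only fits where the parameters are disposable (tracking, sort order) or the filtering is handled some other way, for example client-side.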
-
I believe if you have set up the rel canonical correctly, there should ideally be no issue, but if you really do see some of your non-preferred versions indexed in Google, then you can go with the noindex idea.
When de-indexing pages you can go with any approach, but in my experience it is better to do it by using robots.txt.
I hope this is a direct and to-the-point opinion 🙂
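For reference, a robots.txt sketch along those lines, again assuming the location/time/date parameters from the question (Google supports the * wildcard here; other crawlers may not), would be:

    User-agent: *
    # Block crawling of the filtered views.
    Disallow: /*?location=
    Disallow: /*&location=
    Disallow: /*?time=
    Disallow: /*&time=
    Disallow: /*?date=
    Disallow: /*&date=

One caveat: Disallow only stops crawling, it is not a noindex, so URLs that are already indexed can linger as URL-only entries, and blocked pages can no longer pass along their rel=canonical signal.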