URL Parameters as a single solution vs Canonical tags
-
Hi all,
We are running a classifieds platform in Spain (mercadonline.es) that has a lot of duplicate content. The majority of our duplicate content consists of URLs that contain site parameters; in other words, they are the result of multiple pages within the same subcategory being sorted by different fields, like price and type of ad. I believe that if I assign the correct group of URLs to each parameter in Google Webmaster Tools, a lot of these duplicate issues will be resolved.
Still a few questions remain:
- Once I set, for example, the 'page' parameter and choose 'Paginates' as the behaviour, should I let Googlebot decide whether to index these pages, or do I set them to 'No'? Since I've told Google Webmaster Tools what type of URLs contain this parameter, it will know that these are relevant pages, yet not always completely different in content. Other URLs that contain 'sortby' don't differ in content at all, so I set these to 'Sorting' as the behaviour and set them to 'No' for Google crawling.
- Which parameter should I assign to 'search', i.e. the parameter that causes the URLs to contain an internal search string? Since this search parameter changes all the time depending on user input, how can I choose the best one? I think I need 'Specifies'?
- Do I still need to assign canonical tags to all of these URLs after this process, or is setting parameters, in my case, an alternative solution to this problem?
I can send examples of the duplicates, but most of them contain 'page', 'descending', 'sort by', etc. values.
Thank you for your help.
Ivor
-
Great! All clear to me now.
I'll let you know how things develop soon.
Thanks for your input!
Best,
Ivor
-
Hi Ivor,
I wouldn't pay much attention to those Google guidelines about duplicate content.
Yes, canonical tags are best practice, but what you're dealing with is dynamically generated query URLs from your CMS. If you opted to follow Google's guidelines on this, you'd have to either manually set canonical tags for each query as it is created or set up a rule to do this automatically (the sketch below shows the kind of tag such a rule would output).
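For illustration, such a rule would typically output the parameter-free URL as the canonical on every query variation, e.g. something like this on one of your sorted listing pages (just a sketch based on the URL patterns mentioned in this thread, not your actual markup):
<link rel="canonical" href="http://mercadonline.es/anuncios-ciudad-real/" />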
Both sound tricky to me so I'd just stick with the robots.txt alterations you've made and you should be fine.
Make sure you set everything back to index, follow, because at the moment you're giving the search engine instructions to ignore specific URLs in the robots.txt and you're also doing the same with the meta robots tag.
When this occurs, the search engine gets confused and then makes its own best judgement, as per the article you've referenced.
Best to keep it simple: leave everything index, follow, keep the robots.txt in place to block these URLs, and see how your results go.
It might also be a good idea to touch up the content on the page. I'd suggest about 250 words of content with your targeted keyword twice and 2-3 LSI keywords once each. You can put this at the bottom of the page, after the products, so it doesn't push them down. For more info on content you can check out my blog post here: http://searchfactory.com.au/blog/optimise-content-marketing-writing-for-google-hummingbird-semantic-search/
All the best!
Stel (@StelinSEO)
-
Hi Stel,
It all seems to work fine. After waiting until this morning for the weekly Moz crawl, I noticed the technical issues dropped almost completely. But I'm still confused about whether these pages should be set to "index, follow" or rather to "noindex, nofollow".
Right now, we have set disallow rules in robots.txt, canonical tags, and noindex, nofollow tags.
If you read Google's guidelines, they don't recommend blocking duplicate content in robots.txt but seem to prefer using canonical tags only: https://support.google.com/webmasters/answer/66359
Google does not recommend blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools.
And with duplicate content not set to noindex, nofollow, they claim they would choose the right pages to display:
Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a "regular" and "printer" version of each article, and neither of these is blocked with a noindex meta tag, we'll choose one of them to list. In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results.
So if I read this correctly, I should perhaps set my tags to index, follow? And still keep the robots.txt rules and rel="canonical" tags?
Thanks a lot for your input.
Ivor
-
Hi Ivor,
The problem with Disallow: /*? is that it only blocks top-level queries like this: mercadonline.es/?page=13&sort=price_true, but it won't block this: mercadonline.es/anuncios-ciudad-real/?page=13&sort=price_true
So by adding a wildcard directory (i.e. Disallow: /*/*?), you will block queries that occur at any level of your URL structure, like the second example above.
You can indeed just block all queries if you like, but I'm not 100% sure what your structure is like. If you're sure it won't adversely affect any other pages, then Disallow: /*/*? will solve the sort, price and page issues you've highlighted.
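For reference, the combined rules might look something like this (a minimal sketch of the directives discussed above, not a tested configuration, so check it against your own URL structure first):
User-agent: *
Disallow: /*?
Disallow: /*/*?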
Once you're happy with the robots.txt (just had a look and it looks fine to me), run the site through Screaming Frog and siteliner.com and see whether these URLs have been blocked and what duplicate content issues remain.
-
Thank you, Donford!
- Ivor
-
Hi Stel,
Thanks for your answer.
- Since we have already added Disallow: /*? to the robots.txt, will this already exclude all parameters? Or is it better to refine it as you describe, as follows:
Disallow: /*/*sort
Disallow: /*/*descending
Disallow: /*/*orderby
- Moreover, would I have to add the following as well:
Disallow: /*/*page
Disallow: /*page
- Finally, since we have search strings in our parameters, could we add these to our robots.txt as well? This content changes all the time.
If you like, I can send you my robots.txt file in a PM.
Thanks a lot for your help!
Ivor
-
Hi Ivor,
I concur with donford's answer; this is definitely something that can be sorted out with the robots.txt file. However, I would suggest the following directives for robots.txt:
User-agent: *
Disallow: /*/page
Disallow: /*/sort
Disallow: /*/descending
My reason for suggesting the extra /* is that this will target URLs that appear at the second level or below.
I may be wrong, but it's best to try both by using the robots.txt checker in Webmaster Tools.
This article will give you an overview of how the robots.txt checker works: https://support.google.com/webmasters/answer/6062598?hl=en
All you have to do is click the link in the post that says robots.txt checker, log in to Webmaster Tools and paste the directives above into the text box. Then paste the following into the field below that says 'Enter a URL to test if it is blocked': anuncios-ciudad-real/?page=13&sort=price_true
Click the Test button and, if it says BLOCKED, you can add these directives to your robots.txt file, stored at the top level of your FTP server.
Feel free to Tweet me at @StelinSEO if you have any further issues!
All the best,
Stel
-
Hi Ivor,
This is a very good place for canonical tags. If you put the canonical tag on the root page, then you should be okay: when the page=2 or sort=Az parameters are added, the page will still canonical to the root page. There is nothing wrong with a page's canonical tag pointing to itself, so there is little to worry about.
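For example (a sketch using the URL patterns from this thread), mercadonline.es/anuncios-ciudad-real/, mercadonline.es/anuncios-ciudad-real/?page=2 and mercadonline.es/anuncios-ciudad-real/?sort=Az could all carry the same tag:
<link rel="canonical" href="http://mercadonline.es/anuncios-ciudad-real/" />
That way the parameter variants consolidate to the root page, and the root page simply canonicals to itself.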
Fixing parameters in Google only covers one of the search engines; all the other crawlers won't know what Google sees, so it is best to fix it for everybody.
The other option would be to use an exclusion in your robots.txt so the pages are not seen as duplicates, but I would advise using canonicals first. For example:
User-agent: *
Disallow: /*page
Disallow: /*sort
Hope this helps