URL Parameters as a single solution vs Canonical tags
-
Hi all,
We run a classifieds platform in Spain (mercadonline.es) that has a lot of duplicate content. Most of it consists of URLs containing site parameters: multiple pages within the same subcategory, sorted by different fields such as price and type of ad. I believe that if I assign the correct behaviour to each parameter in Google Webmaster Tools, a lot of these duplicate issues will be resolved.
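To illustrate (these exact URLs are just examples), the same subcategory listing can be reached at:
mercadonline.es/anuncios-ciudad-real/?page=2
mercadonline.es/anuncios-ciudad-real/?sort=price
mercadonline.es/anuncios-ciudad-real/?sort=price&descending=true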
Still, a few questions remain:
- Once I set, e.g., the 'page' parameter and choose 'paginates' as its behaviour, should I let Googlebot decide whether to index these pages, or set them to 'no'? Since I've told Google Webmaster Tools which type of URLs contain this parameter, it will know these are relevant pages, yet not always completely different in content. Other URLs that contain 'sortby' don't differ in content at all, so I set these to 'sorting' as behaviour and to 'no' for Google crawling.
- Which parameter should I assign to 'search', i.e. the parameter that causes URLs to contain an internal search string? Since this search parameter changes all the time depending on user input, how can I choose the best one? I think I need 'specifies'?
- Do I still need to assign canonical tags to all of these URLs after this process, or is setting parameters, in my case, an alternative solution?
I can send more examples of the duplicates; most of them contain 'page', 'descending', 'sortby', etc.
Thank you for your help.
Ivor
-
Great! All clear to me now.
I'll let you know how things develop soon.
Thanks for your input!
Best,
Ivor
-
Hi Ivor,
I wouldn't pay much attention to those Google guidelines about duplicate content.
Yes, canonical tags are best practice, but what you're dealing with is dynamically generated query URLs from your CMS. If you opted to follow Google's guidelines on this, you'd have to either manually set canonical tags for each query as it is created, or set up a rule to do this automatically.
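As a rough sketch (the category URL here is just an example), such a rule would simply output the same tag in the head of every query variant of a listing page:
<!-- emitted on /anuncios-ciudad-real/?page=13&sort=price_true
     and on every other sorted or paginated variant of that category -->
<link rel="canonical" href="https://mercadonline.es/anuncios-ciudad-real/" />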
Both sound tricky to me, so I'd just stick with the robots.txt alterations you've made and you should be fine.
Make sure you set everything back to index, follow. At the moment you're giving the search engine instructions to ignore specific URLs in robots.txt and you're also doing this with the meta robots tag.
When this occurs, the search engine gets confused and then makes its own best judgement, as per the article you've referenced.
Best to keep it simple: leave everything index, follow, keep the robots.txt in place to block these URLs, and see how your results go.
It might also be a good idea to touch up the content on the page. I'd suggest about 250 words with your targeted keyword twice and 2-3 LSI keywords once each. You can put this at the bottom of the page, after the products, so it doesn't push them down. For more info on content, check out my blog post here: http://searchfactory.com.au/blog/optimise-content-marketing-writing-for-google-hummingbird-semantic-search/
All the best!
Stel (@StelinSEO )
-
Hi Stel,
It all seems to work fine. After waiting until this morning for the weekly Moz crawl, I noticed the technical issues have dropped almost completely. But I'm still confused about whether these pages should be set to "index, follow" or rather to "noindex, nofollow".
Right now, we have disallow commands in robots.txt, canonical tags, and noindex, nofollow tags all in place.
If you read Google's guidelines, they don't recommend blocking duplicate content in robots.txt, but seem to prefer using canonical tags only: https://support.google.com/webmasters/answer/66359
Google does not recommend blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools.
And with duplicate content not set to noindex, nofollow, they claim they will choose the right pages to display:
Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a "regular" and "printer" version of each article, and neither of these is blocked with a noindex meta tag, we'll choose one of them to list. In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results.
So if I read this correctly, I should perhaps set my tags to index, follow, and still keep the robots.txt commands and canonical rel tags?
Thanks a lot for your input.
Ivor
-
Hi Ivor,
The problem with Disallow: /*? is that it only blocks top-level queries like this: mercadonline.es/?page=13&sort=price_true, but it won't block this: mercadonline.es/anuncios-ciudad-real/?page=13&sort=price_true
So by adding a wildcard directory (i.e. Disallow: /*/*?) you will block queries that occur deeper in your URL structure, like the second example above.
You can indeed just block all queries if you like, but I'm not 100% sure what your structure is like. If you're sure it won't adversely affect any other pages, then Disallow: /*/*? will solve the sort, price and page issues you've highlighted.
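As a minimal sketch of how that would sit in your robots.txt (worth testing before you rely on it):
User-agent: *
# blocks query strings on URLs at least one directory deep,
# e.g. /anuncios-ciudad-real/?page=13&sort=price_true
Disallow: /*/*?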
Once you're happy with the robots.txt (I just had a look and it looks fine to me), run it through Screaming Frog and siteliner.com to see whether these URLs have been blocked and what duplicate content issues remain.
-
Thank you, Donford!
- Ivor
-
Hi Stel,
Thanks for your answer.
- Since we have already added Disallow: /*? to the robots.txt, will this already exclude all parameters? Or is it better to refine it as you describe, as follows:
Disallow: /*/*sort
Disallow: /*/*descending
Disallow: /*/*orderby
- Moreover, would I also have to add:
Disallow: /*/*page
Disallow: /*page
- Finally, we have search strings in our parameters; could we add these to our robots.txt as well, since this content changes all the time?
If you like, I can send you my robots.txt file in a PM.
Thanks a lot for your help!
Ivor
-
Hi Ivor,
I concur with donford's answer; this is definitely something that can be sorted out with the robots.txt file. However, I would suggest the following directives for robots.txt:
User-agent: *
Disallow: /*/page
Disallow: /*/sort
Disallow: /*/descending
My reason for suggesting the extra /* is that it will target URLs that appear at the second level or below.
I may be wrong, but it's best to test both using the robots.txt checker in Webmaster Tools.
This article will give you an overview of how the robots.txt checker works: https://support.google.com/webmasters/answer/6062598?hl=en
All you have to do is open the robots.txt checker (linked in that article), log in to Webmaster Tools, and paste the directives above into the text box. Then paste the following into the field that says "Enter a URL to test if it is blocked": anuncios-ciudad-real/?page=13&sort=price_true
Click the test button, and if it says BLOCKED you can add the directives to your robots.txt file, stored at the top level of your FTP server.
Feel free to Tweet me at @StelinSEO if you have any further issues!
All the best,
Stel
-
Hi Ivor,
This is a very good place for canonical tags. If you put the canonical tag on the root page, then you should be okay: when the page=2 or sort=Az parameters are added, the URL will still canonicalize to the root page. There is nothing wrong with a page's canonical tag pointing to itself, so there is little to worry about.
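For instance (URL is just an example), the root category page can carry a self-referencing tag, and that same tag stays in place when ?page=2 or ?sort=Az is appended:
<!-- identical on /anuncios-ciudad-real/ and on /anuncios-ciudad-real/?page=2 -->
<link rel="canonical" href="https://mercadonline.es/anuncios-ciudad-real/" />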
Fixing parameters in Google Webmaster Tools covers only one search engine; other crawlers won't know what Google sees, so it's best to fix it for everybody.
The other option would be to use a disallow rule in your robots.txt so the duplicate pages are not crawled at all, but I would advise trying canonical tags first. For example:
User-agent: *
Disallow: /*page
Disallow: /*sort
Hope this helps