URL Parameters as a single solution vs Canonical tags
-
Hi all,
We are running a classifieds platform in Spain (mercadonline.es) that has a lot of duplicate content. The majority of our duplicate content consists of URLs that contain site parameters. In other words, they are the result of multiple pages within the same subcategory being sorted by different field names, like price and type of ad. I believe that if I assign the correct group of URLs to each parameter in Google Webmaster Tools, a lot of these duplicate issues will be resolved.
Still a few questions remain:
- Once I set, for example, the 'page' parameter and choose 'paginates' as its behaviour, do I let Googlebot decide whether to index these pages, or do I set them to 'no'? Since I've told Google Webmaster Tools what type of URLs contain this parameter, it will know that these are relevant pages, yet not always completely different in content. Other URLs that contain 'sortby' don't differ in content at all, so I set their behaviour to 'sorting' and set them to 'no' for Google crawling.
- What parameter can I assign to 'search', i.e. the parameter that causes the URLs to contain an internal search string? Since this search parameter changes all the time depending on user input, how can I choose the best setting? I think I need 'specifies'?
- Do I still need to assign canonical tags to all of these URLs after this process, or is setting parameters, in my case, an alternative solution to this problem?
I can send examples of the duplicates, but most of them contain 'page', 'descending', 'sort by', etc. values.
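For example, within one subcategory we end up with URL variations like these (the search parameter name is just an illustration, and the behaviours in brackets are the ones I'd assign in the parameter tool):
mercadonline.es/anuncios-ciudad-real/ (the main subcategory page)
mercadonline.es/anuncios-ciudad-real/?page=2 ('page', paginates)
mercadonline.es/anuncios-ciudad-real/?sortby=price&descending=true ('sortby' / 'descending', sorting)
mercadonline.es/anuncios-ciudad-real/?search=pisos (internal search string)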
Thank you for your help.
Ivor
-
Great! All clear to me now.
I'll let you know soon how things develop.
Thanks for your input!
Best,
Ivor
-
Hi Ivor,
I wouldn't pay much attention to those Google guidelines about duplicate content.
Yes, canonical tags are best practice, but what you're dealing with is dynamically generated query URLs from your CMS. If you opted to follow Google's guidelines on this, you'd have to either manually set canonical tags for each query as it is created, or set up a rule to do this automatically.
Both sound tricky to me, so I'd just stick with the robots.txt alterations you've made and you should be fine.
Make sure you set everything back to index, follow. At the moment you're giving the search engine instructions to ignore specific URLs in the robots.txt and you're also doing this via the meta robots tag.
When this happens the search engine gets confused and then makes its own best judgement, as per the article you've referenced.
Best to keep it simple: leave everything index, follow, keep the robots.txt in place to block these URLs, and see how your results go.
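To be concrete, a rough sketch of that combination (the Disallow pattern below is just an example along the lines of what you already have; adjust it to your own structure). Put
<meta name="robots" content="index, follow">
in the head of your pages (this is also what search engines assume if the tag is omitted), and keep something like this in robots.txt:
User-agent: *
Disallow: /*?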
Also might be a good idea to touch up your content on the page. I'd suggest about 250 words of content with your targeted keyword twice and 2-3 LSI keywords once each. You can put this at the bottom of the page, after the products, so it doesn't push your products down. For more info on content you can check out my blog post here: http://searchfactory.com.au/blog/optimise-content-marketing-writing-for-google-hummingbird-semantic-search/
All the best!
Stel (@StelinSEO)
-
Hi Stel,
It all seems to work fine. After waiting until this morning for the weekly Moz crawl, I noticed the technical issues dropped almost completely. But I'm still confused about whether these pages should be set to "index, follow" or rather to "noindex, nofollow"?
Right now, we have set disallow directives in robots.txt, canonical tags, and noindex, nofollow tags.
If you read Google's guidelines, they don't recommend blocking duplicate content in robots.txt, but seem to prefer using canonical tags only: https://support.google.com/webmasters/answer/66359
"Google does not recommend blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools."
And with duplicate content not set to noindex, nofollow, they claim they would choose the right pages to display:
"Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a "regular" and "printer" version of each article, and neither of these is blocked with a noindex meta tag, we'll choose one of them to list. In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results."
So if I read this correctly, I should perhaps set my tags to index, follow? And still keep the robots.txt directives and canonical rel tags?
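(For reference, the canonical tags we have on the parameter URLs look roughly like this; the path and protocol are just for illustration. On mercadonline.es/anuncios-ciudad-real/?page=13&sort=price_true the head contains
<link rel="canonical" href="http://mercadonline.es/anuncios-ciudad-real/" />
pointing back to the clean subcategory URL.)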
Thanks a lot for your input.
Ivor
-
Hi Ivor,
The problem with Disallow: /*? is that it only blocks top-level queries like this: mercadonline.es/?page=13&sort=price_true, but it won't block this: mercadonline.es/anuncios-ciudad-real/?page=13&sort=price_true
So by adding a wildcard directory (i.e. Disallow: /*/*?), this will block queries that occur at any level of your URL structure, like the second example above.
You can indeed just block all queries if you like, but I'm not 100% sure what your structure is like. If you're certain it won't adversely affect any other pages, then Disallow: /*/*? will solve the sort, price and page issues you've highlighted.
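Put together, the two patterns being discussed look like this in robots.txt (worth confirming in a robots.txt tester before relying on either, since wildcard handling can vary between crawlers):
User-agent: *
# the rule already in place: matches URLs with a query string
Disallow: /*?
# the suggested addition: matches query strings on URLs at least one directory deep,
# e.g. /anuncios-ciudad-real/?page=13&sort=price_true
Disallow: /*/*?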
Once you're happy with the robots.txt (just had a look and it looks fine to me), run it through Screaming Frog and siteliner.com and see if these URLs have been blocked and what duplicate content issues exist.
-
Thank you, Donford!
- Ivor
-
Hi Stel,
Thanks for your answer.
- Since we have already added Disallow: /*? to the robots.txt, will this already exclude all parameters? Or is it better to refine this as you describe, as follows:
Disallow: /*/*sort
Disallow: /*/*descending
Disallow: /*/*orderby
- Moreover, would I have to add as well:
Disallow: /*/*page
Disallow: /*page
- Finally, since we have search strings in our parameters, could we add these to our robots.txt as well? This content changes all the time.
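For example (the parameter name here is just an illustration, not necessarily our actual one): for a search URL like mercadonline.es/anuncios-ciudad-real/?search=pisos, would we block it with something like Disallow: /*search= ?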
If you like, I can send you my robots.txt file in a PM.
Thanks a lot for your help!
Ivor
-
Hi Ivor,
I concur with donford's answer; this is definitely something that can be sorted out with the robots.txt file. However, I would suggest using the following directives in robots.txt:
User-agent: *
Disallow: /*/*page
Disallow: /*/*sort
Disallow: /*/*descending
My reason for suggesting the extra /* is that this will target URLs that appear on the second or lower level.
I may be wrong, but it's best to try both by using the robots.txt checker in Webmaster Tools.
This article will give you an overview of how the robots.txt checker works: https://support.google.com/webmasters/answer/6062598?hl=en
All you have to do is click the link in that post that says robots.txt checker, log in to Webmaster Tools and paste the directives above into the text box. Then paste the following into the field below that says 'Enter a URL to test if it is blocked': anuncios-ciudad-real/?page=13&sort=price_true
Click the test button, and if it says BLOCKED you can add this to your robots.txt file, stored at the top level of your server.
Feel free to Tweet me at @StelinSEO if you have any further issues!
All the best,
Stel
-
Hi Ivor,
This is a very good use case for canonical tags. If you put the canonical tag on the root page, then you should be okay: when the page=2 or sort=Az parameters are added, the page will still canonicalize to the root page. There is nothing wrong with a page's canonical tag pointing to itself, so there is little to worry about.
Fixing parameters in Google only covers one of the search engines; all the other crawlers won't know what Google sees, so it is best to fix it for everybody.
The other option would be to use an exclusion in your robots.txt so the pages are not seen as duplicates, but I would advise using canonicals first.
User-agent: *
Disallow: /*page
Disallow: /*sort
For example.
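A quick sketch of what that looks like in practice (the path is just an example): the subcategory page carries one canonical tag in its head, and the same tag is served whether or not parameters are appended, so /anuncios-ciudad-real/ and /anuncios-ciudad-real/?page=2&sort=Az would both serve:
<link rel="canonical" href="http://mercadonline.es/anuncios-ciudad-real/" />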
Hope this helps