URL Parameters as a single solution vs Canonical tags
-
Hi all,
We are running a classifieds platform in Spain (mercadonline.es) that has a lot of duplicate content. The majority of our duplicate content consists of URLs that contain site parameters. In other words, they are the result of multiple pages within the same subcategory that are sorted by different field names like price and type of ad. I believe that if I assign the correct group of URLs to each parameter in Google Webmaster Tools, a lot of these duplicate issues will be resolved.
Still a few questions remain:
- Once I set, e.g., the 'page' parameter and choose 'paginates' as the behaviour, do I let Googlebot decide whether to index these pages, or do I set them to 'no'? Since I told Google Webmaster Tools what type of URLs contain this parameter, it will know that these are relevant pages, yet not always completely different in content. Other URLs that contain 'sortby' don't differ in content at all, so I set these to 'sorting' as the behaviour and set them to 'no' for Google crawling.
- What setting should I use for the 'search' parameter, i.e. the parameter that causes the URLs to contain an internal search string? Since this search parameter changes all the time depending on user input, how can I choose the best option? I think I need 'specifies'?
- Do I still need to assign canonical tags to all of these URLs after this process, or is setting parameters in my case an alternative solution to this problem?
I can send examples of the duplicates, but most of them contain 'page', 'descending', 'sortby', etc. values.
Thank you for your help.
Ivor
-
Great! All clear to me now.
I'll let you know soon how things develop.
Thanks for your input!
Best,
Ivor
-
Hi Ivor,
I wouldn't pay much attention to those Google guidelines about duplicate content.
Yes, canonical tags are best practice, but what you're dealing with is dynamically generated query URLs from your CMS. If you opted to follow Google's guidelines on this, you'd have to either manually set canonical tags for each query as it is created, or set up a rule to do this automatically.
Both sound tricky to me so I'd just stick with the robots.txt alterations you've made and you should be fine.
Make sure you set everything back to index, follow. Right now you're giving the search engine instructions to ignore specific URLs in the robots.txt and you're also doing this via the meta robots tag.
When this occurs, the search engine gets confused and then makes its own best judgement, as per the article you've referenced.
Best to keep it simple: leave everything index, follow, keep the robots.txt in place to block these URLs, and see how your results go.
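For reference, the 'index, follow' state is just the default meta robots tag in the page head (a minimal sketch; where you edit this depends on your CMS templates, and omitting the tag entirely has the same effect):
<meta name="robots" content="index, follow">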
Also, it might be a good idea to touch up your content on the page. I'd suggest about 250 words of content with your targeted keyword twice and 2-3 LSI keywords once each. You can put this at the bottom of the page, after the products, so it doesn't push your products down. For more info on content you can check out my blog post here: http://searchfactory.com.au/blog/optimise-content-marketing-writing-for-google-hummingbird-semantic-search/
All the best!
Stel (@StelinSEO)
-
Hi Stel,
It all seems to work fine. After waiting until this morning for the weekly Moz crawl, I noticed the technical issues dropped almost completely. But I'm still confused about whether these pages should be set to "index, follow" or rather to "noindex, nofollow"?
Right now, we have disallow rules in robots.txt, canonical tags, and noindex, nofollow tags all in place.
If you read Google's guidelines, they don't recommend blocking duplicate content in robots.txt but seem to prefer using canonical tags only: https://support.google.com/webmasters/answer/66359
"Google does not recommend blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools."
And with duplicate content not set to noindex, nofollow, they claim they will choose the right pages to display:
"Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a "regular" and "printer" version of each article, and neither of these is blocked with a noindex meta tag, we'll choose one of them to list. In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results."
So if I read this correctly, I should perhaps set my tags to index, follow? And still keep the robots.txt rules and the canonical tags?
Thanks a lot for your input.
Ivor
-
Hi Ivor,
The problem with Disallow: /*? is that it only blocks top-level queries like this: mercadonline.es/?page=13&sort=price_true, but it won't block this: mercadonline.es/anuncios-ciudad-real/?page=13&sort=price_true
So by adding a wildcard directory (i.e. Disallow: /*/*?), you will block queries that occur at deeper levels of your URL structure, like the second example above.
You can indeed just block all queries if you like, but I'm not 100% sure what your structure is like. If you're sure it won't adversely affect any other pages, then Disallow: /*/*? will solve the sort, price and page issues you've highlighted.
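For reference, a minimal sketch of the combined file being discussed (directives as suggested in this thread, with illustrative URLs; wildcard matching can be broader than you expect, so verify each pattern in the robots.txt Tester before relying on it):
User-agent: *
# Suggested for queries on the top-level URL, e.g. mercadonline.es/?page=13&sort=price_true
Disallow: /*?
# Suggested for queries deeper in the URL structure,
# e.g. mercadonline.es/anuncios-ciudad-real/?page=13&sort=price_true
Disallow: /*/*?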
Once you're happy with the robots.txt (I just had a look and it looks fine to me), run it through Screaming Frog and siteliner.com and see whether those URLs have been blocked and what duplicate content issues remain.
-
Thank you, donford!
- Ivor
-
Hi Stel,
Thanks for your answer.
- Since we have already added Disallow: /*? to the robots.txt, will this already exclude all parameters? Or is it better to refine this as you describe, as follows:
Disallow: /*/*sort
Disallow: /*/*descending
Disallow: /*/*orderby
- Moreover, would I have to add the following as well:
Disallow: /*/*page
Disallow: /*page
- Finally, if we have search strings in our parameters, could we add these to our robots.txt as well, since this content changes all the time? See the sketch below for what I mean.
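For instance, assuming our internal search uses a query key like 'q' (just an assumption; ours may be named differently), I imagine something along these lines:
# Only catches URLs where the search string is the first parameter (?q=...);
# a key appearing later in the query string (&q=...) would need its own rule
Disallow: /*?q=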
If you like, I can send you my robots.txt file in a PM.
Thanks a lot for your help!
Ivor
-
Hi Ivor,
I concur with donford's answer; this is definitely something that can be sorted out in the robots.txt file. However, I would suggest using the following directives for robots.txt:
User-agent: *
Disallow: /*/page
Disallow: /*/sort
Disallow: /*/descending
My reason for suggesting the extra /* is that it will target URLs that appear at the second level or below.
I may be wrong, but it's best to try both by using the robots.txt checker in Webmaster Tools.
This article will give you an overview of how the robots.txt checker works: https://support.google.com/webmasters/answer/6062598?hl=en
All you have to do is click the link in the post that says robots.txt checker, log in to Webmaster Tools, and paste the directives above into the text box. Then paste the following into the field below that says 'Enter a URL to test if it is blocked': anuncios-ciudad-real/?page=13&sort=price_true
Click the test button, and if it says BLOCKED you can add this to your robots.txt file, stored at the top level of your server.
Feel free to Tweet me at @StelinSEO if you have any further issues!
All the best,
Stel
-
Hi Ivor,
This is a very good place for canonical tags. If you put the canonical tag on the root page, then you should be okay: when the page=2 or sort=az parameters are added, it will still canonicalise to the root page. There is nothing wrong with a page's canonical tag pointing to itself, so there is little to worry about.
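For illustration, a minimal sketch of what that could look like, using a category path from earlier in this thread (the URL is just an example): the root category page carries a self-referencing canonical tag, and because the parameterised versions serve the same template, they output the same tag:
<!-- Output both on /anuncios-ciudad-real/ and on /anuncios-ciudad-real/?page=2&sort=az -->
<link rel="canonical" href="http://mercadonline.es/anuncios-ciudad-real/" />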
Fixing parameters in Google Webmaster Tools only covers one search engine; all the other crawlers won't know what Google sees, so it is best to fix it for everybody.
The other option would be to use an exclusion in your robots.txt so the pages are not seen as duplicates, but I would advise using canonical tags first. For example:
User-agent: *
Disallow: /*page
Disallow: /*sort
Hope this helps