Query string in URL - duplicate content?
-
Hi everyone, I would appreciate some advice on the following.
I have a page with some nice content on it, but it also has search functionality. When a search is run, a query string is appended to the URL, so I get something like mypage.php?id=20.
With so many possible URL variations, will each query string be seen as a different page? If so, I don't want duplicate content.
So am I best putting a canonical tag in the head of mypage.php to avoid Google seeing potential duplicate content?
Many thanks for all your advice.
-
Yes, the best way is to handle this via Webmaster Tools rather than via code on the site. In your Google Webmaster Tools account, go to "Site Configuration" and then "URL Parameters". It will show you a handful of dynamic URLs it has detected and how it is currently handling them. You can then manually add a dynamic parameter that exists on your site and tell Google how to handle it. They have added a bunch of new options and features to this tool that make it really useful now.
-
Could you explain more please? I've not tried that before.
Kind Regards,
-
You may not even need to bother with a canonical tag, depending on your setup.
You may be able to amend the parameter settings in Webmaster Tools instead.
-
Thanks for that.
So on mypage.php I will add a canonical tag which says mypage.php is the original page for the content.
This will then mean Google will only value mypage.php as the unique content, not any pages with a different query string, for example pages such as mypage.php?id=10&sport=football.
-
The short answer: yes and yes.
Anything that changes in the URL, even one character, will be seen as a new page.
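If you do go the canonical route, the tag in the head of mypage.php should always point at the clean URL, no matter which query string the page was requested with. Here is a minimal sketch of deriving that canonical URL server-side; the example.com hostname and the helper name are hypothetical, and the same idea works in PHP with parse_url():

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(requested_url: str) -> str:
    """Strip the query string and fragment so every parameterised
    variant (?id=20, ?id=10&sport=football, ...) maps back to one
    canonical URL."""
    parts = urlsplit(requested_url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Every search variant resolves to the same canonical page:
print(canonical_url("https://www.example.com/mypage.php?id=10&sport=football"))
# https://www.example.com/mypage.php
```

The resulting URL is what goes in the href of the `<link rel="canonical" ...>` tag in the page head.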