Duplicate Page Content / Rel=Canonical
-
My SEOmoz crawl is showing duplicate content on my site. The pages flagged are two articles I submitted to Submit Your Article (an article submission service). I put their code into my pages, i.e.:
"
<noscript><b>This article will only display in JavaScript enabled browsers.</b></noscript>
"
So do I need to delete these blog posts, since they are showing up as duplicate content?
I am having a difficult time understanding rel=canonical. Isn't that for duplicate content within a single site? If so, I couldn't use rel="canonical" in this instance, could I?
What is the best way to feature an article or press release written for another site that you want your clients to see? Rewriting it seems ridiculous for a small business like ours. Can we just present the link?
Thank you.
-
I'd agree with Irving that NOINDEX will solve the problem (remove the risk), but I think you could use a cross-domain rel=canonical. This sounds like a syndication type of situation - you're not claiming credit for the content, but you think it has value to your visitors. So, give the credit to the originating site.
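A cross-domain canonical is just a single link tag in the head of the syndicated copy on your site, pointing at the original article's URL (the URLs below are placeholders, not your actual pages):

```html
<!-- In the <head> of the syndicated copy on YOUR site -->
<!-- Points Google at the originating site's version as the preferred URL -->
<link rel="canonical" href="http://www.example-originating-site.com/original-article" />
```

Note that cross-domain rel=canonical is a hint, not a directive - Google can choose to ignore it.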
There was a syndication-source option, but Google has quietly deprecated that, unfortunately.
-
I have a feeling that as SEOmoz continues to crawl my site I will see more of these errors, as these are from some time ago and I know I've submitted many, many articles.
Any Comments on this:
What is the best way to feature an article or press release written for another site that you want your clients to see? Rewriting it seems ridiculous for a small business like ours. Can we just present the link?
-
Noindex,follow the page and you're fine - don't cross-domain canonicalize.
And if it's only two pages out of your entire site, don't worry about it - unless you have a 5-page site, hehe.
Also, I would recode the article onto your own site instead of pulling it from submityourarticle.com in a script.
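The noindex,follow suggestion above amounts to one meta tag in the head of each syndicated article page - the page stays out of the index, but crawlers still follow its links:

```html
<!-- Keep this page out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex,follow">
```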