My content has been shared across different websites - how do I become the canonical link?
-
I wanted to ask about canonical links. Basically, I produced some content for my website: an interview with a famous band who were playing at a festival that summer. I told the festival about it, and they asked for exclusive dibs on releasing the piece in exchange for linking back to our domain. I said yes, as I knew the link would be a good one. So the interview got posted on their site, I then posted it on my website's blog, and a month later the local newspaper also featured it on their website. Is there some way to establish my page as the canonical source for this interview (which has been copied word for word) without getting the other websites to edit their code and add a canonical reference? I did ask them, but my request was unsuccessful.
I'm thinking there might be no way to claim this content as my website was not the first domain to post it? Any thoughts appreciated.
Thanks
-
Here are a few things that many people do not understand.
-
The date of posting does not indicate who owns the content.
-
The date that Google finds the content does not indicate who posted it first or who owns it.
-
Ownership is independent of date of posting and date of Google discovery.
-
Google does not always grant best rankings to "who they discovered first". The rankings often go to "who is the most powerful".
-
If you file a DMCA takedown against someone who has documented permission to post the content, and they decide to sue you for having that content taken down, you are probably going to lose, and you might have to pay more than you expect in damages and attorney fees.
-
The good news is that legal advice on copyright often costs a lot less than you expect, and a hell of a lot less than getting sued. Know what you are doing and the potential consequences before filing a DMCA notice.
-
There are two ways to get the canonical applied. A) The webmaster of the website that is publishing the content can insert a canonical tag into the &lt;head&gt; of the HTML of the page. It should read like this (with your own URL in place of the example): &lt;link rel="canonical" href="https://www.example.com/band-interview/" /&gt;. B) The webmaster of the website that is publishing the content can apply rel=canonical as an HTTP header, for example via .htaccess on an Apache server.
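For option B, a minimal .htaccess sketch might look like the following. This assumes an Apache server with mod_headers enabled; the file name and URL are placeholders, not the asker's actual paths:

```apache
# Send a rel=canonical HTTP header pointing at the original page.
# This is especially useful for non-HTML files (e.g. a PDF copy of
# the interview) where no <link> tag can be placed in a <head> section.
<Files "band-interview.pdf">
  Header add Link '<https://www.example.com/band-interview/>; rel="canonical"'
</Files>
```

Either method only works if the webmaster of the *copying* site applies it, which is why their cooperation is required.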
-
The date you posted your content can help signal to Google that you are the original source.
If the content has been copied, contact the webmaster and ask for a reference in the form of a canonical tag.
Otherwise, you can report copyright infringement to Google:
https://www.google.com/webmasters/tools/dmca-notice (Digital Millennium Copyright Act)
Related Questions
-
Googlebot and other spiders are searching for odd links in our website trying to understand why, and what to do about it.
I recently began work on an existing WordPress website that was revamped about 3 months ago: https://thedoctorwithin.com. I'm a bit new to WordPress, so I thought I should reach out to some of the experts in the community. Checking 'Not found' crawl errors in Google Search Console, I notice many irrelevant links that are not present in the website, nor the database, as near as I can tell. When checking the source of these irrelevant links, I notice they're all generated from various pages in the site, as well as from non-existent pages allegedly in the site, even though those pages have never existed. For instance: https://thedoctorwithin.com/category/seminars/newsletters/page/7/newsletters/page/3/feedback-and-testimonials/ allegedly linked from: https://thedoctorwithin.com/category/seminars/newsletters/page/7/newsletters/page/3/ (doesn't exist). In other cases, these goofy URLs are even linked from the sitemap. BTW, all the URLs in the sitemap are valid URLs. Currently, the site has a flat structure: nearly all the content is merely URL/content/ without further breakdown (or subdirectories). Previous site versions had a more varied page organization, but what I'm seeing doesn't seem to reflect the current page organization, nor the previous one. I had a similar issue due to use of Divi's search feature, and ended up with some pretty deep non-existent links branching off of /search/, such as: https://thedoctorwithin.com/search/newsletters/page/2/feedback-and-testimonials/feedback-and-testimonials/online-continuing-education/consultations/ allegedly linked from: https://thedoctorwithin.com/search/newsletters/page/2/feedback-and-testimonials/feedback-and-testimonials/online-continuing-education/ (doesn't exist). I blocked the /search/ branches via robots.txt. No real loss, since neither /search/ nor any of its subdirectories are valid. There are numerous pre-existing categories and tags on the site; the categories and tags aren't used as pages.
I suspect Google (and other engines) might be creating arbitrary paths from these. Looking through the site's 404 errors, I'm seeing the same behavior from Bing, Moz, and other spiders as well. I suppose I could use Search Console to remove URL/category/ and URL/tag/, and do the same for other legitimate spiders and search engines. Perhaps it would be better to use mod_rewrite to lead spiders to pages that actually do exist. Looking forward to suggestions about the best way to deal with these errant crawls, and also curious to learn why they are occurring. Thank you.
Technical SEO | linkjuiced
Canonicals being ignored
Hi, I've got a site that I'm working with that has 2 ways of viewing the same page, a property details page. Basically, one version is the long version: /property/Edinburgh/Southside-Newington/6CN99V, and the other is just the short version with the code only on the end: /6cn99v. There is a canonical in place from the short version to the long version, and the sitemap.xml only lists the long version. HOWEVER, Google is indexing the short version in the majority of cases (not all, but the majority). The canonical points to http://www.website.com/property/Edinburgh/Southside-Newington/6CN99V (obviously "www.website.com" stands in for the URL of the site itself). Any thoughts?
Technical SEO | squarecat.ben
Confused on footer links (Which are best practices for footer links on other websites?)
Hello folks, we are an eCommerce web design and development company, and we add dofollow links to our website on every project we complete, using specific keywords as anchor text. The concern is that we are now seeing a huge number of backlinks generated from a single root domain for a particular keyword in Webmaster Tools. What is the best way to handle this? Should we add a nofollow attribute to the links, or use our company logo as the link?
Technical SEO | CommercePundit
Too many links?
Hello! I've just started with SEOmoz, and am getting an error about too many links on a few of my blog posts. It's pages with high numbers of comments, and the links are coming from each commenter's profile (hopefully that makes sense; they're not just random stuffed links). Is there a way to keep this from causing a problem? Thanks!
Technical SEO | PaulineMagnusson
Duplicate Page Content / Rel Canonical
Hi, the diagnostics show me that I have 590 pages with duplicate page content, but when it shows rel=canonical I have over 1000. Does that mean I have no duplicate page content problem? Please help.
Technical SEO | Joseph-Green-SEO
No crawl code for pages of helpful links vs. no follow code on each link?
Our college website has many "owners" who want pages of "helpful links," resulting in a large number of outbound links. If we add code to the pages to prevent them from being crawled, will that be just as effective as making every individual link nofollow?
Technical SEO | LAJN
Client has 3 websites, for various locations & duplicate content is a big issue...Is my solution the best?
Hi guys, I have a client who has 3 websites for different locations in the same state in Australia. Obviously this is not best practice, but in our meeting he said that each area is quite particular about where they do business. What he means is that people from one area want to do business with a website from that particular area. He has 3 domains, and we have duplicate content issues. We are solving these at the moment with the canonical tag; however, they are redesigning the site soon. My suggestion is that we have 1 domain, with subdomains for the other 2 areas. This way, people from each area will see the company is from their area, and we have 1 domain to optimise and build domain authority for. Has anyone else come across this, and is my solution the best approach? Thanks! Jon
Technical SEO | Jon_bangonline
Local Search | Website Issue with Duplicate Content (97 pages)
Hi SEOmoz community. I have a unique situation where I'm evaluating a website that is trying to optimize for local search, targeting 97 surrounding towns in its geographical area. What is unique about this situation is that the site ranks on the 1st and 2nd pages of the SERPs for its targeted keywords despite having duplicate content on 97 pages. I ran the website's URL through SEOmoz's Crawl Test tool, and it verified that the site has duplicate content on 97 pages and too many links (97) per page. Summary: the website has 97 duplicate pages, one for each town, with each individual page listing and repeating all 97 surrounding towns, and each town linking to another duplicate page. Question: I know the site will eventually not get indexed by the search engines, and I'm not sure of the best way to resolve this problem. Any advice?
Technical SEO | ToddSEOBoston