Questions about duplicate photo content?
-
I know that Google is a mystery, so I'm not sure there are answers to these questions, but I'm going to ask anyway! I recently realized that Google is not happy with duplicate photo content. I'm a photographer and have sold many photos in the past (while retaining the rights to them) that I am now using on my site. My recent revelation means that I'm now taking down all of these photos. So I've been reverse image searching all of my photos to see if I let anyone else use them first, and in the course of this I found that many of my photos are being used by other sites on the web. So my questions are:
For photos that I used first and others have stolen: if I edit these photos (to add copyright info) and then re-upload them, will the sites that are using these images then get credit for having used the original image first?
If I have a photo on another one of my own sites and I take it down, can I safely use that photo on my main site, or will Google retain the knowledge that it's been used somewhere else first?
If I sold a photo and it's being used on another site, can I safely use a different photo from the same series that is almost exactly the same? I'm unclear on what data from the photo Google is matching, and whether it can tell the difference between photos that were taken a few seconds apart.
-
I do now see where he also claimed, vice versa, that using originals won't help either.
Why would Matt or Rand mention it? They don't work for Google; they won't know everything.
But he also said that was to the best of his knowledge at that time, and that it might change in the future, and that was two years ago. So I would still try to use original images if you can.
-
I don't know where you got the idea that Google is not happy with duplicate photo content, but there is no such thing. If there were, Matt or Rand would have already mentioned it. But there isn't, and at the very least I have already provided evidence to support my claim. Would you rather rely on other presumptions?
FYI, even duplicate content can still stand on its own in Google. You see, content and photos are meant to be shared, so as long as you're not doing spammy things on the web, Google has no reason to punish you. Cheer up!
-
But that video is two years old, and in it he says that checking for duplicate photo content could be a good idea for a future quality signal. So I'm not sure that it's still relevant. But I guess we can't prove it one way or the other!
-
But they may reward you for unique images.
-
I think you might want to read this: https://www.plagiarismtoday.com/2013/07/29/google-we-dont-penalize-duplicate-images/.
-
Why do you think Google doesn't penalize for duplicate photo content? It seems like it would be a great way to find scrapers and those with low-value content.
-
Hi Lina,
I understand your trouble, Lina, but worry not, because Google doesn't penalize duplicate photo content. You can, however, optimize your photos so they can be found easily by adding a short but concise title tag and meta description.
You don't need to edit the photos, but you're free to add copyright info. However, I don't think it is necessary for the photos you sold.
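If you do want to add copyright info without altering the image itself, one option is to write it into the photo's EXIF metadata. Below is a minimal sketch using the third-party piexif library; the file name and copyright text are hypothetical placeholders, and any EXIF/IPTC editor would do the same job.

```python
import piexif

filename = "beach_series_01.jpg"  # hypothetical photo

# Load the existing EXIF block, set the Artist and Copyright tags,
# and write the block back into the same JPEG without re-encoding pixels.
exif_dict = piexif.load(filename)
exif_dict["0th"][piexif.ImageIFD.Artist] = "Lina Example"
exif_dict["0th"][piexif.ImageIFD.Copyright] = "(c) Lina Example Photography"
piexif.insert(piexif.dump(exif_dict), filename)
```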
It is also not necessary to take down existing photos from one of your sites. But if it were me, I would add a referral link (stating that readers can access more photos, or the original versions, there) pointing to your main site. Not only will your audience take notice of your main site, it may also improve your main site's ranking in search results.
True, Google can tell the difference between photos that were taken a few seconds apart, but there is no reason it should be a big issue. Photos were meant to be shared from the start.
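Google hasn't published how it matches images, so the following is only an illustration of how near-duplicate image detection commonly works: perceptual hashing. A minimal sketch with the imagehash library (the file names are hypothetical) shows that two frames shot a few seconds apart usually produce fingerprints that are close but not identical.

```python
from PIL import Image  # pip install pillow imagehash
import imagehash

# Two hypothetical frames from the same series, shot seconds apart.
hash_a = imagehash.phash(Image.open("series_shot_01.jpg"))
hash_b = imagehash.phash(Image.open("series_shot_02.jpg"))

# Subtracting two hashes gives the Hamming distance between the 64-bit
# fingerprints; a small distance means "visually near-identical".
print(hash_a, hash_b, hash_a - hash_b)
```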
Hmm, I'm getting the impression that you want to build an audience for your main site right away. For that, you might try Google AdWords to bring in visitors faster.
Hope this helps.
Related Questions
-
How to solve this issue and avoid duplicated content?
My marketing team would like to serve up 3 pages of similar content: www.example.com/one, www.example.com/two, and www.example.com/three. However, the challenge is that they'd like to have only one page, with three different titles and images based on the user's entry point (one, two, or three). To avoid duplicated pages, how would you suggest this best be handled?
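One common pattern, shown as a minimal sketch only (the Flask stack, the titles, and the choice of /one as the preferred URL are assumptions, not anything stated in the question), is to serve all three entry points from a single handler, vary the title per entry point, and have every variant declare the same canonical URL so only one version is eligible to be indexed.

```python
from flask import Flask, abort, render_template_string

app = Flask(__name__)

TITLES = {"one": "Title One", "two": "Title Two", "three": "Title Three"}  # hypothetical titles

PAGE = """<!doctype html>
<title>{{ title }}</title>
<link rel="canonical" href="https://www.example.com/one">
<p>Shared body content for all three entry points.</p>"""

@app.route("/<variant>")
def landing(variant):
    # Unknown entry points 404; known ones share one body and one canonical.
    if variant not in TITLES:
        abort(404)
    return render_template_string(PAGE, title=TITLES[variant])
```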
Intermediate & Advanced SEO | JoelHer
-
Medical / Health Content Authority - Content Mix Question
Greetings, I have an interesting challenge for you. Well, I suppose "interesting" is an understatement, but here goes. Our company is a women's health site. However, over the years our content mix has grown to nearly 50/50 between unique health / medical content and general lifestyle/DIY/well being content (non-health). Basically, there is a "great divide" between health and non-health content. As you can imagine, this has put a serious damper on gaining ground with our medical / health organic traffic. It's my understanding that Google does not see us as an authority site with regard to medical / health content since we "have two faces" in the eyes of Google. My recommendation is to create a new domain and separate the content entirely so that one domain is focused exclusively on health / medical while the other focuses on general lifestyle/DIY/well being. Because health / medical pages undergo an additional level of scrutiny per Google - YMYL pages - it seems to me the only way to make serious ground in this hyper-competitive vertical is to be laser targeted with our health/medical content. I see no other way. Am I thinking clearly here, or have I totally gone insane? Thanks in advance for any reply. Kind regards, Eric
Intermediate & Advanced SEO | Eric_Lifescript
-
Duplicate content: the distributors are copying the content of the manufacturer
Hi everybody! While I was checking all the points of the Technical Site Audit Checklist 2015 (great checklist!), I found that the distributors of my client are copying part of the content and adding it to their websites. When I take a content snippet, put it in quotes, and search for it, I get four or five sites that have copied the content. They are distributors of my client. The first result is still my client (the manufacturer), but... should I recommend any action for this situation? We don't want to bother the distributors with obstacles. Could this situation be a problem, or is it a common situation and Google knows perfectly well where the content is coming from? Any recommendation? Thank you!
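If it helps to keep an eye on which distributor pages reuse wording verbatim, that check can be scripted. This is only a rough sketch; the snippet text and distributor URLs are hypothetical placeholders.

```python
import requests

snippet = "exact sentence copied from the manufacturer's product page"
distributor_urls = [
    "https://distributor-one.example/product",
    "https://distributor-two.example/product",
]

for url in distributor_urls:
    # Fetch each page and look for the manufacturer's sentence verbatim.
    html = requests.get(url, timeout=10).text
    status = "copies the snippet" if snippet in html else "no verbatim match"
    print(f"{url}: {status}")
```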
Intermediate & Advanced SEO | teconsite
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components: 1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query. We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex Advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex Disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no <head> tag to hold a meta robots directive). The solution would have to involve an X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this stackoverflow solution (a rough sketch of that header idea appears after this question). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it)
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if blocked by robots.txt
Hash (#) URL Advantages:
- By using hash (#) URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL Disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow rule, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (). Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
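Regarding the X-Robots-Tag idea mentioned in the question above: here is a minimal sketch of that header approach, written in Python/Flask purely for illustration (the plugin in the question is not Python, and the "vehicle_id" querystring parameter is a hypothetical placeholder). Any server stack can attach the same header based on the querystring.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/listings")
def listings():
    body = "listings page HTML..."  # placeholder for the real page

    if "vehicle_id" in request.args:
        # Ajax-loaded vehicle-details variant: still crawlable, never indexed.
        return body, 200, {"X-Robots-Tag": "noindex"}

    return body  # the normal Vehicle Listings page stays indexable
```
-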
Avoiding Duplicate Content - Same Product Under Different Categories
Hello, I am taking some past advice and cleaning up my content navigation to only show 7 tabs as opposed to the 14 currently showing at www.enchantingquotes.com. I am creating a "Shop by Room" and a "Shop by Category" link, which is what my main competitors do. My concern is the duplicate content that will result, since the same item will appear in both categories. Should I nofollow the "Shop by Room" page? I am confused as to when I should use a nofollow as opposed to a canonical tag. Thank you so much for any advice!
Intermediate & Advanced SEO | Lepasti
-
Bi-Lingual Site: Lack of Translated Content & Duplicate Content
One of our clients has a blog with an English and Spanish version of every blog post. It's in WordPress and we're using the Q-Translate plugin. The problem is that my company is publishing blog posts in English only. The client is then responsible for having the piece translated, at which point we can add the translation to the blog. So the process is working like this: We add the post in English. We literally copy the exact same English content to the Spanish version, to serve as a placeholder until it's translated by the client. (*Question on this below) We give the Spanish page a placeholder title tag, so at least the title tags will not be duplicate in the mean time. We publish. Two pages go live with the exact same content and different title tags. A week or more later, we get the translated version of the post, and add that as the Spanish version, updating the content, links, and meta data. Our posts typically get indexed very quickly, so I'm worried that this is creating a duplicate content issue. What do you think? What we're noticing is that growth in search traffic is much flatter than it usually is after the first month of a new client blog. I'm looking for any suggestions and advice to make this process more successful for the client. *Would it be better to leave the Spanish page blank? Or add a sentence like: "This post is only available in English" with a link to the English version? Additionally, if you know of a relatively inexpensive but high-quality translation service that can turn these translations around quicker than my client can, I would love to hear about it. Thanks! David
Intermediate & Advanced SEO | djreich
-
Duplicate Content - Panda Question
Question: Will duplicate informational content at the bottom of indexed pages run afoul of the Panda update? Total Page Ratio: 1/50 of total pages will have duplicate content at the bottom of the page. For example, on 20 pages (out of a total of 1,000 pages), in 50 different instances, there would be common information at the bottom of the page. Basically I just wanted to add informational data to help clients get a broader perspective on making a decision, alongside the "specific and unique" information that will be at the top of the page. Content ratio per page: What percentage of duplicate content is allowed per page before you are dinged or penalized? Thank you, Utah Tiger
Intermediate & Advanced SEO | Boodreaux
-
Duplicate page Content
There have been over 300 pages on our client's site with duplicate page content. Before we embark on a programming solution to this with canonical tags, our developers are requesting the list of originating sites/links/sources for these odd URLs. How can we find a list of the originating URLs? If you can provide a list of originating sources, that would be helpful. For example, the following pages are showing (as a sample) as duplicate content:
www.crittenton.com/Video/View.aspx?id=87&VideoID=11
www.crittenton.com/Video/View.aspx?id=87&VideoID=12
www.crittenton.com/Video/View.aspx?id=87&VideoID=15
www.crittenton.com/Video/View.aspx?id=87&VideoID=2
"How did you get all those duplicate URLs? I have tried to Google the "contact us", "news", and "video" pages. I didn't get all those duplicate pages. The id=87 parameter on most of the duplicate pages is not supposed to be there. I was wondering how the visitors got to all those duplicate pages. Please advise."
Note, the CMS does not create this type of hybrid URL. We are as curious as you as to where/why/how these are being created. Thanks.
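On the canonical-tag solution mentioned above, the logic usually amounts to "strip the parameter that shouldn't be there." A minimal sketch follows, assuming (per the question) that id=87 is the stray parameter and VideoID identifies the real page; the site itself appears to be ASP.NET, so this Python is only an illustration of the rule, not a drop-in.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def canonical_url(url, drop=("id",)):
    # Rebuild the URL with the stray parameter(s) removed.
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in drop]
    return urlunparse(parts._replace(query=urlencode(query)))

print(canonical_url("http://www.crittenton.com/Video/View.aspx?id=87&VideoID=11"))
# -> http://www.crittenton.com/Video/View.aspx?VideoID=11
```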
Intermediate & Advanced SEO | dlemieux