Is Syndicated (Duplicate) Content considered Fresh Content?
-
Hi all,
I've been asking quite a few questions lately and sincerely appreciate your feedback. My co-workers & I have been discussing content as an avenue outside of SEO. There are a lot of syndicated-content programs/plugins out there (in a lot of cases duplicate) - would this be considered fresh content on an individual domain?
An example may clearly show what I'm after:
domain1.com is a lawyer in Seattle.
domain2.com is a lawyer in New York.
Both need content on their website relating to being a lawyer for Google to understand what the domain is about. Fresh content is also a factor within Google's algorithm (source: http://moz.com/blog/google-fresh-factor). Therefore, fresh content is needed on their domain. But if that content is duplicate, does it still hold the same value?
Question: Is fresh content (adding new / updating existing content) still considered "fresh" even if it's duplicated across multiple domains?
Purpose: domain1.com may benefit from a resource for his/her local clientele, just as domain2.com would. And both sets of customers would be reading the "duplicate content" for the first time. Therefore, both lawyers would be seen as authorities and improve their websites' ability to rank well.
We aren't interested in ranking the individual article and are aware of canonical URLs. We aren't implementing this as a strategy - just using it as a means to really understand content marketing outside of SEO.
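For anyone unfamiliar with the canonical URLs mentioned here: a cross-domain canonical is one common way a syndication partner attributes content back to the source. A minimal sketch, placed in the syndicated copy's `<head>` (the article path is hypothetical; the domains are from the example above):

```html
<!-- On domain2.com's syndicated copy of the article -->
<!-- Points Google at the original on domain1.com as the preferred version -->
<link rel="canonical" href="http://domain1.com/blog/hiring-a-lawyer" />
```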
Conclusion: IF duplicate content is still considered fresh content on an individual domain, then couldn't duplicate content (which obviously won't rank) still help SEO across a domain? This may sound controversial, and I'd welcome an open-ended discussion with linked sources / case studies. This conversation may tie into another Q&A I posted: http://moz.com/community/q/does-duplicate-content-actually-penalize-a-domain.
TL;DR version: Is duplicate content (the same article across multiple domains) considered fresh content on an individual domain?
Thanks so much,
Cole
-
Hi all,
Thanks for the responses & feedback.
Alan, in this example, the fresh content would be relevant. Of course there are search queries that don't need freshness or updates, but I would argue most do need updates/freshness (even the ones we think we know the answer to over time).
Once again, the conversation is not about RANKING that page but about HELPING the domain achieve "freshness & relevance" around a topic with that duplicate content.
Would love to see others chime in.
Thanks,
Cole
-
Well, that could mean that some don't need any.
Like
Q. Who discovered Australia? A. Captain Cook.
This does not need freshness.
Also, consider original content: in that case, an older timestamp would be better.
I like to think that I own Google and ask myself, "Would I rank it?" Of course, some things may rank that were not intended to, but I think it's quite safe to think that way.
-
This was the part that triggered me:
"Google Fellow Amit Singhal explains that "Different searches have different freshness needs."
The implication is that Google measures all of your documents for freshness, then scores each page according to the type of search query."
-
I had a quick look at that page and did not see that it affects all pages. Anyhow, Google said 35% of queries, so it could not be all pages.
Some points:
- Why would fresh data be excluded from duplicate content?
- Is it likely that syndicated data is fresh?
- What is Google trying to do here, rank syndicated duplicate data?
I can't see it working.
-
Thanks a lot! Kinda made me realize I really should read some more about this update. It might be off topic, but what's your view on freshness applied to **all** pages? In this Whiteboard Friday it's stated that it only impacts the terms you describe:
http://moz.com/blog/googles-freshness-update-whiteboard-friday
But in this blog post from that time (before the sum-up), it's stated that it's applied to all pages but affects search queries in different ways:
-
Yes, the freshness update was not for all queries; it was for certain queries that need fresh content, such as football scores or who's on the team this week. Obviously we don't want the score from last year, or who was playing last year - we want the current data. That is where the freshness update may give you a boost while your content is fresh. I can't see syndicated content falling into this category, and even if it did, being duplicate content would mean that only one source is going to rank.
Also, you have to look at indexing: will the duplicate content even be indexed? If so, how often?
That's why I say the short answer is no.
-
Hi Alan,
Is there any source, or research of your own, that can back up this answer?
Would love to read more about this subject!
-
Short answer, NO
-
Thanks for your feedback Mike - definitely helpful!
In this hypothetical, we're looking at research or comprehensive articles for specific niches that could serve multiple businesses well as an authority.
Thanks,
Cole
-
Hi Cole,
"Fresh" by Google (if not noindexed) in this case would be kind of like the freshness value of a "fresh" error page.
Maybe that's extreme, but the point being, the content is not needed by the web, since it already exists. If there was absolutely nothing else being added to or changed about the site and my one option was adding duplicate content, I'd noindex/follow it and figure I might have gotten some small, small, small benefit from updating the site a little - maybe an improved user signal. I'd for sure keep it out of the index. I guess that's how I'd do it, if it had some value for visitors. If its only value was adding something fresh and it wasn't that great for visitors, I'd find the extra hour necessary to re-write it into something fresh, unique, and valued by visitors.
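For anyone following along, the noindex/follow approach described above is just a meta robots tag in the duplicate page's `<head>` - a minimal sketch (the comment describes the intent, not a guaranteed outcome):

```html
<!-- On the duplicate/syndicated page: ask search engines to keep it
     out of the index, while still following the links on the page -->
<meta name="robots" content="noindex, follow">
```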
The other thing about syndicated content: even after you check where else it appears on the web via an exact-phrase search in Google, you may not have seen the only instance of it, as it may spread over time. Having duplicate content indexed alongside other, possibly low-quality sites with common content may put you in a bad neighborhood. If I had a ten-foot pole, I wouldn't touch it with it.
I hope that helps. Best... Mike
-
Hi Mike,
Thanks for the feedback. That was one potential point I was making.
I'm still curious whether duplicate content would be considered "fresh" within a website. Good point about the duplicate content overriding the benefit of fresh content.
Thanks,
Cole
-
In phrasing the question as "is it considered fresh/unique," I'm going to assume you mean by Google, for the site's organic benefit. So I guess the reasoning would be: is the fact that it's fresh to the site a bigger positive than the negative of duplicate content? Is that what you're getting at? Personally, knowingly on-boarding duplicate content would be too big of a potential negative for me to consider doing it. I've done it as a noindex/follow for reasons other than Google, but not for some mystery freshness bump.
Not that you can't find examples of duplicate content ranking in more than one place. To me on-boarding indexed duplicate content seems like just asking for trouble.
Hope that helps. Best... Mike
-
I'm curious to see what others have to say on this, but I've always assumed that "fresh" and "unique" go hand in hand when it comes to website content. Therefore, duplicate content would not be fresh content.