Accepting RSS feeds. Does it = duplicate content?
-
Hi everyone, for a few years now I've allowed school clients to pipe their news RSS feed to their public accounts on my site. The result is a daily display of the most recent news happening on their campuses that my site visitors can browse.
We don't republish the entire news item; just the headline and the first 150 characters of the article, along with a "Read more" link for folks to click if they want the full story over on the school's site. Each item has its own permanent URL on my site.
I'm wondering if this is a wise practice. Does this fall into the territory of duplicate content even though we're essentially providing a teaser for the school?
What do you think?
-
Thanks for the advice.
There are roughly 7,600 of these news excerpt pages, accessed from different areas of my site. The complete archive of news excerpts can be found here:
http://www.admissionsquest.com/~SchlPostedNews/index.cfm/DisplayMax/999999999
Additionally, school specific news excerpts are available from the various tabs on profiles that have connected school news RSS feeds. Here's an example of a profile & a linked excerpt:
profile:
http://www.admissionsquest.com/cfm_Public/pg_SchlInfo2.cfm/SchlID/842/School/The-Webb-School
In terms of them drawing traffic via search, they do. I see visitors accessing these pages via Google, etc. on a regular basis.
Based on what you see above, should I:
1. eliminate our excerpt page model and shift to simply displaying links to news items?
Via this approach, clicking a link would take the visitor directly to the school's site. Right now, they have to visit the excerpt page before clicking the link to jump to my clients' sites.
2. add a noindex robots meta tag to keep these pages from being indexed? (example below this list)
3. or maintain the status quo?
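To be clear on option 2, I mean the standard robots meta tag placed in the <head> of each excerpt page, something like:

    <meta name="robots" content="noindex">

As I understand it, that would keep the excerpt pages out of the index while still letting crawlers follow the Read more link out to the school's site, since following links is the default behavior.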
Thanks again for chiming in, everyone. I very much appreciate the feedback. I look forward to your responses.
-
I think there are two potential problems: 1) duplicate content, which can get your pages filtered from the search results, and 2) trivial content, which can be bitten by Panda.
I would not worry much about this content unless you have hundreds or thousands of pages of it.
I would check analytics to see if these pages pull any traffic from search. If not, then I would merge them onto longer pages instead of keeping them on separate pages, or I would block them from indexing via robots.txt.
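If you go the robots.txt route, a minimal sketch, assuming the excerpt pages all live under the /~SchlPostedNews/ path shown in your archive URL (adjust it to your actual URL structure), would be:

    User-agent: *
    Disallow: /~SchlPostedNews/

One caveat: robots.txt only blocks crawling, so URLs that are already indexed can linger in the results; a noindex meta tag on the excerpt pages themselves is the more reliable way to drop them out of the index.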
-
Thanks, Albin! Appreciate the response; I've been of the same opinion. I'd love to hear what others think too.
Still, I'm wondering if I should simply create an archive of links under a news heading that sends folks straight to the news items on the school sites, rather than creating an individual page for each item that contains an excerpt and then the link.
Interestingly, the excerpt pages tend to do pretty well in search. They often rank well ahead of the schools' own pages.
-
I wouldn't say this falls within the borders of duplicate content. 150 characters is a very small amount of text relative to the full article, and I don't think any search engine would take that as a bad sign. In my opinion, you don't have to worry, as long as the excerpts stay around 150 characters. I would be interested to hear what others have to say about this, though; someone might have a different opinion.
-
Related Questions
-
Are backlinks within duplicate content ignored or devalued?
From what I understand, Google no longer has a "duplicate content penalty"; instead, duplicate content simply isn't shown in the search results. Does that mean that any links in the duplicate content are completely ignored, or devalued as far as the backlink profile of the site they are linking to? An example would be an article that might be published on two or three major industry websites. Are only the links from the first website GoogleBot discovers the article on counted, or are all the links counted and you just won't see the article itself come up in search results for the second and third websites?
Intermediate & Advanced SEO | | Consult19010 -
Http vs. https - duplicate content
Hi, I have recently come across a new issue on our site, where https & http titles are showing as duplicates. I read https://moz.com/community/q/duplicate-content-and-http-and-https; however, as https is now a ranking factor, I am wondering whether blocking it can really be a good thing? We aren't in a position to roll out https everywhere, so what would be the best thing to do next? I thought about implementing canonicals? Thank you
Intermediate & Advanced SEO | | BeckyKey0 -
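(For reference, the canonical approach would mean serving a tag such as <link rel="canonical" href="http://www.example.com/page/"> on both the http and https version of each page, pointing at whichever version is meant to be indexed, so that the duplicate signals consolidate onto a single URL; the example.com address is just a placeholder.)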
Duplicating relevant category content in subcategories. Good or bad for google ranking?
In a travel related page I have city categories with city related information. Would you recommend for or against duplicating some relevant city related content in subcategory pages? For visitors it would be useful, and Google would have more context about the topic of our page. But my main concern is how this may be perceived by Google, and especially whether it makes it more likely to be penalized for thin content. We were already hit at the end of June by Panda/Phantom and are working on adding more unique content, but this would be something we could do additionally and basically instantaneously. I just do not want to make things worse.
Intermediate & Advanced SEO | | lcourse0 -
Semi-duplicate content yet authoritative site
So I have 5 real estate sites. One of those sites is of course the original, and it has more/better content on most of its pages than the other sites. I used to be top ranked for all of the subdivision names in my town. Then when I did the next 2-4 sites, I had some sites doing better than others for certain keywords, and then I have 3 of those sites that have basically the same URL structures (besides the actual domain) and they aren't getting fed very many visits. I have a couple of agents that work with me that I loaned my sites to, to see if that would help since it would be a different name. My same YouTube video is on each of the respective subdivision pages of my site and theirs. Also, their content is just rewritten content from mine, about the same length. I have looked at a few of my competitors who only have one site; their URL structures aren't good at all, and their content isn't good at all, yet a good bit of their pages rank higher than my main site, which is very frustrating to say the least since they are actually copycats of my site. I sort of started the precedent of content, mapping the neighborhood, showing how far that subdivision is from certain landmarks, and then shooting a video of each. They have pretty much done the same thing and are now ahead of me. What sort of advice could you give me? Right now, I have two sites that are almost duplicates in terms of template and the same subdivisions, although I did change the content the best I could, and that site is still getting pretty good visits. I originally did it to try and dominate the first page of the SERPs, and then Penguin and Panda came out and seemed to figure that game out. So now, I would still like to keep all the sites, but I'm assuming that would entail making them all unique, which seems tough seeing as my town has the same subdivisions. Curious as to what the suggestions would be, as I have put a lot of time into these sites. If I post my site will it show up in the SERPs? Thanks in advance
Intermediate & Advanced SEO | | Veebs0 -
Questions about duplicate photo content?
I know that Google is a mystery, so I am not sure if there are answers to these questions, but I'm going to ask anyway! I recently realized that Google is not happy with duplicate photo content. I'm a photographer and have sold many photos in the past (but retained the rights to them) that I am now using on my site. This recent revelation means that I'm now taking down all of these photos. So I've been reverse image searching all of my photos to see if I let anyone else use them first, and in the course of this I found out that many of my photos are being used by other sites on the web. So my questions are: With photos that I used first and others have stolen, if I edit these photos (to add copyright info) and then re-upload them, will the sites that are using these images then get credit for using the original image first? If I have a photo on another one of my own sites and I take it down, can I safely use that photo on my main site, or will Google retain the knowledge that it's been used somewhere else first? If I sold a photo and it's being used on another site, can I safely use a different photo from the same series that is almost exactly the same? I am unclear what data from the photo Google is matching, and whether it can tell the difference between photos that were taken a few seconds apart.
Intermediate & Advanced SEO | | Lina5000 -
RSS "fresh" content with static page
Hi SEOmoz members, Currently I am researching a competitor and noticed something that I don't really understand. They have hundreds of static pages that don't change; the content has already been the same for over 6 months. Every time a customer orders a product, they use their RSS feed to publish: "Customer A just bought product 4". When I search Google for product 4 in the last 24 hours, it's always there with a new publishing date but the same old content. Is this a good SEO tactic to implement on my own site?
Intermediate & Advanced SEO | | MennoO0 -
Duplicate blog content and NOINDEX
Suppose the "Home" page of your blog at www.example.com/domain/ displays your 10 most recent posts. Each post has its own permalink page (where you have comments/discussion, etc.). This obviously means that the last 10 posts show up as duplicates on your site. Is it good practice to use NOINDEX, FOLLOW on the blog root page (blog/) so that only one copy gets indexed? Thanks, Akira
Intermediate & Advanced SEO | | ahirai0 -
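(For context, the directive described above is a robots meta tag in the <head> of the blog home page, along the lines of <meta name="robots" content="noindex, follow">: noindex keeps the listing page itself out of the index, while follow, which is the default anyway, still lets crawlers reach the individual permalink pages through its links.)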
Capitals in url creates duplicate content?
Hey guys, I had a quick look around, however I couldn't find a specific answer to this. Currently, the SEOmoz tools come back and show a heap of duplicate content on my site, and there's a fair bit of it. However, a heap of those errors relate to random capitals in the URLs. For example, "www.website.com.au/Home/information/Stuff" is being treated as duplicate content of "www.website.com.au/home/information/stuff" (note the difference in capitals). Anyone have any recommendations as to how to fix this server side (keeping in mind it's not practical or possible to fix all of these links), or to tell Google to ignore the capitalisation? Any help is greatly appreciated. LM.
Intermediate & Advanced SEO | | CarlS0
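One server-side option, assuming the site runs on Apache with mod_rewrite available (other stacks have their own equivalents), is a 301 redirect that lowercases any URL containing capitals:

    RewriteMap lowercase int:tolower
    RewriteEngine On
    RewriteCond %{REQUEST_URI} [A-Z]
    RewriteRule (.*) ${lowercase:$1} [R=301,L]

The RewriteMap line has to live in the main server or virtual-host config rather than in .htaccess. Alternatively, to simply tell Google which version to count, a rel="canonical" tag on each page pointing at the all-lowercase URL addresses the duplicate-content side without touching the server configuration.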