Minimising duplicate content
-
From a duplicate-content perspective, is it best to create each blog post with a single tag, so Google doesn't treat the same post returned under different tag searches as duplicate content? Or doesn't it matter?
For example, the URLs below return the same blog post:
http://www.ukholidayplaces.co.uk/blog/?tag=/stay+in+Margate
http://www.ukholidayplaces.co.uk/blog/?tag=/Margate+on+a+budget
both return the same post...
thanks
-
Hi!
Little late to the party here - thanks Geoff for helping out!!
While creating excerpts for the tag pages would certainly be great, I'd also suggest doing a crawl of your own site with something like Screaming Frog SEO Spider.
I just did a crawl, and see a bunch of issues needing attention:
- Just about all of your meta descriptions are exactly the same
- Your H1s are all the same
- Bunch of duplicate titles (for example, all the author archive subpages are given the same title)
- I don't see any meta robots or canonical tags in use at all; these would help you control which pages get indexed or counted for value.
- You have tons of meta keywords, mostly duplicates, and the meta keywords tag shouldn't be used anymore.
You've got some additional issues to work out beyond just the tags.
Check Google Webmaster Tools as well; it will show you everything you need to fix!
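Incidentally, duplicate titles like these are easy to spot from any crawl export. A rough sketch in Python (the URLs and titles here are made up for illustration, not taken from the actual crawl):

```python
from collections import defaultdict

def find_duplicate_titles(pages):
    """Group crawled (url, title) pairs by title text and return
    only the titles shared by more than one URL."""
    by_title = defaultdict(list)
    for url, title in pages:
        by_title[title.strip()].append(url)
    return {t: urls for t, urls in by_title.items() if len(urls) > 1}

# Hypothetical crawl export: (URL, title) pairs
crawl = [
    ("/blog/?tag=stay+in+Margate", "UK Holiday Places Blog"),
    ("/blog/?tag=Margate+on+a+budget", "UK Holiday Places Blog"),
    ("/blog/margate-on-a-budget", "Margate on a Budget | UK Holiday Places"),
]

print(find_duplicate_titles(crawl))
```

Running this against a real Screaming Frog export would flag every group of pages sharing one title, which is exactly the author-archive problem above.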
-Dan
-
You're welcome Jonathan.
Feel free to look at how other successful organisations implement this on their blogs. Take Mashable, for example: their topics pages are essentially what blog articles are tagged with, and they cut their snippets off at about 170 characters.
Also, ensure you're using the canonical link element on blog article pages to let search engines know those are the originals and where you want the weight placed.
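For reference, the canonical link element is just one line in the page's head. A minimal sketch (the href here is an illustrative URL, not the site's real permalink, and the meta robots line shows the noindex option for tag pages mentioned earlier in the thread):

```html
<head>
  <!-- Tells search engines which URL is the original version of this post -->
  <link rel="canonical" href="http://www.ukholidayplaces.co.uk/blog/stay-in-margate/" />
  <!-- On tag/archive pages you could instead keep the page out of the index
       while still letting crawlers follow its links -->
  <meta name="robots" content="noindex, follow" />
</head>
```

You'd use one or the other per page type: canonicals on the posts themselves, and noindex (if anything) on the tag listings.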
-
Thanks Geoff,
I wasn't sure after the recent updates.
Copyscape finds loads of matches, but Google didn't....
-
No, assigning multiple tags to posts on your website is good practice (provided they are relevant, of course).
What you should consider is displaying only excerpts on tag and search result pages so that they don't get flagged as duplicate content. You don't need to display the entire post on a tag page; a small snippet with a 'Read More' (or similar) link ensures the full original only ever exists at one location: its specific URI.
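To sketch the excerpt idea in code (a generic illustration, not tied to any particular blogging platform): truncate each post at a word boundary around 170 characters, the snippet length noted for Mashable, and link the snippet back to the post's own URL:

```python
def make_excerpt(text, limit=170):
    """Truncate post text at a word boundary near `limit` characters,
    appending an ellipsis when the text was actually cut."""
    if len(text) <= limit:
        return text
    cut = text.rfind(" ", 0, limit)  # last space before the limit
    if cut == -1:                    # no space found: hard cut
        cut = limit
    return text[:cut].rstrip() + "..."

post = "Margate on a budget: " + "lovely seaside detail " * 20
print(make_excerpt(post))
```

The tag page would then render `make_excerpt(post)` plus a 'Read More' link to the canonical post URL, so the full text lives in exactly one place.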