Duplicate content with the same URLs
-
I am getting high-priority issues flagged for our privacy & terms pages, which have the same URL. Why would this show up as duplicate content?
Thanks!
-
Agreed! If the copy is the same, it would be flagged as duplicate. It might be worth looking at the URLs themselves: versions with and without trailing slashes, www and non-www, or different protocols (http vs https) serving the same copy. They all look like one and the same page to human eyes, but search engines treat each variation as a separate, duplicate URL. More details on the URLs would help us answer your query better.
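If any of those variants turn out to be live, server-side normalisation usually sorts it. A minimal sketch, assuming an Apache host with mod_rewrite enabled (example.com is a stand-in for your real domain):

```
RewriteEngine On

# Send non-www requests to the www version
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

# Send http requests to https
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com%{REQUEST_URI} [R=301,L]
```

A self-referencing rel=canonical in each page's head is a good belt-and-braces addition, since it tells crawlers which variant to index even if a redirect is missed.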
-
Hi Ranvir,
Without actually viewing them, I can only assume that they're very similar. Is the exact same copy appearing on both pages?
Related Questions
-
From: http://www. to https://
Hi all, I am changing my hosting for legal and SEO reasons from http://www to https://. Now I hear different stories on the redirects: 1: Should I try to change my backlinks? 2: Internally, all links will be 301 redirected at first; then I want to change them manually. It's within WordPress, so there should be a plugin for this. Tips? 3: Will it affect my rankings, and for what period? What I know now is that rankings drop a little at first but eventually end up higher than before. Thanks so much in advance! Tymen
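For point 2, if WP-CLI is available on the host, a database search-replace is a common way to update the internal links once the 301s are in place, instead of hunting for a plugin. A hypothetical sketch (example.com stands in for the real domain):

```
# Preview what would change first
wp search-replace 'http://www.example.com' 'https://www.example.com' --dry-run

# Apply it once the preview looks right
wp search-replace 'http://www.example.com' 'https://www.example.com'
```

wp search-replace also handles serialized data in the WordPress database, which a naive SQL replace would corrupt.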
Technical SEO | | Tymen1 -
Duplicate content question...
I have a high-priority duplicate content issue on my website, but I'm not sure how to handle or fix it. I have 2 different URLs serving the same page content: http://www.myfitstation.com/tag/vegan/ and http://www.myfitstation.com/tag/raw-food/. In this situation I cannot redirect one URL to the other, since in the future I will probably be adding additional posts to either the "vegan" tag or the "raw food" tag. What is the solution in this case? Thank you
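Since neither tag archive can redirect to the other, one approach is to differentiate the pages rather than consolidate them: WordPress stores an optional description per tag, and the theme can print it so the two archives stop sharing identical copy. A hypothetical snippet for the theme's tag.php (a sketch, not a definitive fix):

```
<?php
// Print the unique description saved under Posts > Tags, if any,
// so /tag/vegan/ and /tag/raw-food/ carry distinct copy.
if ( is_tag() && tag_description() ) {
    echo '<div class="tag-intro">' . tag_description() . '</div>';
}
```

If the two tags will always contain the same posts anyway, merging them into a single tag is the cleaner long-term fix.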
Technical SEO | | myfitstation0 -
Development Website Duplicate Content Issue
Hi, We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally constructed the website on a development domain (http://dev.rollerbannerscheap.co.uk) which was active for around 6-8 months (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again) before we migrated dev --> live. In late Jan 2013 we changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and also changed its robots.txt file to allow the search engines to index it. This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file. Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google. So I thought the last 3 dev pages would disappear after a few weeks. I checked back late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found. When I do find the dev site in Google it displays this: Roller Banners Cheap » admin dev.rollerbannerscheap.co.uk/ A description for this result is not available because of this site's robots.txt – learn more. This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. In GWT I have tried to remove the subdomain. When I visit remove URLs, I enter dev.rollerbannerscheap.co.uk but then it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk. I want to remove a subdomain, not a page. Can anyone help please?
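One likely culprit: while robots.txt blocks the dev site, Googlebot never actually crawls those URLs, so it never sees your 301s and keeps the stale listings (that is exactly what the "blocked by robots.txt" snippet means). A hedged alternative, assuming Apache with mod_headers on the dev subdomain: remove the robots.txt block so Google can crawl, and serve a noindex header instead.

```
# In the dev subdomain's .htaccess or vhost config (hypothetical):
# allow crawling, but tell search engines not to index anything.
Header set X-Robots-Tag "noindex, nofollow"
```

On the GWT problem: URL removal requests are scoped to the verified property, which is why the www site keeps prepending itself. Verifying dev.rollerbannerscheap.co.uk as its own property should let you request removal of the subdomain's URLs directly.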
Technical SEO | | SO_UK0 -
Duplicate Page Content for sorted archives?
Experienced backend dev, but SEO newbie here 🙂 When SEOmoz crawls my site, I get notified of duplicate page content (DPC) errors on some sorted list/archive pages (with ?sort=X appended to the URL). The pages all have rel=canonical pointing to the archive home. Some of the pages are shorter (they have only one or two entries). Is there a way to resolve this error? Perhaps add rel=nofollow to the sorting menu? Or perhaps switch to a non-link navigation method for sorting? No duplicate content issues are showing up in Google Webmaster Tools. Thanks for your help!
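If the rel=canonical on the sorted variants is correct, these warnings are often just the crawler noting what it saw before consolidation, but it's worth confirming each ?sort=X page points at the unsorted archive rather than at itself. A hypothetical example for a page like /archive/?sort=title:

```
<!-- In the <head> of /archive/?sort=title (example URL) -->
<link rel="canonical" href="http://www.example.com/archive/" />
```

rel=nofollow on the sorting menu wouldn't consolidate anything on its own; the canonical (or the URL parameter settings in Google Webmaster Tools) is the more direct tool here.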
Technical SEO | | jwondrusch0 -
Duplicate Title/Meta Descriptions
Hi, I had some error messages in the Webmaster account stating I had duplicate titles/meta descriptions. I've since fixed them; typically, how long does a full crawl take once these issues are fixed? Webmaster Tools is still showing problems with various titles/descriptions. I was also wondering whether I should block individual pages on a large ecommerce site. For example, on a large site you have a structured, optimized category page like http://www.stubhub.com/chicago-bears-tickets/ and then all the individual games, e.g. http://www.stubhub.com/chicago-bears-tickets/bears-vs-lions-soldier-field-4077064/ - each with an H1 and a meta description. Should the individual game pages be blocked from Google to concentrate only on the main page? Thanks!
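On the blocking question: individual event pages like that usually earn long-tail traffic, so rather than blocking them it's generally better to make their titles and descriptions unique via the page template. A purely illustrative sketch of what each game page's head might contain:

```
<!-- Templated, unique tags per event page (illustrative values) -->
<title>Bears vs Lions Tickets - Soldier Field | Example Tickets</title>
<meta name="description" content="Buy Bears vs Lions tickets for the game at Soldier Field. Compare seats and prices.">
```

As for timing, re-crawls of fixed pages typically show up in Webmaster Tools over days to a few weeks, depending on how often your site is crawled.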
Technical SEO | | TP_Marketing0 -
/$1 URL Showing Up
Whenever I crawl my site with any kind of bot or sitemap generator, it comes up with /$1 versions of my URLs. For example: it gives me hdiconference.com and hdiconference.com/$1, and hdiconference.com/purchases and hdiconference.com/purchases/$1. Then I get warnings saying that it's duplicate content. Here's the problem: I can't find these /$1 URLs anywhere. Even when I type them in, I get a 404 error. I don't know what they are, where they came from, and I can't find them when I scour my code. So I'm trying to figure out where the crawlers are picking this up. Where are these things? If sitemap generators and other site crawlers are seeing them, I have to assume that Googlebot is seeing them as well. Any help? My developers are at a loss as well.
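A literal /$1 usually means a capture reference is being output somewhere it is never expanded. Two common sources are a stray $1 left in a template by a search-and-replace, and a mod_alias Redirect directive, which (unlike RewriteRule or RedirectMatch) does not expand $1. A couple of hypothetical checks, with placeholder paths:

```
# Grep the codebase and server config for a literal $1
grep -rn '\$1' /var/www/site /etc/apache2

# $1 only expands in the regex-based directives, e.g.:
#   Redirect 301 /old https://www.example.com/$1                  <- $1 stays literal
#   RedirectMatch 301 ^/old/(.*)$ https://www.example.com/new/$1  <- $1 expands
```

If a rule like the first one exists, every redirect it issues advertises a literal /$1 URL, which is exactly what bots and sitemap generators would then pick up even though typing it in returns a 404.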
Technical SEO | | HDI0 -
Are recipes excluded from duplicate content?
Does anyone know how recipes are treated by search engines? I know press releases are expected to have lots of duplicates out there, so they aren't penalized. Are recipes treated the same way? For example, if you Google "three cheese beef pasta shells", the first two results have identical content.
Technical SEO | | RiseSEO0 -
Duplicate content
This is just a quickie: on one of my campaigns in SEOmoz I have 151 duplicate page content issues! Ouch! On analysis, the site in question has duplicated every URL with "en", e.g. http://www.domainname.com/en/Fashion/Mulberry/SpringSummer-2010/ and http://www.domainname.com/Fashion/Mulberry/SpringSummer-2010/. Personally, my thought is that rel=canonical will sort this issue, but before I ask our dev team to add it (and get various excuses why they can't), I wanted to double-check that my thinking is correct. Thanks in advance for your time
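For what it's worth, your thinking looks right. A hypothetical example of the tag on the duplicated /en/ version, pointing at the preferred URL from your own example:

```
<!-- In the <head> of the /en/ duplicate -->
<link rel="canonical" href="http://www.domainname.com/Fashion/Mulberry/SpringSummer-2010/" />
```

If the /en/ versions serve no purpose at all, a sitewide 301 (e.g. rewriting ^en/(.*) to /$1) would remove the duplicates outright instead of just consolidating their signals.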
Technical SEO | | Yozzer0