Duplicate Content Issue: WWW and Non-WWW
-
One of my sites got hit with duplicate content a while ago because Google seemed to be treating the http, https, www, and non-www versions of the site as different sites. We thought we had fixed it, but for some reason https://www and plain https:// are giving us duplicate content again.
I can't figure out why this keeps happening. The URL is https://bandsonabudget.com if any of you want to see whether you can work out why I am still having this issue.
-
Came here to say what Ray already said.
-
It looks like you have canonical URL meta information in place. However, there is no 301 redirect from the non-www version to the www version.
In cases like this, you want every request to resolve to a single URL, presumably the www version. Implement a 301 redirect from the non-www URL to the www URL.
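If the site runs on Apache, the redirect can live in the root .htaccess file. This is a minimal sketch, assuming mod_rewrite is enabled and that https://www is the preferred version; test it on a staging copy first:

```apache
RewriteEngine On

# Force https first (a permanent 301 preserves most link equity)
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Then force the www prefix
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

With both rules in place, http://bandsonabudget.com, https://bandsonabudget.com, and http://www.bandsonabudget.com should all resolve to https://www.bandsonabudget.com within at most two hops.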
Also, in Google Webmaster Tools, set the preferred domain so Google knows which version of the site to index.
This should help clear up any duplicate content issues for this case.
Related Questions
-
Recurring events and duplicate content
Does anyone have tips on how to set up an event system so that recurring events don't generate duplicate content? How do I best utilize on-page optimization?
Technical SEO | megan.helmer
-
Possible duplicate content issue with my blog and archive pages. Any help greatly appreciated
Dear Mozzers, I have been looking at the news section on my eCommerce site and I think I may have a duplicate content issue. I wanted some advice on whether I do and, if so, how best to handle it.
http://www.website.co.uk/news
http://www.website.co.uk/news/page:1
http://www.website.co.uk/news/page:2
http://www.website.co.uk/news/page:3
http://www.website.co.uk/news/limit:9999 (this is the show-all page)
I also have the ability to show articles by month:
http://www.website.co.uk/news/archive/2015/04 (April)
http://www.website.co.uk/news/archive/2015/03 (March)
http://www.website.co.uk/news/archive/2015/02 (Feb)
http://www.website.co.uk/news/archive/2015/01 (Jan)
I am wondering if there is a duplicate content issue here, given that I also list articles by month, and if so how best to handle it. I already use pagination on my news pages (page 1, page 2) with rel=next and rel=prev, but I don't have a canonical or anything as yet. I enclose a couple of links if that would help and would appreciate it if someone could take a browse. I have a View All link on my homepage for all news items - http://goo.gl/JPPIvQ - which has a different URL; March 2015 articles - http://goo.gl/0O1wYD - and April 2015 articles - http://goo.gl/GdW2oK
On another note, these articles are also linked to from the relevant category landing pages on my website to help with SEO. I have not used H tags on the article links on my landing pages, just displaying the web link back to the news article. I've done this to try to improve the PR and rankings of my landing pages. I just wondered if anyone has any comments as to whether that's a good or bad idea and whether I could improve it in any way. An example is here (scroll down the page to the pressure washing guides) - http://goo.gl/nnRE49
Thanks, Pete
Technical SEO | PeteC12
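For reference, the pagination markup described in this question (rel=next/rel=prev plus a self-referencing canonical on each page) might look like the following on page 2 of the news section. This is a sketch using the question's URL pattern, and it assumes each paginated page should canonicalise to itself rather than to the view-all page:

```html
<!-- In the <head> of http://www.website.co.uk/news/page:2 -->
<link rel="canonical" href="http://www.website.co.uk/news/page:2" />
<link rel="prev" href="http://www.website.co.uk/news/page:1" />
<link rel="next" href="http://www.website.co.uk/news/page:3" />
```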
Is this duplicate content?
All the pages have the same information, but the content is slightly different. Is this low quality and considered duplicate content? I am only trying to make service pages for each city; is there another way of doing this?
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-pennsylvania/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-new-york/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-new-jersey/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-connecticut/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-maryland/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-massachusetts/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-philadelphia/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-new-york-city/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-baltimore/
http://www.progressivehealthofpa.com/brain-injury-rehabilitation-boston/
Technical SEO | JordanBrown
-
Development Website Duplicate Content Issue
Hi,
We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally built the site on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months before we migrated dev --> live (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again).
In late January 2013 we changed the robots.txt file to allow search engines to index the live website. A week later I accidentally logged into the dev website and also changed its robots.txt file to allow the search engines to index it. This obviously caused a duplicate content issue, as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines again with the robots.txt file.
Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks. I checked back in late February and the 3 dev site pages were still indexed in Google.
I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site, and this was blocking search engines too. But the dev site is still being found in Google wherever the live site should be found. When I do find the dev site in Google it displays this:
Roller Banners Cheap » admin
dev.rollerbannerscheap.co.uk/
A description for this result is not available because of this site's robots.txt – learn more.
This is really affecting our client's SEO plan, and we can't seem to remove the dev site or rank the live site in Google. In GWT I have tried to remove the subdomain: when I visit Remove URLs and enter dev.rollerbannerscheap.co.uk, it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a subdomain, not a page. Can anyone help please?
Technical SEO | SO_UK
-
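One subtlety worth noting in cases like this: if robots.txt on the dev subdomain blocks crawling, Googlebot can never fetch those pages and therefore never sees the 301s, so the stale entries linger. A minimal sketch of the dev site's .htaccess, assuming Apache with mod_rewrite and a robots.txt that permits crawling so the redirects can actually be discovered:

```apache
RewriteEngine On

# Send every dev URL to the same path on the live domain
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.rollerbannerscheap.co.uk/$1 [L,R=301]
```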
Why is the report telling me I have duplicate content for 'www' and no subdomain?
I am getting duplicate content for most of my pages. When I look into your reports, the 'www' and 'no subdomain' versions are the culprit. How can I resolve this, as www.domain.com/page and domain.com/page are the same page?
Technical SEO | cpisano
-
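The fix here is the same single-canonical-URL approach discussed in the main thread. As an illustration of what "one canonical version" means, here is a small Python sketch (the helper name and defaults are my own, not from any Moz tool) that normalises the http/https and www/non-www variants of a URL to one form:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url, prefer_www=True):
    """Normalise scheme and www-prefix so every variant of a page
    maps to a single canonical URL (https, www by default)."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    host = netloc.lower()
    if prefer_www and not host.startswith("www."):
        host = "www." + host
    if not prefer_www and host.startswith("www."):
        host = host[len("www."):]
    # Always prefer https for the canonical version
    return urlunsplit(("https", host, path, query, fragment))

# All four variants collapse to a single canonical URL:
variants = [
    "http://domain.com/page",
    "https://domain.com/page",
    "http://www.domain.com/page",
    "https://www.domain.com/page",
]
print({canonicalize(v) for v in variants})
# → {'https://www.domain.com/page'}
```

A server-side 301 does the same mapping for real visitors and crawlers; the snippet just makes the rule explicit.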
What could be the cause of this duplicate content error?
I only have one index.htm, and I'm seeing a duplicate content error. What could be causing this? (Screenshot attached: IUJvfZE.png)
Technical SEO | ScottMcPherson
-
How to avoid a duplicate content penalty when our content is posted on other sites too?
On recruitment company sites, job ads are posted multiple times on their own sites and even on other sites too. These are the same ads (the job description is identical) posted on different sites. How do we avoid a duplicate content penalty in this case?
Technical SEO | Personnel_Concept
-
Duplicate content and http and https
Within my Moz crawl report, I have a ton of duplicate content caused by identical pages served under both http and https URLs. For example:
http://www.bigcompany.com/accomodations
https://www.bigcompany.com/accomodations
The strange thing is that 99% of these URLs are not sensitive in nature and do not require any security features: no credit card information, booking, or carts. The web developer cannot explain where these extra URLs came from or provide any further information. Advice or suggestions are welcome! How do I solve this issue?
Thanks, Mozzers
Technical SEO | hawkvt1
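A common cause of this pattern is the server answering on both ports 80 and 443 with the same document root, so every page exists at two URLs. One hedged option while the preferred protocol is being decided (this assumes the http version is meant to be canonical, which only the site owner can confirm) is a canonical tag emitted identically on both versions of each page:

```html
<!-- In the <head> of both the http and https versions of the page -->
<link rel="canonical" href="http://www.bigcompany.com/accomodations" />
```

A server-side 301 to the chosen protocol is the stronger long-term fix, since it consolidates crawling as well as indexing signals.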