An odd duplicate content issue...
-
Hi all,
My developers have just assured me that nothing has changed from last week, but in today's crawl I see the whole website duplicated.
The only difference in the URLs is the trailing '/', so the duplicated URLs look like:
https://blabla.bla/crop
https://blabla.bla/crop/
Any help in understanding why would be much appreciated.
thanks
-
You currently have
https://www.paydaychannel.co.uk/cash-advance-loans/ & https://www.paydaychannel.co.uk/contact-payday-channel/ with that problem.
These are the internal links you need to change to fix it:
| URL | Anchor Text | Alt text |
| --- | --- | --- |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | Apply Now | |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | Close | |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | here | |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | Policies | |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | Articles | |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | | rss logo |
| https://www.paydaychannel.co.uk/cash-advance-loans/ | | addthis logo |

| URL | Anchor Text | Alt text |
| --- | --- | --- |
| https://www.paydaychannel.co.uk/faqs-about-loans | Contact us here! | |
| https://www.paydaychannel.co.uk/contact-payday-channel/ | Close | |
| https://www.paydaychannel.co.uk/contact-payday-channel/ | here | |
| https://www.paydaychannel.co.uk/contact-payday-channel/ | Policies | |
| https://www.paydaychannel.co.uk/contact-payday-channel/ | Articles | |
| https://www.paydaychannel.co.uk/contact-payday-channel/ | | rss logo |
| https://www.paydaychannel.co.uk/contact-payday-channel/ | | addthis logo |
| https://www.paydaychannel.co.uk/faqs-about-loans/ | Contact us here! | |
-
Hi,
thanks for your email.
I do have a duplicate content issue according to the SEOmoz crawl, and when I check the duplicated URLs it shows me these two (as an example, but the same applies across the entire website):
https://www.paydaychannel.co.uk/cash-advance-loans/
and
https://www.paydaychannel.co.uk/cash-advance-loans
I will check what you suggested...
thanks
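To see how widespread the problem is, the crawl export can be grouped by a slash-normalized key. A minimal sketch in Python (standard library only; which variant you standardize on is a site-level choice, and using the slash-less form as the grouping key here is just an assumption for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_form(url):
    """Normalize a URL so that /crop and /crop/ map to the same key.

    Strips the trailing slash, except for the root path.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    if len(path) > 1 and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit((scheme, netloc, path, query, fragment))

def trailing_slash_duplicates(urls):
    """Group crawled URLs that differ only by a trailing slash."""
    groups = {}
    for url in urls:
        groups.setdefault(canonical_form(url), []).append(url)
    # Keep only groups where more than one variant was crawled.
    return {key: variants for key, variants in groups.items() if len(variants) > 1}
```

Feeding it the full list of crawled URLs returns only the pages that exist in both slashed and un-slashed form, which is a quick way to confirm whether "the entire website" really is affected.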
-
You probably have internal links pointing to https://blabla.bla/crop & to https://blabla.bla/crop/
You can use Screaming Frog to find where those links are and fix them, but Google's algorithm is smart enough to figure this one out. I'd be very surprised if you experienced any duplicate content issues because of this.
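If you control the server, a common belt-and-braces fix is a site-wide 301 so that only one variant ever resolves. A minimal sketch for Apache, assuming mod_rewrite is enabled and assuming you pick the trailing-slash version as the canonical one (nginx has an equivalent rewrite):

```apache
RewriteEngine On
# 301-redirect any URL that lacks a trailing slash to the slashed version.
# Skip real files (e.g. /sitemap.xml) so they keep their exact names.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]
```

With that in place, fixing the internal links becomes a tidiness exercise rather than a ranking concern, since every stray link consolidates onto one URL.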
Related Questions
-
Best way to fix duplicate content issues
Another question for the Moz Community. One of my clients has 4.5k duplicate content issues. For example: http://www.example.co.uk/blog and http://www.example.co.uk/index.php?route=blog/blog/listblog&year=2017. Most of the issues are coming from product pages. My initial thoughts are to set up 301 redirects in the first instance and if the issue persists, add canonical tags. Is this the best way of tackling this issue?
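One hedged way to generate 301 targets in bulk for URLs like the example pair above is to map each parameterized route back to its clean path. The sketch below assumes a hypothetical `ROUTE_TO_PATH` table inferred from the single example URL in the question; the route names are illustrative, not the client's actual configuration:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical route -> clean-path table; a real site would need one
# entry per parameterized URL template found in the crawl.
ROUTE_TO_PATH = {
    "blog/blog/listblog": "/blog",
}

def canonical_target(url):
    """Return the clean URL a parameterized duplicate should 301 to,
    or None if the URL is already clean or unrecognized."""
    parts = urlsplit(url)
    if parts.path != "/index.php":
        return None
    route = parse_qs(parts.query).get("route", [None])[0]
    clean_path = ROUTE_TO_PATH.get(route)
    if clean_path is None:
        return None
    return f"{parts.scheme}://{parts.netloc}{clean_path}"
```

Running the crawl export through this gives a redirect map (duplicate -> canonical) that can be turned into server rules, with rel="canonical" as a fallback for templates that cannot be redirected.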
Technical SEO | Laura-EMC
-
Development Website Duplicate Content Issue
Hi,

We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally constructed the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again) before we migrated dev --> live.

In late January 2013 we changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and also changed the robots.txt file to allow the search engines to index it. This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file.

Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks. I checked back in late February and the 3 dev site pages were still indexed in Google.

I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found.
When I do find the dev site in Google it displays this: "Roller Banners Cheap » admin" (dev.rollerbannerscheap.co.uk/) with the snippet "A description for this result is not available because of this site's robots.txt". This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. Please can anyone help?
Technical SEO | SO_UK
-
Affiliate urls and duplicate content
Hi, What is the best way to get around having an affiliate program, and the affiliate links on your site showing as duplicate content?
Technical SEO | Memoz
-
How do I get rid of duplicate content
I have a site that is new but I managed to get it to page one. Now when I scan it on SEOmoz I see that I have duplicate content. Ex: www.mysite.com, www.mysite.com/index and www.mysite.com/. How do I fix this without jeopardizing my SERP rankings? Any tips?
Technical SEO | bronxpad
-
How to prevent duplicate content in archives?
My news site has a number of excerpts in the form of archives based on categories that is causing duplicate content problems. Here's an example with the nutrition archive. The articles here are already posts, so it creates the duplicate content. Should I nofollow/noindex this category page along with the rest and 2011,2012 archives etc (see archives here)? Thanks so much for any input!
Technical SEO | naturalsociety
-
I have a ton of "duplicated content", "duplicated titles" in my website, solutions?
hi and thanks in advance,

I have a Jomsocial site with 1000 users. It is highly customized, and as a result of the customization some of the pages have 5 or more different URLs pointing to the same page. Google has indexed 16,000 links already and the crawl report shows a lot of duplicated content. These links are important for some of the functionality and are dynamically created and will continue growing. My developers offered to create rules in the robots file so a big part of these links don't get indexed, but Google Webmaster Tools says the following:

"Google no longer recommends blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools."

Here is an example of the links:

http://anxietysocialnet.com/profile/edit-profile/salocharly
http://anxietysocialnet.com/salocharly/profile
http://anxietysocialnet.com/profile/preferences/salocharly
http://anxietysocialnet.com/profile/salocharly
http://anxietysocialnet.com/profile/privacy/salocharly
http://anxietysocialnet.com/profile/edit-details/salocharly
http://anxietysocialnet.com/profile/change-profile-picture/salocharly

So the question is: is this really that bad? What are my options? Is it really a good solution to set rules in robots so big chunks of the site don't get indexed? Is there any other way I can resolve this?

Thanks again!
Salo
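Following the rel="canonical" advice Google quotes above, each of those profile variants could carry the same canonical link element. A sketch, assuming /salocharly/profile is the preferred URL for that user (which URL is preferred is a site decision, not something the question states):

```html
<!-- Placed in the <head> of every profile-related variant, e.g.
     /profile/edit-profile/salocharly, /profile/privacy/salocharly, ... -->
<link rel="canonical" href="http://anxietysocialnet.com/salocharly/profile" />
```

The href would be generated per user by the template, so the seven variants all consolidate onto one indexed page while remaining crawlable and functional.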
Technical SEO | Salocharly
-
Is there ever legitimate near duplicate content?
Hey guys, I've been reading the blogs and really appreciate all the great feedback. It's nice to see how supportive this community is to each other.

I've got a question about near-duplicate content. I've read a bunch of great posts regarding what duplicate content is and how to fix it. However, I'm looking at a scenario that is a little different from what I've read about, and I'm not sure if we'd get penalized by Google or not.

We are working with a group of small insurance agencies that have combined some of their back office work, and work together to sell the same products, but for the most part act as what they are: independent agencies. So we now have 25 different little companies, in 25 different cities spread across the southeast, all selling the same thing. Each agency has their own URL, their own Google local places registration, their own backlinks to their local chambers, own contact us and staff pages, etc.

However, we have created landing pages for each product line, with the hope of attracting local searches. While we vary each landing page a little per agency (the auto insurance page in CA talks about driving down the 101, while the auto insurance page in Georgia says welcome to the Peach State), probably 75% of the landing page content is the same from agency to agency. There is only so much you can say about specific lines of insurance. They have slightly different titles, slightly different headers, but the bulk of the page is the same.

So here is the question: will Google hit us with a penalty for having similar content across the 25 sites? If so, how do you handle this? We are trying to write creative, unique content, but at the end of the day auto insurance in one city is pretty much the same as in another city. Thanks in advance for your help.
Technical SEO | mavrick
-
Noindex duplicate content penalty?
We know that Google now penalizes a whole site if it finds content it doesn't like or duplicate content, but has anyone experienced a penalty from having duplicate content on their site which they have added noindex to? Would Google still apply the penalty to the overall quality of the site even though it has been told to basically ignore the duplicate bit?

Reason for asking is that I am looking to add a forum to one of my websites, and no one likes a new forum. I have a script which can populate it with thousands of questions and answers pulled direct from Yahoo Answers. Obviously the forum will be 100% duplicate content, but I do not want it to rank anyway, so if I noindex the forum pages hopefully it will not damage the rest of the site. In time, as the forum grows, all the duplicate posts will be deleted, but it's hard to get people to use an empty forum, so we need to 'trick' them into thinking the section is very busy.
Technical SEO | Grumpy_Carl