Duplicate content warnings
-
I have a ton of duplicate content warnings for my site poker-coaching.net, but I can't see where the duplicate URLs are. Is there a function where I can check the original URL against a list of the other URLs that carry the duplicate content?
-
Thanks for the help. I am trying to cover all bases here. Duplicate content was one concern; the others are excessively high link density and bad incoming links.
I have now downloaded a full backlink report using Majestic SEO (OSE only shows incoming links from 74 domains...).
I think I may have found the problem. I used to have a forum on that domain years ago which was hacked and used for a lot of spammy outgoing links for stuff like Cialis, Viagra, etc. Those guys also linked from other sites to my forum pages. Example row from the backlink report:
from: http://www.grupoibira.com/foro/viewto... | anchor: buy flagyl in lincolnu... | 3 | to: http://www.poker-coaching.net/phpbb3/...
When I closed the forum and deleted the forum plugin, I redirected all forum pages to my main page, which, under the circumstances, was a bad mistake, I guess. With the redirect in place, all those spammy links ended up pointing to my main page, right? So as a first step, I have now removed that redirect.
But the problem remains that I still have plenty of links from spam sites pointing to URLs on my domain that no longer exist.
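A minimal sketch of how those dead forum URLs could return 410 Gone instead of redirecting (this assumes the site runs Apache; the /phpbb3/ path is taken from the example link above):

# .htaccess: tell crawlers the old forum pages are permanently gone
Redirect gone /phpbb3/

A 410 signals permanent removal, so Google tends to drop the dead URLs (and stop counting the spammy links pointing at them) faster than it would for a 404.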
Is there anything else I can do to remove those links or have Google remove/disregard them, or do you think a reconsideration request explaining the situation would help? -
Honestly, with only 235 indexed pages, it's pretty doubtful that duplicate content caused you an outright penalty (such as being hit with Panda). Given your industry, it's much more likely you've got a link-based penalty or link quality issue in play.
You do have a chunk of spammy blog comments and some low-value article marketing, for example:
http://undertheinfluence.nationaljournal.com/2010/02/summit-attendees.php
A bit of that is fine (and happens in your industry a lot), but when it's too much of your link profile too soon, you could be getting yourself into penalty territory.
-
Hey There,
Just to clarify, to see the source of those errors, you’ll need to download your Full Crawl Diagnostics CSV and open it up in something like Excel. In the first column, perform a search for the URL of the page you are looking for. When you find the correct row, look in the last column labeled referrer. This tells you the referral URL of the page where our crawlers first found the target URL. You can then visit this URL to find the source of your errors. If you need more help with that, check out this link: http://seomoz.zendesk.com/entries/20895376-crawl-diagnostics-csv
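If you'd rather script that lookup than search in Excel, here is a rough sketch in Python (it assumes the export's first column is the page URL and the last column is the referrer, as described above; the file name and target URL are placeholders):

import csv

target = "http://www.poker-coaching.net/some-page"  # hypothetical page URL

with open("crawl_diagnostics.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        # Print where the crawler first found the target URL
        if row and row[0] == target:
            print("Referrer:", row[-1])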
Hope that helps! I will look at the issue on the back end to see whether those pages actually contain duplicate content.
Have a great day,
Nick
-
Thanks for looking into this. Actually, I checked the whole site by doing a batch search on Copyscape, and there were only minor duplicate content issues. I resolved those by editing the content in question (on February 24th, 2012).
Since I am desperately searching for the reasons why this site was penalized (and it definitely is...), it would be great to know why your duplicate content checker finds errors. It could only be related to multiple versions of one page on different URLs. I do have all http://mysitedotcom URLs redirected to www.mysitedotcom, and the trailing-slash/no-trailing-slash URL problem was also resolved by a redirect long ago, so I do not know where the problem lies.
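For reference, a minimal sketch of redirects along those lines (assuming Apache with mod_rewrite; mysite.com stands in for the real domain):

# .htaccess
RewriteEngine On

# Redirect the bare domain to www
RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

# Strip trailing slashes from URLs that are not real directories
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]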
Thanks for the help! -
I think our system has roughly a 90-95% threshold for duplicate content. The pages I'm seeing in your campaign don't look that high, so something is up - I'm checking with support.
For now, use the "Duplicate Page Title" section - that'll tend to give you exact duplicates. The duplicate content detection also covers thin content and near duplicates.
-
Yes that is what I first thought too. If only it were that easy.
But when I do, I see a couple of URLs that definitely do not have any duplicate content. Could it be that the dupe content check counts text in sitewide modules (like the "Poker News" and "Tips for ...." modules on www.poker-coaching.net) as duplicate content because they appear on all pages?
If so, the duplicate content detection is totally worthless. -
If you drill down into your campaign report into 'Crawl Diagnostics', you will see a dropdown menu named "Show". Select 'Duplicate Page Content'... you will see a graph with a table below it. To the right of each URL you will see a column named "Other URLs". The numbers in that column are live links to a page listing the URLs with duplicate content. At least that is how it is displayed in my campaigns.
-
You will find this information in Google Webmaster Tools and in your SEOmoz campaign. There you will find the information you need.
One easy way to avoid this is to include the rel=canonical tag. In every page (the pages you want to be the official ones), include the following inside the head tag:
<link rel="canonical" href="http://www.example.com/index.html" />
where www.example.com/index.html is your page address. Good luck!
Related Questions
-
Tracking links and duplicate content
Hi all, I have a bit of a conundrum for you all pertaining to a tracking-link issue I have run into on a client's site. They currently have a large duplicate content problem: over 15,000 pages are being crawled (using Screaming Frog), but only 7,000+ are legitimate pages, in the sense that they are not duplicates of themselves. The client is using Omniture instead of Google Analytics and an advanced tracking system on their site for internal and external links (ictids and ectids) in the URL parameters. This is creating thousands of duplicate pages being crawled by Google (as seen in their Search Console and in Screaming Frog). They are also in the middle of moving over from http to https and have thousands of pages currently set up for both, again creating a duplicate content issue. What I have suggested for the tracking links is setting up a URL parameter in Search Console for these tracking links. I've also suggested they add a canonical on all tracking-link URLs pointing to the clean page, so the pages that have already been indexed point to the correct clean URL (see the sketch after this question). Does this seem like the appropriate strategy? Additionally, I've told them that before they submit a new sitemap to Google, they need to switch their website over to https to avoid worsening their duplicate content issue. They have not submitted a sitemap to Google Search Console since March 2015. Thank you for any help you can offer!
Reporting & Analytics | | Rydch410 -
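A minimal sketch of the canonical suggestion above (the URL and the ictid parameter name are illustrative, based on the question's description rather than the client's actual site):

<!-- In the head of the tracked variant, e.g. https://www.clientsite.com/product?ictid=nav123 -->
<link rel="canonical" href="https://www.clientsite.com/product" />

Once Google recrawls the parameterized variants, they should consolidate to the clean URL.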
How often does google content experiments stats update?
From my experience it seems to update once per day (every 24 hours). Can anyone confirm this is the case, or link to an official announcement that confirms how often the data updates? It would be handy to know when it updates so we can see the latest information as it comes in.
Reporting & Analytics | | Twist3600 -
Duplicate Title Errors on Product Category Pages - The best practice?
I'm getting quite a few 'Duplicate Title' errors on category pages which span 2-3 pages, e.g. http://www.partwell.com/cutting-punches http://www.partwell.com/cutting-punches?page=1 http://www.partwell.com/cutting-punches?page=2 http://www.partwell.com/cutting-punches?page=3 All 4 pages currently have the same title... <title>Steel Cutting Punches</title> I was thinking of adding page numbers to the title of each corresponding page, thus making them all unique and clearing the duplicate page title errors, e.g. <title>Steel Cutting Punches</title> <title>Steel Cutting Punches | Page 1 of 3</title> <title>Steel Cutting Punches | Page 2 of 3</title> <title>Steel Cutting Punches | Page 3 of 3</title> Is this the best way to go about it? Or is there another way that I'm not thinking of? Would I need to use the rel=canonical tag to show that the original page is the one I want to be found? (See the pagination sketch after this question.) Thanks
Reporting & Analytics | | bricktech0 -
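A hedged sketch of the rel=next/prev pagination markup Google recommended for paginated series (using the URLs from the question; whether ?page=1 duplicates the bare category URL would need checking separately):

<!-- In the head of http://www.partwell.com/cutting-punches?page=2 -->
<link rel="prev" href="http://www.partwell.com/cutting-punches?page=1" />
<link rel="next" href="http://www.partwell.com/cutting-punches?page=3" />

With markup like this in place, any rel=canonical on a paginated page would normally point to that page itself rather than back to page 1.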
How can you tell if your new content has been indexed?
Other than simply doing a search in each case, is there any way I can tell (in Webmaster Tools, for example) if the 500-1000 new pages of content I have added have been indexed and are now appearing in search results? My traffic hasn't risen much, but I know at least a few of them are in there... How can I tell when they're all in?
Reporting & Analytics | | corp08030 -
Duplicate Url with Google shopping feed
In Webmaster Tools I have many duplicate URLs tagged as google_shopping. Obviously I'm tagging the URLs with the Google URL builder. URL: elettrodomestici.yeppon.it/cura-corpo/tagliacapelli/remington-tagliacapelli-funzionamento-rete-ricaricabile-lame-in-acciaio-inox-hc5150-garanzia/ Duplicate URL: elettrodomestici.yeppon.it/cura-corpo/tagliacapelli/remington-tagliacapelli-funzionamento-rete-ricaricabile-lame-in-acciaio-inox-hc5150-garanzia/?utm_source=google_shopping&utm_medium=web&utm_content=Elettrodomestici+e+Clima+%3E+Cura+del+corpo+%3E+Tagliacapelli&utm_campaign=google_shopping How can I solve it? Thanks
Reporting & Analytics | | yeppon0 -
Time until duplicate penalty is lifted?
Hello, I recently discovered that half of the pages on my site, about 3,500, were not being indexed, or were indexing very slowly and with a heavy weight on them. I discovered the problem in the "HTML Suggestions" within Google's Webmaster Tools. An example of my main issue: all 3 of these URLs were showing 200 Status OK in Google. www.getrightmusic.com/mixtape/post/ludacris_1_21_gigawatts_back_to_the_first_time www.getrightmusic.com/mixtape/post/ludacris_1_21_gigawatts_back_to_the_first_time/ www.getrightmusic.com/mixtape/ludacris_1_21_gigawatts_back_to_the_first_time I added some code to the .htaccess in order to remove the trailing slashes across the board. I also properly set up my 404 redirects, which were not properly set up by my developer (when the site "relaunched" 6 months ago 😞 ). I then added the canonical link rel tags on the site posts/entries. I'm hoping I followed all the correct steps in fixing the issue and now, I guess, I just have to wait until the penalty gets lifted? I'm also not 100% certain that I have been penalized. I'm just assuming, based on the SERP ceiling I feel and the super slow indexing (or lack of indexing) of my content. Any insight, help or comments would be super helpful. Thank you. Jesse
Reporting & Analytics | | getrightmusic0 -
Regex in Analytics for longtail content
I have spent quite some time now trying to figure this out. I want to segment landing-page traffic to the longtail content. That's content in the 3rd or 4th subdirectory level. I would like to do that by telling Google: "Just show me traffic that arrives on landing pages with 3 OR 4 slashes ('/')." But I can't get a solution (see the regex sketch after this question). Can anybody help in the seoMOZ community?
Reporting & Analytics | | viventuraSEO0 -
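A sketch of a regular expression that could do this in an advanced segment on the Landing Page dimension (it assumes landing-page paths of the form /dir1/dir2/page, so "3rd or 4th subdirectory level" means 3 or 4 slashes):

^(/[^/]+){3,4}/?$

Each (/[^/]+) group matches one slash plus one path segment, {3,4} requires three or four of those, and the optional trailing slash covers both URL styles. It matches /poker/strategy/article and /poker/strategy/article/part-2 but not /poker/strategy.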
Duplicate page content
I have a website which "houses" five different and completely separate departments, so the content is separated by subfolders, e.g. domain.com/department1, domain.com/department2, etc., and each has its own individual top navigation menu. There is an "About Us" section for each department which has about 6 subpages (Work for Us, What We Do, Awards, etc.), but the problem is that the content for each department is exactly the same. The only difference is the navigation menu and the breadcrumbs. This isn't ideal, as a change to one page means having to make the change to all 5, and from an SEO perspective it's duplicate content x5 (apart from the nav). One solution I can see is to have the "About Us" section moved to the root level (domain.com/about-us) and have a generic nav, possibly with the department names on it. The only problem with this is that it disrupts the user journey if they are forced away from the department they've chosen. Basically I'm looking for suggestions or examples of other sites that have got around this problem; I need inspiration! Any help would be greatly appreciated.
Reporting & Analytics | | haydennz0