Duplicate Content/Missing Meta Description | Pages DO NOT EXIST!
-
Hello all,
For the last few months, Moz has been showing us that our site has roughly 2,000 duplicate content errors. Pages that were actually duplicate content, I took care of accordingly using best practice (301 redirects, canonicalization,etc.). Still remaining after these fixes were errors showing for pages that we have never created.
Our homepage is www.primepay.com. An example of a page being flagged as duplicate content is http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/payroll/online-payroll, with a referring page of http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/online-payroll. Some of these are now even showing up as 403 and 404 errors.
The only real pages on our site along that URL path are primepay.com/payroll and primepay.com/payroll/online-payroll, so I am not sure where Moz is getting these pages from.
Another duplicate content issue we are having is that Moz is showing old campaign URLs tacked onto our blog page, e.g. http://primepay.com/blog?title=&page=2&utm_source=blog&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1.
As of this morning, our duplicate content count went from 2,000 to 18,000. I exported all of our crawl diagnostics data and looked at the referring pages, and even those are not pages that we created. When you click on these links, they take you to a random point in time on our blog's homepage, some dating back to 2010.
I checked our crawl stats in both Google's and Bing's webmaster tools, and there are no duplicate content or 400-level errors being reported from their crawls. My team is truly at a loss trying to resolve this issue, and any help with this matter would be greatly appreciated.
-
Thanks Dirk. Very insightful tip about not using campaign tracking for internal links. There was an old blog post whose anchor text carried campaign tracking, and it was causing many SEO issues. As for the latter part, we still don't know why a string of gibberish placed after /blog/ (and on our locations page) still returns a page. Our team's web developer is looking further into this issue. If anyone has more advice on the matter, it would be greatly appreciated.
-
Hey there
Dirk pretty much hit upon the issue, which I'll reiterate with a visual. If you enter any gibberish /blog URL (like this: http://primepay.com/blog/jglkjglkjg) in the browser, it returns a 200 OK, but it should return a 404 --> http://screencast.com/t/cStpPB5zE
Otherwise, pages that are really broken will look to crawlers like they are supposed to exist.
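To spell out the failure mode, here's a minimal sketch of the routing bug. The route logic and slug list are hypothetical, not PrimePay's actual CMS; the point is the catch-all 200 versus a real 404:

```python
# Hypothetical sketch: a catch-all /blog/ route answers 200 for ANY slug,
# which is exactly the "soft 404" behavior that misleads crawlers.
KNOWN_POSTS = {"7-ways-find-right-payroll-service-your-company"}

def buggy_status(path: str) -> int:
    # Anything under /blog/ renders the blog template, so even gibberish gets a 200.
    return 200 if path.startswith("/blog/") else 404

def fixed_status(path: str) -> int:
    # Only slugs that actually exist return 200; everything else is a real 404.
    slug = path[len("/blog/"):] if path.startswith("/blog/") else None
    return 200 if slug in KNOWN_POSTS else 404

print(buggy_status("/blog/jglkjglkjg"))  # 200: looks like a real page to crawlers
print(fixed_status("/blog/jglkjglkjg"))  # 404: what it should be
```

Until the CMS returns a real 404 for unknown slugs, every malformed link mints a "new" page that crawlers dutifully report.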
-
You shouldn't use campaign tracking to check internal links - you have to use event tracking. Check http://cutroni.com/blog/2010/03/30/tracking-internal-campaigns-with-google-analytics/ . Apart from the reporting issue, it's also generating a huge number of URLs that need to be crawled by Googlebot, which is just wasting its time (most of these tagged URLs have a correct canonical version). You mention these tags are old, but they are still present on a lot of pages.
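For what it's worth, those tagged URLs collapse to their canonical form just by dropping the utm_* parameters. Here's a rough sketch using Python's standard library (the function name is mine; in practice a rel=canonical tag or the analytics exclusion settings do this job):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def strip_tracking(url: str) -> str:
    # Drop utm_* campaign parameters, keep everything else (incl. blank values).
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept)))

tagged = ("http://primepay.com/blog?title=&page=2&utm_source=blog"
          "&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1")
print(strip_tracking(tagged))
# -> http://primepay.com/blog?title=&page=2&qt-blog_tabs=1
```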
For cases like this it's better to check with a local tool like Screaming Frog, which gives you a much better view of which pages are generating these links. The other issue you have is probably related to a few pages that have a badly formatted (relative) URL in a link. The way your site is configured, it just renders a page on your site anyway, so the bots then crawl your site over and over again, each time encountering the same bad relative link and each time adding the bad formatting to the URL. It's an endless loop - the best way to avoid it is to use absolute internal links rather than relative links. Not sure if it's the only one, but one of the pages with this error is http://primepay.com/blog/7-ways-find-right-payroll-service-your-company - it contains a link to
[Your payroll service is no different.]([Link to - http://www.primepay.com/en/payrollservices/] "Your payroll service is no different.")
This page should generate a 404 but is generating a 200, and the loop starts here.
Again, with Screaming Frog you can generate a crawl path report for each of these bad URLs, which shows you exactly on which page the error is generated.
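To make the loop concrete, here is a rough reconstruction with Python's standard library. The href string is my guess at the broken markup, based on the encoded URLs in the question; the mechanics of relative resolution are the real point:

```python
from urllib.parse import urljoin

# Guessed reconstruction of the malformed href: the markdown leaked into the
# link target, and "http:/" (single slash) has no scheme-plus-host, so a
# crawler resolves the whole string RELATIVE to the page it was found on.
bad_href = "[Link to - http:/www.primepay.com/en/payrollservices/"

url = "http://primepay.com/blog/7-ways-find-right-payroll-service-your-company"
for _ in range(3):
    url = urljoin(url, bad_href)
    print(url)
# Every resulting URL still answers 200 OK, so the crawler follows the same
# bad link again from the new address and the path grows without end, which
# is where the /payroll/payroll/payroll/... URLs in the Moz report come from.
```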
Hope this helps,
Dirk
-
Example:
http://primepay.com/blog/hgehergreg
My site as an example:
https://caseo.ca/blog/hgehergreg
If I put random gibberish in either URL, it should display a 404 page, not the blog page.
-
I'm getting you some help with direct advice on your problem, but I wanted to leave a comment about the tool itself. The Moz crawl tool only updates once a week, so if the last crawl ran before you did the work, your fixes won't be reflected yet. Here's more info.
-
Moz Tools
Chat with the community about the Moz tools.
-
SEO Tactics
Discuss the SEO process with fellow marketers
-
Community
Discuss industry events, jobs, and news!
-
Digital Marketing
Chat about tactics outside of SEO
-
Research & Trends
Dive into research and trends in the search industry.
-
Support
Connect on product support and feature requests.