Duplicate Content/Missing Meta Description | Pages DO NOT EXIST!
-
Hello all,
For the last few months, Moz has been showing us that our site has roughly 2,000 duplicate content errors. The pages that actually were duplicate content I took care of using best practices (301 redirects, canonicalization, etc.). Still remaining after these fixes are errors for pages that we have never created.
Our homepage is www.primepay.com. An example of a page being flagged as duplicate content is http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/payroll/online-payroll with a referring page of http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/online-payroll. Some of these are now even showing up as 403 and 404 errors.
The only real pages on our site within that URL path are primepay.com/payroll and primepay.com/payroll/online-payroll. Therefore, I am not sure where Moz is getting these pages from.
Another issue we are having in relation to duplicate content is that Moz is showing old campaign URLs tacked on to our blog page, e.g. http://primepay.com/blog?title=&page=2&utm_source=blog&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1.
As of this morning, our duplicate content errors went from 2,000 to 18,000. I exported all of our crawl diagnostics data and looked at what the referring pages were, and even those are not pages that we created. When you click on these links, they take you to a random point in time on our blog homepage, some dating back to 2010.
I checked our crawl stats in both Google's and Bing's webmaster tools, and there are no duplicate content or 400-level errors being reported from their crawls. My team is truly at a loss trying to resolve this issue, and any help with this matter would be greatly appreciated.
-
Thanks Dirk. Very insightful tip about not using campaign tracking on internal links. There was an old blog post with anchor text that carried campaign tracking, and it was causing many SEO issues. As for the latter part, it is still unknown why a string of gibberish can be placed after /blog/ (and on our locations page) and still return a page. Our team's web developer is looking further into this issue. If anyone has any more advice on the matter, it would be greatly appreciated.
-
Hey there
Dirk pretty much hit upon the issue, which I'll reiterate with a visual. If you enter any gibberish /blog URL (like this: http://primepay.com/blog/jglkjglkjg) in the browser, it returns a 200 OK, but it should return a 404 code --> http://screencast.com/t/cStpPB5zE
Otherwise pages that are really broken will look to crawlers like they are supposed to exist.
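If you want to verify this without a browser, here is a minimal sketch (assuming Python with the `requests` library, and using one real page plus one gibberish slug as example URLs) that prints the raw status code the server returns:

```python
import requests

# One real page and one hypothetical gibberish slug that ought to 404.
urls = [
    "http://primepay.com/payroll/online-payroll",
    "http://primepay.com/blog/jglkjglkjg",
]

for url in urls:
    # allow_redirects=False shows the first status code, not the final target's.
    response = requests.get(url, allow_redirects=False, timeout=10)
    print(response.status_code, url)

# If the gibberish URL prints 200 instead of 404, crawlers will treat it as a
# real page and flag it as duplicate content of the blog index.
```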
-
You shouldn't use campaign tracking for internal links - you have to use event tracking. Check http://cutroni.com/blog/2010/03/30/tracking-internal-campaigns-with-google-analytics/. Apart from the reporting issue, it also generates a huge number of URLs that need to be crawled by Googlebot, which is just wasting its time (most of these tagged URLs have a correct canonical version). You mention these tags are old - but they are still present on a lot of pages.
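As a quick local check (a rough sketch, assuming Python with `requests` and `beautifulsoup4` installed, and using the blog URL from the question as a hypothetical starting page), you can list the internal links that still carry campaign parameters:

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs

# Hypothetical starting page; in practice you would loop over a full list of pages.
page_url = "http://www.primepay.com/blog"

html = requests.get(page_url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a", href=True):
    query = parse_qs(urlparse(a["href"]).query)
    # Any utm_* parameter on a link within the site means campaign tracking
    # is being applied to internal navigation.
    if any(key.startswith("utm_") for key in query):
        print("Campaign-tagged link on", page_url, "->", a["href"])
```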
For cases like this it's better to check with a local tool like Screaming Frog, which gives you a much better view of which pages are generating these links.
The other issue you have is probably related to a few pages that have a badly formatted (relative) URL in a link. The way your site is configured, it just renders a page on your site for those URLs, so the bots then crawl your site over and over again, each time encountering the same bad relative link and each time appending the bad formatting to the URL. It's an endless loop - the best way to avoid this is to use absolute internal links rather than relative links. Not sure if it's the only one, but one of the pages with this error is http://primepay.com/blog/7-ways-find-right-payroll-service-your-company - it contains a link to
[Your payroll service is no different.]([Link to - http://www.primepay.com/en/payrollservices/] "Your payroll service is no different.")
This URL should generate a 404 but generates a 200, and the loop starts here.
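To make the loop concrete, here is a small illustration (Python standard library only, with a made-up malformed href modelled on the broken markup quoted above) of how a crawler resolving that relative link against the page it found it on produces a new, longer bogus URL on every pass:

```python
from urllib.parse import urljoin

# Hypothetical malformed relative href, modelled on the broken markup above.
bad_href = "[Link to - http:/www.primepay.com/en/payrollservices/]"

url = "http://primepay.com/blog/7-ways-find-right-payroll-service-your-company"
for _ in range(3):
    # The crawler resolves the relative href against the current page URL...
    url = urljoin(url, bad_href)
    print(url)
    # ...and because the server answers 200 instead of 404, it crawls the
    # result, finds the same bad href again, and the URL keeps growing.
```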
Again, with Screaming Frog you can generate a crawl path report for each of these bad URLs, which shows you exactly on which page the error is generated.
Hope this helps,
Dirk
-
Example:
http://primepay.com/blog/hgehergreg
Status: 200 OK, when it should be a 404.
My site as an example:
https://caseo.ca/blog/hgehergreg
If I put random gibberish into a URL like this, it should display a 404 page and not the blog page.
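The fix has to happen server-side. As a hypothetical sketch only (the thread doesn't say what primepay.com is built on; this uses Python/Flask just to show the idea), the blog route should return a real 404 whenever the slug doesn't match a known post:

```python
from flask import Flask, abort

app = Flask(__name__)

# Hypothetical lookup; in a real CMS this would be a database or router check.
KNOWN_POSTS = {"7-ways-find-right-payroll-service-your-company"}

@app.route("/blog/<path:slug>")
def blog_post(slug):
    if slug not in KNOWN_POSTS:
        # Unknown slugs must return 404, not a 200 copy of the blog index,
        # otherwise crawlers index every gibberish URL as a duplicate page.
        abort(404)
    return f"Rendering post: {slug}"
```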
-
I'm getting you some help for direct advice on your problem, but I wanted to leave a comment about the tool itself. The Moz crawl tool only updates once a week, so if there hasn't been a crawl between when you did the work and now, the report won't reflect your fixes yet. Here's more info.
Related Questions
-
What other SEO tools (along with Moz) are you using at your company / SEO agency / marketing agency?
Hello folks. Let me put it this way: Moz is great, especially this community of like-minded people. But when it comes to the Moz tool's features, it has its own pros and cons, and that's what forces us to add other tools to our toolkit. So I just wanted to know what other SEO tools you use and why. Let me start by answering the question myself: I use SERPed.net as my other SEO tool suite, for these reasons: It offers 40+ SEO tools (a few are similar to the ones I get with Moz, but the ones I don't get in Moz are really good, for example Site Auditor Pro and Mobile Prospector). Along with showing data for their own metrics and algorithms, they pull data from SEMrush and Majestic SEO, so I don't need a subscription to those two tools and still get their data. A big money saver 🙂 Let's start this interesting thread 🙂
Moz Pro | w1t
-
Is the number of duplicate pages reduced after adding a canonical ref to the dupe versions?
Hi, Is the number of duplicate pages reported in a duplicate page content error report reduced on subsequent crawls if you have resolved the duplicate content problem by adding the canonical tag to the duplicate versions (referencing the original page), like it would be if you were solving the problem via a 301 redirect (I think/presume)? Cheers, Dan
Moz Pro | Dan-Lawrence
-
Why do I see duplicate content errors when the rel="canonical" tag is present
I was reviewing my first Moz crawler report and noticed the crawler returned a bunch of duplicate page content errors. The recommendations to correct this issue are to either put a 301 redirect on the duplicate URL or use the rel="canonical" tag so Google knows which URL I view as the most important and the one that should appear in the search results. However, after poking around the source code I noticed all of the pages that are returning duplicate content in the eyes of the Moz crawler already have the rel="canonical" tag. Does the Moz crawler simply not catch whether that tag is being used? If I have that tag in place, is there anything else I need to do in order to get that error to stop showing up in the Moz crawler report?
Moz Pro | shinolamoz
-
SEOmoz legacy pages?
Hello, I am finding that I miss several of the old SEOmoz sections, the legacy tools in particular, like the visual website comparison. Where is that now? Also, where is the ongoing list of the top 100 sites? So much was lost in the shift to Moz; I hope some of the good old stuff is still available. Thank you, Nolan
Moz Pro | QuietProgress
-
SEOmoz duplicate rel="next" pages
Hello, my pages have the rel="next" tag in place, although the SEOmoz crawl says that these pages have duplicate titles. If my blog has 25 pages, I have, according to SEOmoz, 25 duplicate titles. Can someone tell me if this is correct, or if the SEOmoz crawl cannot recognize rel="next", or if there is a better way to tell Google when there are pages generated from the blog that have the same title? Should I ignore these SEOmoz errors? Thank you,
Moz Pro | maestrosonrisas
-
How long will it take for Page Rank (or Page Authority) to flow via a 301 redirect?
I've recently redeveloped a static site using WordPress and have created 301 redirects from the original URLs to the new URLs. I know I won't get all the value passed via the 301, but I'm hoping some will. Any idea how long this may take? It's been nearly a month since the changeover, so I'm wondering if it will be weeks, months, or more.
Moz Pro | annomd
-
Finding the source of duplicate content URLs
We have a website that displays a number of products. The products have variations (sizes), and unfortunately every size has its own URL (for now, anyway). Needless to say, this causes duplicate content issues (and of course we are looking to change the URLs for our site as soon as possible). However, even though these duplicate URLs exist, you should not be able to land on them by navigating through the site; in theory, the site should always display the link to the smallest size. It seems that there is a flaw in our system somewhere, as these links are now being found in our campaign here on SEOmoz. My question: is there any way to find the crawl path that led to the URLs that shouldn't have been found, so we can locate the problem?
Moz Pro | DocdataCommerce
-
In my crawl diagnostics, there are links to duplicate content. How can I track down where these links originated?
How can I find out how SEOmoz found these links to begin with? That would help fix the issue. Where is the source page where the link was first encountered listed?
Moz Pro | kirklandsl