Duplicate Content/Missing Meta Description | Pages DO NOT EXIST!
-
Hello all,
For the last few months, Moz has been showing us that our site has roughly 2,000 duplicate content errors. Pages that were actually duplicate content, I took care of accordingly using best practice (301 redirects, canonicalization,etc.). Still remaining after these fixes were errors showing for pages that we have never created.
Our homepage is www.primepay.com. An example of pages that are being shown as duplicate content is http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/payroll/online-payroll with a referring page of http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/online-payroll. Some of these are even now showing up as 403 and 404 errors.
The only real pages on our site within that URL path are primepay.com/payroll and primepay.com/payroll/online-payroll. Therefore, I am not sure where Moz is getting these pages from.
Another issue we are having in relation to duplicate content is that Moz is showing old campaign URLs tacked onto our blog page, e.g. http://primepay.com/blog?title=&page=2&utm_source=blog&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1.
As of this morning, our duplicate content went from 2,000 to 18,000. I exported all of our crawl diagnostics data and looked to see what the referring pages were, and even they are not pages that we have created. When you click on these links, they take you to a random point in time from the homepage of our blog; some dating back to 2010.
I checked our crawl stats in both Google's and Bing's Webmaster Tools, and no duplicate content or 400-level errors are being reported from their crawls. My team is truly at a loss with trying to resolve this issue and any help with this matter would be greatly appreciated.
-
Thanks Dirk. Very insightful tip about not using campaign tracking to check internal links. There was an old blog post that had anchor text with campaign tracking that was causing many SEO issues. As for the latter part, we still don't know why a string of gibberish can be placed after /blog/ (and likewise on our locations page) and still return a page. Our team's web developer is looking further into this issue. If anyone has any more advice on the matter it would be greatly appreciated.
-
Hey there
Dirk pretty much hit upon the issue, which I'll reiterate with a visual. If you enter any gibberish /blog URL (like this: http://primepay.com/blog/jglkjglkjg) in the browser, it returns a 200 OK, but it should return a 404 code --> http://screencast.com/t/cStpPB5zE
Otherwise pages that are really broken will look to crawlers like they are supposed to exist.
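A quick way to audit this in bulk is a small script that requests a nonsense path and checks the status code. This is a minimal sketch using only the Python standard library; the `fetch` parameter is an assumption added so the logic can be tested without hitting the network, and a real check would also want to follow redirects and spoof a normal user agent.

```python
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import HTTPError

def status_of(url: str) -> int:
    """Fetch a URL and return its HTTP status code (errors included)."""
    try:
        return urlopen(url).status
    except HTTPError as err:
        return err.code

def is_soft_404(base: str, gibberish: str = "jglkjglkjg", fetch=status_of) -> bool:
    """A nonsense path should 404; a 200 means the site serves 'soft 404s'
    and crawlers will treat every malformed URL as a real page."""
    return fetch(urljoin(base, gibberish)) == 200
```

Run `is_soft_404("http://primepay.com/blog/")` against the live site: if it returns True, the blog is serving soft 404s and that's what Moz's crawler keeps indexing.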
-
You shouldn't use campaign tracking to check internal links - you should use event tracking. Check http://cutroni.com/blog/2010/03/30/tracking-internal-campaigns-with-google-analytics/ . Apart from the reporting issue, it also generates a huge number of URLs that need to be crawled by Googlebot, which just wastes its time (most of these tagged URLs have a correct canonical version). You mention these tags are old - but they are still present on a lot of pages.
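To see why each tagged URL counts as a duplicate, it helps to look at what the canonical version of one of them is. A minimal sketch with the standard library, using the blog URL from the question above (the `utm_` prefix filter is an assumption; add any other tracking prefixes your campaigns use):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PREFIXES = ("utm_",)  # assumed set of campaign-tag prefixes

def canonical_url(url: str) -> str:
    """Return the URL with campaign-tracking query parameters stripped."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.startswith(TRACKING_PREFIXES)]
    return urlunsplit(parts._replace(query=urlencode(kept)))

tagged = ("http://primepay.com/blog"
          "?title=&page=2&utm_source=blog&utm_medium=blogCTA"
          "&utm_campaign=IRSblogpost&qt-blog_tabs=1")
print(canonical_url(tagged))  # the single page all the tagged variants duplicate
```

Every distinct combination of utm_ values is a separate URL for the crawler, but they all collapse to the same canonical page, which is exactly the duplicate-content pattern Moz is reporting.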
For cases like this it's better to check with a local tool like Screaming Frog, which gives you a much better view of which pages are generating these links. The other issue you have is probably related to a few pages that have a badly formatted (relative) URL in a link. The way your site is configured, it just renders a page anyway - so the bots then crawl your site over and over again, each time encountering the same bad relative link, and each time appending the bad fragment to the URL. It's an endless loop - the best way to avoid this is to use absolute internal links rather than relative links. Not sure if it's the only one, but one of the pages with this error is http://primepay.com/blog/7-ways-find-right-payroll-service-your-company - it contains a link to
[Your payroll service is no different.]([Link to - http://www.primepay.com/en/payrollservices/] "Your payroll service is no different.")
The page that link resolves to should generate a 404 but generates a 200, and the loop starts here.
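The compounding effect is easy to reproduce with `urljoin`, which is essentially what a crawler does when it resolves a relative href. A sketch of the loop described above; the `bad_href` string is modeled on the broken link markup quoted from the blog post, not copied from the site's actual HTML:

```python
from urllib.parse import urljoin

# Hypothetical malformed relative href, modeled on the broken markup above.
# It has no scheme the parser accepts, so it resolves as a relative path.
bad_href = "[Link to - http:/www.primepay.com/en/payrollservices/"
page = "http://primepay.com/blog/7-ways-find-right-payroll-service-your-company"

# Each crawl hop resolves the same bad link against the current (already
# broken) URL; because every result returns 200, the path grows forever.
hops = []
for _ in range(3):
    page = urljoin(page, bad_href)
    hops.append(page)
for h in hops:
    print(h)
```

After one hop you already get a /blog/%5BLink%20to... URL like the ones in the original question, and every further hop just appends the same fragment again - which is why the error count jumped from 2,000 to 18,000 overnight.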
Again - with Screaming Frog you can generate a crawl path report for each of these bad URLs, which shows you exactly on which page the error is generated.
Hope this helps,
Dirk
-
Example:
http://primepay.com/blog/hgehergreg
Status:
My site as an example:
https://caseo.ca/blog/hgehergreg
If I put random gibberish into that URL, it should display a 404 page and not the blog page.
-
Getting you some help with direct advice on your problem, but I wanted to leave a comment about the tool itself. The Moz crawl tool only updates once a week, so if the last crawl ran before you did the work, your fixes won't be reflected yet. Here's more info.