Duplicate Page and Title Issues
-
On the last crawl, we received errors for duplicate page titles and some duplicate content pages.
Here is the issue:
We went through our page titles that were marked as duplicate and changed them to make sure their titles were different. However, we just received a new crawl this week and it is saying there are even more duplicate page title errors detected than before. We're wondering if this is a problem with just us or if it has been happening to other Moz users.
As for the duplicate content pages, what is the best way to approach this and see what content is being looked at as a "duplicate" set?
-
I am being told I have hundreds of duplicate titles and content on my site. That's not possible. Not only do I have only 120-ish posts on my site, all with different titles, but Google Search Console shows me that no one on the web is claiming my posts as theirs (i.e., nobody is saying they are the originals). So what the heck is the issue? Why am I being told that posts that are NOT duplicates of anything are duplicate content, and that I have a bunch of duplicate titles when NONE OF MY titles duplicates any other title?
And, how can I have 400 pieces of duplicate content when I only have 120 blog posts??
Please someone help me with this
-
Always glad to help!
-
Thank you for your help!
-
Hi there!
I took a look at your campaign, and it seems that we crawled a lot more pages of the site this week in general than we did the previous week. Two weeks ago, we crawled about 500 pages, and this week the crawl jumped up to 2,800. So it isn't that there are more duplicate pages on your site now; we are simply able to crawl more pages than we were before, which led to us finding duplicate pages that had not been reported previously.
As for why we are crawling so many more pages, there could be a few reasons. If you made changes to the links on your site or to the page hierarchy, or if you removed noindex tags or updated the robots.txt file for the site, any of those things could affect how we are able to crawl the site.
I hope this helps! Please let me know if I can help you with anything else.
Chiaryn
Help Team Sensei
-
Hey There!
I responded to your support request with more information on how we view canonicals. I hope it was helpful!
If you can send me examples of pages that you believe are being counted incorrectly, that will help me determine if there is an issue with the crawl or if the pages correctly fall into how we determine duplicates based on the canonical tags.
I look forward to hearing back from you there soon!
Chiaryn
Help Team Sensei
-
They have different purposes, but they all relate to preventing the same page from showing up as duplicate content because of the different ways a URL can be written. There is nothing on this page to show that using a canonical tag can take two different pages and remove the similar content. The purpose of the canonical tag is to set the preferred URL for one page.
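For reference, the canonical tag itself is just one line in the page's head; a minimal sketch (the URL here is a made-up example):

```html
<!-- Placed in the <head> of every URL variation of the same page -->
<link rel="canonical" href="https://www.example.com/widgets/" />
```

Every variation (with tracking parameters, trailing slash differences, etc.) points at the one preferred URL, which is exactly the "set the preferred URL for one page" behavior described above.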
-
The problem is, our content is unique to each page and post, yet it still keeps coming up as duplicated. I'll have to try Netrepid's Google search suggestion to double-check these.
-
I would remove the domain name when you use the query.
"moz keyword research in the united states best practice" gets a much different SERP than "keyword research in the united states best practice".
If you are trying to get a pure SERP result, then you shouldn't use your domain name. That will tell you if there are any other matching search results on the web. If you want to find duplicate content on your own site, use copyscape.com or go to GWT and look for internal duplicate content.
Again, copied text isn't the only thing that creates a duplicate content message. A skewed HTML-to-text ratio, or repetitive links in the HTML without enough copy to balance them out, can trigger it too. If Moz is reporting a duplicate content error, and the number is increasing week to week, I wouldn't discredit the finding simply because you don't understand why the error is occurring. The canonical tag won't prevent two different URLs from showing duplicate content. If you want to do that, nofollow one of the URLs. That isn't best practice, though; best practice is to fix the copy.
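To make the HTML-to-text ratio idea concrete, here is a rough sketch of how you might measure it yourself (the sample page and any notion of a "bad" threshold are illustrations, not Moz's actual rules):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the visible text nodes from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def text_to_html_ratio(html):
    """Return len(visible text) / len(raw HTML) as a rough thinness signal."""
    if not html:
        return 0.0
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.chunks).strip()
    return len(text) / len(html)

# A markup-heavy page with almost no copy scores very low
page = "<html><body><nav><a href='/a'>A</a><a href='/b'>B</a></nav><p>Hi</p></body></html>"
print(f"text/html ratio: {text_to_html_ratio(page):.2f}")
```

Two pages that are mostly shared navigation markup with only a sentence or two of unique copy will both score low, which is one way otherwise "unique" pages end up looking alike to a crawler.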
-
Hey Monica... canonical tagging can be used for a lot more than just 'www' or non-www:
-
Yeah, the weird thing I am noticing is that the canonical tags are already present on this domain, and it's not picking up ALL of the pages with canonicals as duplicates.
I'm not really sure what's going on, but when in doubt, I always Google the following query:
site:domain intext:"block of text unique to that page"
If there is a duplicate content issue, Google should tell you that by showing more than one result. If it shows only one, then Google is reading your code right.
I know we all love Moz and want them to show we have no errors on our sites, but at the end of the day... don't we really want Google to find no issues with our sites, not Moz?
-
Canonical tags just point non-www URL addresses to www addresses. They tell the engines that whether or not the www is used, the two URLs are the same page. That will only solve the duplicate content errors if that is in fact what is causing them. If the actual cause is duplicated content, the only way to solve it is to write unique copy.
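For the www/non-www case specifically, the redirect side is usually handled in .htaccess; a common sketch on Apache (example.com is a placeholder for your domain):

```apache
# Permanently (301) redirect www.example.com/* to example.com/*
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
```

With the redirect in place, crawlers only ever reach one hostname, so the two variants can't be counted as separate duplicate pages.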
-
Thank you for that information!
-
No, not Magento. We were debating the use of canonical tags ourselves.
-
I'm noticing a similar issue for an online store I consult for. We added canonical tags to all product pages and category pages, and Moz doesn't appear to be correctly attributing them.
Are you using a Magento store by chance?
-
Depending on when you made the changes, it could just be that they weren't made in time for the next crawl. Fixing these duplicate titles is really important for your SEO.
Open the medium-priority issues and filter to duplicate page titles only. When you do that, you will see which titles are duplicated. Sometimes there is more than one duplicate per title, so make sure you completely expand each line. I would go back into all of them and check whether what is showing as duplicate on the crawl report still matches the current information in your title tags. If it matches, then those pages need to be changed. If the titles are different, then wait another week and see if the timing was just off somehow.
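If you export the crawl report, the same check can be done offline; a sketch that groups URLs by title (the URL/title pairs are invented, and the exact export columns may differ):

```python
from collections import defaultdict

def find_duplicate_titles(pages):
    """Group URLs by <title> text and return only titles used more than once."""
    by_title = defaultdict(list)
    for url, title in pages:
        # Normalize whitespace and case so trivially different titles still match
        by_title[" ".join(title.split()).lower()].append(url)
    return {title: urls for title, urls in by_title.items() if len(urls) > 1}

pages = [
    ("/blog/post-1", "Widgets | Example"),
    ("/blog/post-1?utm=x", "Widgets | Example"),  # same page via tracking parameter
    ("/blog/post-2", "Gadgets | Example"),
]
print(find_duplicate_titles(pages))
```

This also surfaces the common case above: the "duplicates" are often one page reachable at several URLs rather than two genuinely different pages.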
If you have pages with duplicate content, there could be a few things triggering it. There could be too much of the same HTML and not enough text to make the pages look different, or the content on the pages could be very thin and very similar. The best way to offset duplicate page errors on your site is to get original, informative, unique content onto those pages. You can set your crawl report to high-priority errors, then select duplicate page content. You can then look at all of the duplicate pages side by side to determine whether you can get unique content onto those pages. If you are getting duplicate page errors for the same web page, one with www and one without, then check to make sure your rel=canonical tags are in place and functioning properly. If the pages are different, then you need to get great content up.
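To compare suspect pages side by side programmatically, a quick similarity check is one option (the sample sentences and the 0.9 cutoff are arbitrary illustrations, not Moz's actual detection logic):

```python
from difflib import SequenceMatcher

def similarity(text_a, text_b):
    """Return a 0..1 similarity ratio between two blocks of page text."""
    return SequenceMatcher(None, text_a, text_b).ratio()

page_a = "Our blue widget is durable, waterproof, and ships free in the US."
page_b = "Our red widget is durable, waterproof, and ships free in the US."

score = similarity(page_a, page_b)
print(f"similarity: {score:.2f}")
if score > 0.9:
    print("near-identical copy; likely to be flagged as duplicate content")
```

Product pages that differ only by a color or size word, as in the example, score very high and are classic duplicate-content candidates.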
-
Hi there,
See here - I think Moz Analytics/PRO don't process rel=prev/next properly, so they may give false alarms on those pages, even if the titles are properly implemented.
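For context, rel=prev/next markup on a paginated series looks like this (URLs are hypothetical):

```html
<!-- In the <head> of page 2 of a paginated archive -->
<link rel="prev" href="https://www.example.com/blog/page/1/" />
<link rel="next" href="https://www.example.com/blog/page/3/" />
```

If a crawler ignores these hints, each page in the series can look like a near-duplicate of its neighbors even when the markup is correct.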
Cheers