Why am I getting all these duplicate pages?
-
This is happening with basically all my pages, but my website has 3 'duplicates' while the rest just have 2 (no index).
Why are these 3 variations counting as duplicate pages?
-
Actually, canonical tags are the absolute last-ditch way of dealing with this issue.
The correct solution is to use 301 redirects to force all versions of the URL except the primary one to redirect to the primary (also called canonical) URL. Canonical in this instance just means the primary or most authoritative version of something; it has nothing to do with the tags of the same name.
The only reason to use the rel=canonical tag for this is if you have absolutely no way to do it through 301 redirects (for instance, your host doesn't give you access to the .htaccess file and your DNS setup doesn't allow redirects either).
Use Travis's info below for exactly how to do this in .htaccess. There are also many other posts here in Q&A that address this if you want more reference points.
Paul
-
Your next question is: "Great, but how do I fix it?"
It looks like this particular detail was missed during server configuration. You would handle it with rewrites via .htaccess if you're using an Apache server. However, if you're unfamiliar with the file, proceed with caution, especially if you can't push and pull from a test environment for some reason. A little bit of whitespace or a syntax error can knock the site down until you find the mistake.
Otherwise, Ultimate Htaccess has just about everything you need to know. Here are the commands you will need. If you're using WordPress, make sure the redirects go before the section of your file that pertains to WordPress.
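A minimal sketch of what those rules typically look like (example.com is just a placeholder for your own domain, and this assumes the https://www version is the one you want to keep, so adjust the targets if your preferred version is different):

RewriteEngine On

# Anything arriving over plain HTTP goes to the https://www version in one hop
# (if the site sits behind a proxy or CDN, this HTTPS check may need to use
# X-Forwarded-Proto instead)
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

# Anything arriving without the www prefix also goes to the https://www version
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

Once it's in place, you can sanity-check it with something like curl -I http://example.com/ and confirm you get a single 301 response whose Location header points at the preferred https://www URL.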
-
Technically, all of these URLs are different. A web server could return completely different content for each of the URLs above. When Google "canonicalizes" a URL, it picks the URL that seems like the best representative from that set.
Check this link: http://moz.com/learn/seo/duplicate-content
-
Hello W2GITeam and welcome to the world of SEO!
The problem you've described is covered by basic/fundamental SEO concepts. The specific concept that will help you turn those 3 pages into a single, non-duplicate, indexed page is the canonical tag.
Learn more about canonical tags and how they keep duplicate content from being indexed here: http://moz.com/learn/seo/canonicalization
That's plenty to get you started.
Related Questions
-
No page authority for https and www versions
In Moz's Open Site Explorer, I'm getting a page authority of 1 for https://productcategory and www./productcategory, but for the non-www version I have a page authority of 33. The DA for each link is 25. Google is indexing the https:// version of my site (the one with the page authority of 1). Could this be why I'm getting outranked by competitors who have no content on their page, no links, and a lower domain and page authority than me? Does this indicate some kind of issue with my redirects or settings in Google Search Console? I would love to know how to fix this if it could be causing issues. Thanks
-
Pages with Temporary Redirects on pages that don't exist!
Hi there. Another question that's probably obvious to some, I hope. I ran my first report using the Moz crawler, and a bunch of pages with temporary redirects are showing up as a medium-level issue. The trouble is that the pages don't exist, so they are being redirected to my custom 404 page. For example, I have a URL in the report being called up from lord only knows where: www.domain.com/pdf/home.aspx. This doesn't exist; I have only one home.aspx page and it's in the root directory. It does give a temporary redirect to my 404 page, as I would expect, but that then leads to a Moz error as outlined. So basically you could make up any random URL and it would give this error, and I'm trying to work out how to deal with it before Google starts to notice or before a competitor starts throwing all kinds of these URLs at my site to generate the errors. Any steering on this would be much appreciated!
-
I need an interlinking report for my site. Is there a report in Moz or another application that tells me how all of my pages are linked to other pages on my site?
I am in the process of doing a redesign for one of my sites. I need an interlinking report for it. Is there a report in Moz or another application that tells me how all of my pages are linked to other pages on my site?
-
What's my best strategy for Duplicate Content if only www pages are indexed?
The Moz crawl report for my site shows duplicate content with both www and non-www pages on the site. (Only the www pages are indexed by Google, however.) Do I still need to use a 301 redirect, even if the non-www pages are not indexed? Is rel=canonical less preferable, as usual?

Facts:
- the site is built using ASP.NET
- the homepage has multiple versions which use 'meta refresh' tags to point to 'default.asp'
- most links already point to www

Current strategy:
- set the preferred domain to 'www' in Google's Webmaster Tools
- set the WordPress blog (which sits in a /blog subdirectory) with rel="canonical" pointing to the www version
- ask the programmer to add 301 redirects from the non-www pages to the www pages
- ask the programmer to use 301 redirects instead of meta refresh tags and point all homepage versions to www.site.org

Does this strategy make the most sense? (Especially considering the non-indexed but existent non-www pages.) Thanks!!
-
Moz crawl only shows 2 pages, but we have more than 1000 pages.
Hi guys, is there any way we can test the Moz crawler? It's showing only 2 pages crawled. We are running the website on HTTPS. Is HTTPS an issue for Moz?
-
Functionality of SEOmoz crawl page reports
I am trying to find a way to ask SEOmoz staff to answer this question because I think it is a functionality question, so I checked the SEOmoz Pro resources. I have also had no responses to it in the forum, so here it is again. Thanks much for your consideration!

Is it possible to configure the SEOmoz Rogerbot error-finding bot (the one that makes the crawl diagnostic reports) to obey the instructions in the individual page headers and the http://client.com/robots.txt file? For example, there is a page at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2007 that has <meta name="robots" content="noindex"> in the header. This themed Quote of the Day page is intentionally duplicated at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2004 and also at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, but they all have <meta name="robots" content="noindex"> in them, so Google should not see them as duplicates, right? Google does not in Webmaster Tools, so it should not be counted 3 times? But it seems to be. How do we generate a report of the actual pages shown in the report as duplicates so we can check? We do not believe Google sees it as a duplicate page, but Roger appears to.

Similarly, one can use http://truthbook.com/contemplative_prayer/ , where http://truthbook.com/robots.txt also tells Google to stay clear. Yet we are showing thousands of duplicate page content errors with the pages configured as described, while Google Webmaster Tools has shown only a few hundred. Anyone?

Jim
-
How do I find the corresponding duplicate content pages from my SEOmoz report?
Once I have run my report and the duplicate content pages come up, is there a way to find out which pages have the duplicate content on them? I have one URL but where can I find the duplicate content that corresponds to it? Thanks Barry
-
My campaigns are not analyzing all my pages.
Hi, I created a campaign against http://www.universalpr.com, and the campaign reports that only one page has been crawled. This site uses a JavaScript redirect to the real page, which can be found at the following URL: www.universalpr.com/wps/portal/universal/univhome/!ut/p/c5/04_SB8K8xLLM9MSSzPy8xBz9CP0os_hQdwtfCydDRwN_Jw9LA0-LAOPQYCdDI_9QY_1wkA6zeAMcwNFA388jPzdVPzi1WL8gO68cANNcdLU!/dl3/d3/L2dBISEvZ0FBIS9nQSEh/ Now I also attempted to create a campaign against this page, in case the JavaScript redirect was breaking things, but that campaign also reported 1 page crawled. Can anyone instruct me as to what I'm doing wrong? Thank you