404 Page/Content Duplicates and Their "Warnings"
-
My website has many duplicate page and duplicate content issues, both stemming from the many 404 pages on the site. SEOmoz flags these as "Warnings" - should this be a concern for SEO effectiveness?
-
Hi Darren,
Sorry, but I'm a bit confused. Technically, both duplicate content and 4xx errors (404s) qualify as "errors" rather than warnings.
An error is usually considered something that could harm your SEO. For large sites, a few errors won't hurt you much and are to be expected, but these are definitely something you want to address.
Feel free to let us know if you have any questions.
-
Do you mean you have many different pages for 404 errors, as opposed to having many pages that are returning 404?
If you have specialized 404 pages, for whatever reason, you should probably noindex them. That way Google isn't trying to index a page that's just there to help your users find your content, and you're not getting dinged as a site that produces duplicate content.
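For what it's worth, the noindex suggested above is a one-line addition to the custom 404 template (a minimal sketch; the important companion detail is that the server should still return an actual 404 status code for missing URLs, not a 200):

```html
<!-- In the <head> of each custom 404 template: keep it out of the index -->
<meta name="robots" content="noindex">
```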
-
Duplicate content is always a problem if the wrong page is being served up for a search term (e.g., a 404 page instead of an active page). It's bad for bounce rate and conversions, and search engines eventually drop 404 pages from their index.
So, as far as SEO effectiveness goes, if people aren't getting served the content you want them to, yes, it's a problem.
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing.

We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate or expect top-level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that (head) area. The intent of the site structure/taxonomy is to make it easier for bots/crawlers to get deeper into our content silos. We've built the pages for humans but included the link structure/taxonomy to assist crawlers.

So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. Our most popular product categories might contain hundreds of thousands of products. My top-level hub page for a category looks like www.mysite/categoryA; the page shows 50 products and then pagination from 1 to 1,000+. Currently we're using rel=next for pagination, and pages like www.mysite/categoryA?page=6 reference themselves as canonical (not the first/top page, www.mysite/categoryA). Our goal is deep crawling/indexation of our silo.

I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with both tools it looks like crawlers have gotten a bit bogged down in large categories with lots of pagination. For example, rather than crawling multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category. I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories, and I can't find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, since I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top-level categories? Thanks!
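For concreteness, the self-referencing canonical plus rel=next setup described in the question would look something like this in the head of a middle pagination page (URLs are the question's own placeholders):

```html
<!-- On www.mysite/categoryA?page=6: self-canonical with prev/next hints -->
<link rel="canonical" href="http://www.mysite/categoryA?page=6">
<link rel="prev" href="http://www.mysite/categoryA?page=5">
<link rel="next" href="http://www.mysite/categoryA?page=7">
```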
Moz Pro | DrewProZ
Duplicate content report - question on best practice
Hello all, New to Moz Pro and SEO - so lots to get my head round! I'm working through the Duplicate Content section of the Crawl report and am not sure what the best practice is for my situation.

Background: We are a reference guide for luxury hotels around the world, but the hotels featured on the site vary year on year. When we add a new hotel page, it sets up the URL as ourwebsite.com/continent/country/regionORcity/hotel. When hotels come off, I redirect their URL to the country or region where we have other hotels.

Example: http://www.johansens.com/europe/switzerland/zermatt/ - the hotel in Zermatt has come off the site, showing 0 results on this landing page.

Question: My duplicate content report is showing a number of these regional pages displaying the copy "0 places - Region" because the hotel has come off but the landing page is still live. Should I redirect the regional page back to the main country page? And then, if I add a new hotel from that region in the future, simply remove the redirect? Should I also delete the page? Any tips would be much appreciated!
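Assuming an Apache server (an assumption - the question doesn't say what the site runs on), the reversible version of the redirect being described could be sketched in .htaccess like this:

```apache
# Retire the empty Zermatt page by sending visitors to the country page.
# A 302 is easy to remove again if a hotel from this region returns;
# use a 301 only if the removal is permanent.
Redirect 302 /europe/switzerland/zermatt/ /europe/switzerland/
```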
Moz Pro | CN_Johansens
Duplicate Content errors - not going away with canonical
I am getting Duplicate Content Errors reported by Moz on search result pages due to parameters. I went through the document on resolving Duplicate Content errors and implemented the canonical solution to resolve it. The canonical in the header has been in place for a few weeks now and Moz is still showing the pages as Duplicate Content despite the canonical reference. Is this a Moz bug? http://mathematica-mpr.com/news/?facet={81C018ED-CEB9-477D-AFCC-1E6989A1D6CF}
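For reference, the canonical tag being described would look something like this in the head of each parameterized result page (the target URL is assumed from the example given):

```html
<!-- On /news/?facet=... pages: point crawlers at the unparameterized URL -->
<link rel="canonical" href="http://mathematica-mpr.com/news/">
```

It is also worth double-checking that the tag appears in the raw HTML source rather than only after JavaScript runs, since crawlers may not execute scripts.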
Moz Pro | jpfleiderer
How can I correct this massive duplicate content problem?
I just updated a client's website, which resulted in about 6,000 duplicate page content errors. The way I set up my client's new website is: I created a subfolder called blog and installed WordPress in that folder. So when you go to suncoastlaw.com you're taken to an HTML website, but if you click on the blog link in the nav, you're taken to the blog subfolder. The problem I'm having is that the URLs seem to repeat themselves. So, for example, http://suncoastlaw.com/blog/aboutus.htm/aboutus.htm/aboutus.htm/aboutus.htm/ is somehow a legitimate URL and is being considered duplicate content of http://suncoastlaw.com/aboutus.htm/. This repeating URL only seems to be a problem when blog/ is in the URL. Any ideas as to how I can fix this?
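One plausible cause (an assumption, since the server config isn't shown) is that Apache is accepting extra trailing path segments (PATH_INFO) after .htm files, so /blog/aboutus.htm/anything still serves aboutus.htm. Two sketches of a fix in .htaccess:

```apache
# Option 1: stop Apache from serving a .htm file when extra
# path segments are appended after it
AcceptPathInfo Off

# Option 2: 301 the malformed URLs back to the real page,
# e.g. /blog/aboutus.htm/aboutus.htm/ -> /blog/aboutus.htm
RewriteEngine On
RewriteRule ^(.+\.html?)/.+$ /$1 [R=301,L]
```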
Moz Pro | ScottMcPherson
Duplicate page reported in Wordpress site, but I can't find it in All Pages list
In a crawl report, a duplicate page content warning has been displayed. The two URLs are:
1. http://www.superheroes.com.au/shop
2. http://www.superheroes.com.au/shop/category/catalog/
It's a WordPress site, and I cannot find the second page anywhere in the list of All Pages in the admin (I want to add canonicalisation code). When I view the 2nd page and click Edit Page, it redirects to the 1st page. Any ideas where SEOmoz would be finding this 2nd page or how it might be being generated? (BTW, I didn't build this site.) Thanks, Simon
Moz Pro | Andyfools
Domain.com and domain.com/index.html duplicate content in reports even with rewrite on
I have a site that was recently hit by the Google Penguin update and dropped a page back. When running the site through SEOmoz tools, I keep getting duplicate content in the reports for domain.com and domain.com/index.html, even though I have a 301 rewrite condition. When I test the site, domain.com/index.html redirects to domain.com for all directories and the root. I don't understand how my index page can still get flagged as duplicate content. I also have a redirect from domain.com to www.domain.com. Is there anything else I need to do or add to my .htaccess file? I'd appreciate any clarification on this.
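For comparison with the existing rule, a commonly used .htaccess pattern for this (a sketch, assuming Apache with mod_rewrite; the poster's actual rule may differ) is:

```apache
# 301 any request for .../index.html to the bare directory URL
RewriteEngine On
RewriteCond %{THE_REQUEST} \s/+(.*/)?index\.html[?\s]
RewriteRule ^(.*/)?index\.html$ /$1 [R=301,L]
```

Matching on THE_REQUEST ensures only direct client requests for index.html are redirected, not internal rewrites of / back to the index file (which would otherwise loop).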
Moz Pro | anthonytjm
Why would the SEOMoz Page analysis pick up exact keywords used in page title and text?
Hi, I am trying to optimise this URL: www.adaptiveconsultancy.com/ecommerce/features/advanced-ecommerce, with the keyword being 'advanced ecommerce'. The 'On-Page Report Card' in SEOmoz reports that the exact keyword isn't featured in the page title or text, but it is in there. Why would this not be picked up? Thank you in advance, M
Moz Pro | adaptiveconsultancy
Duplicate page title
I own a store, www.mzube.co.uk, and the scan always says that I have duplicate page titles or duplicate pages. What happens is that I may have, for example, www.mzube.co.uk/allproducts/page1, and if I have 20 pages, all that changes from page to page is the number at the end; the rest of the page name stays the same, but the pages really contain different products. So the scans think I have 20 identical pages, but I haven't. Is this a concern? I don't think I can avoid it. Hope you can answer.
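If the store's template can be edited, one common remedy (a sketch, not something the scan itself prescribes; the function name and title format here are made up for illustration) is to append the page number, so each paginated listing gets a distinct title:

```python
def paginated_title(base_title: str, page: int) -> str:
    """Build a unique <title> for each page of a paginated listing.

    Page 1 keeps the base title; later pages get a " - Page N" suffix,
    so 20 listing pages no longer share one identical title.
    """
    if page <= 1:
        return base_title
    return f"{base_title} - Page {page}"


# e.g. for www.mzube.co.uk/allproducts/page3:
print(paginated_title("All Products | mzube", 3))  # All Products | mzube - Page 3
```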
Moz Pro | mzube