We are still seeing duplicate content warnings in SEOmoz even though we have marked those pages as "noindex, follow." Any ideas why?
-
We have many pages on our website that have been set to "noindex, follow." However, SEOmoz is still indexing them as duplicate content. Why is that?
-
Hi Gary,
Great answer from Daniel.
One thing that you can do is to create a list of your noindexed pages in Excel, then add all of the pages SEOmoz identifies as duplicates and run a simple comparison in Excel (or script it, as sketched below). This will identify any pages that do not match, so you can easily see whether the new pages in the report can be ignored.
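If the lists are long, a few lines of Python will do the same comparison. This is just a minimal sketch, assuming two plain-text exports with one URL per line; the file names are hypothetical placeholders:

```python
# Compare a list of noindexed URLs against the URLs Moz flagged as duplicates.
# File names below are hypothetical; point them at your own exports.

with open("noindexed_pages.txt") as f:
    noindexed = {line.strip() for line in f if line.strip()}

with open("moz_duplicates.txt") as f:
    flagged = {line.strip() for line in f if line.strip()}

# Anything Moz flagged that is NOT already noindexed deserves a closer look;
# the rest can safely be ignored.
for url in sorted(flagged - noindexed):
    print(url)
```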
There is already a feature request in the works with the SEOmoz engineering team which will enable us to "turn off" pages that can be ignored (like those that are already noindexed). In the meantime, keeping track of the pages you can ignore is probably the best option.
You can keep track of progress by following updates on the Feature Request here.
Hope that helps,
Sha
-
Go to Google and search site:yourdomain.com to see if the pages in question come up. If they do, Google has indexed them; if not, it hasn't. Like SEOmoz, Google can crawl any page it can reach, but that doesn't mean it will index the page. If you have noindexed a page, Google should not index it, and it should not be problematic for you.
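If you also want to double-check that a page is actually serving the noindex directive in the first place, a quick script can confirm it. A rough sketch, assuming the `requests` and `beautifulsoup4` packages are installed and using a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

def has_noindex(url: str) -> bool:
    """Return True if the page's robots meta tag contains 'noindex'."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "robots"})
    return bool(tag and "noindex" in tag.get("content", "").lower())

# Placeholder URL; swap in one of the pages in question.
print(has_noindex("https://www.example.com/some-page"))
```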
-
So SEOmoz reports issues on pages that Google does see and on pages that it doesn't. How do we differentiate between the two?
Additionally, what would you suggest we do?
-
SEOmoz is not a search engine index; it just uses a crawler. If those pages are not blocked by your robots.txt file, SEOmoz will crawl them. It ignores the noindex tag because it doesn't index anything. Search engines, on the other hand, will honor a noindex directive specified in the robots meta tag and keep the page out of their index. However, to remove pages from the crawl itself, disallow them in robots.txt.
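To make the distinction concrete, here is roughly what each directive looks like (the path is a hypothetical placeholder). A robots meta tag lets crawlers fetch the page but tells search engines not to index it:

```html
<!-- In the <head>: crawlers may fetch the page, but search engines won't index it -->
<meta name="robots" content="noindex, follow">
```

Whereas a robots.txt rule stops compliant crawlers, including Moz's, from fetching the pages at all:

```
# robots.txt at the site root; the path below is a placeholder
User-agent: *
Disallow: /duplicate-pages/
```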
Related Questions
-
Pages with Duplicate Content
When I crawl my site through Moz, it shows lots of Pages with Duplicate Content. The thing is, all of those pages are pagination pages. How should I solve this issue?
Technical SEO | 100offdeal
-
Purchasing duplicate content
Morning all, I have a client who is planning to expand their product range (online dictionary sites) to new markets and is considering the acquisition of data sets from low-ranked competitors to supplement their own original data. These are quite large content sets, and acquiring them would mean a very high percentage of the site (hosted on a new subdomain) would be made up of duplicate content. Just to clarify, the competitor's content would stay online as well. I need to lay out the pros and cons of taking this approach so that they can move forward knowing the full facts. As I see it, this approach would mean forgoing rankings for most of the site and would need a heavy dose of original content, as well as supplementary on-page content built around the data. My main concern is that launching with this level of duplicate data would end up damaging the authority of the site and, subsequently, the overall domain. I'd love to hear your thoughts!
Technical SEO | BackPack85
-
Added 301 redirects, pages still earning duplicate content warning
We recently added a number of 301 redirects for duplicate content pages, but even with this addition they are still showing up as duplicate content. Am I missing something here? Or is this a duplicate content warning I should ignore?
Technical SEO | cglife
-
How to avoid duplicate content on internal search results page?
Hi, according to Webmaster Tools and Siteliner, our website has an above-average amount of duplicate content. Most of the affected pages are search results pages where only one result is found. The only differences in these cases are the title/description/keywords (TDK), the H1, and the breadcrumbs; the rest of the layout is fairly static and similar. Here is an example of two pages with "duplicate content": https://soundbetter.com/search/Globo https://soundbetter.com/search/Volvo Edit: These are legitimate searches that happen to return the same single result. In this case we want users to be able to find the audio engineers by "credits" (musicians they've worked with), i.e., tags. We want to rank for people searching for "engineers who worked with...", so searching for two different artists (credit tags) returns this one service provider at different URLs (the tag being the search parameter), hence the duplicate content. I guess every e-commerce/directory website faces this kind of issue. What is the best practice to avoid duplicate content on search results pages?
Technical SEO | ShaqD
-
Duplicate Page Content for sorted archives?
Experienced backend dev, but SEO newbie here 🙂 When SEOmoz crawls my site, I get notified of Duplicate Page Content errors on some sorted list/archive pages (which append ?sort=X to the URL). The pages all have rel=canonical pointing to the archive home. Some of the pages are shorter (they have only one or two entries). Is there a way to resolve this error? Perhaps add rel=nofollow to the sorting menu? Or perhaps use a non-link navigation method to sort/switch the sorted pages? No issues with duplicate content are showing up in Google Webmaster Tools. Thanks for your help!
Technical SEO | jwondrusch
-
Duplicate Page Content
Hi, within my campaigns I get a "crawl errors found" notice that says duplicate page content was found; it finds the same content on the home page URLs below. Are these seen as two different pages? And how can I correct these errors, as they are really just one page? http://poolstar.net/ http://poolstar.net/Home_Page.php
Technical SEO | RouteAccounts
-
The Bible and Duplicate Content
We have our complete set of scriptures online, including the Bible, at http://lds.org/scriptures. Users can browse to any of the volumes of scriptures. We've improved the user experience by allowing users to link to specific verses in context, which will scroll to and highlight the linked verse. However, this creates a significant amount of duplicate content. For example, these links: http://lds.org/scriptures/nt/james/1.5 http://lds.org/scriptures/nt/james/1.5-10 http://lds.org/scriptures/nt/james/1 All of those link to the same chapter in the book of James, yet the first two will highlight verse 5 and verses 5-10 respectively. This is a good user experience, because in other sections of our site and on blogs throughout the world webmasters link to specific verses so the reader can see the verse in the context of the rest of the chapter. Another Bible site has a separate HTML page for each individual verse and tends to outrank us for long-tail chapter/verse queries because of this (and possibly some other reasons). However, our tests indicated that users prefer the current version. We have a sitemap ready to publish which includes a URL for every chapter/verse. We hope this will improve indexing of some of the more popular verses. However, Googlebot is going to see some duplicate content as it crawls that sitemap! So the question is: is the sitemap a good idea, given that we can't switch to putting each chapter/verse on its own unique page? We are also going to recommend that we create unique titles for each of the verses and pass a portion of the verse text into the meta description. Will this perhaps be enough to satisfy Googlebot that the pages are in fact unique? They certainly are from a user perspective. Thanks all for taking the time!
Technical SEO | LDS-SEO
-
Duplicate Content
We have a main sales page, and then we have a country-specific sales page for about 250 countries. The country-specific pages are identical to the main sales page, with the small addition of a country flag and the country name in the H1. I have added a rel=canonical tag to all country pages to send the link juice and authority to the main page, because they would all be competing for rankings. I was wondering if having the 250+ indexed pages of duplicate content will affect the ranking of the main page even though they have the rel=canonical tag. We get some traffic to the country pages, though not as much as to the main page, but I'm worried that if we remove those pages and redirect them all to the main page, we will lose 250-plus indexed pages that bring in traffic for odd country-specific terms. E.g., searching for "uk mobile phone" brings up the country-specific page instead of the main sales page, even though the UK sales page is not optimized for UK terms other than having a flag and the country name in the H1. Any advice?
Technical SEO | -Al-