We are still seeing duplicate content warnings in SEOmoz even though we have marked those pages "noindex, follow." Any ideas why?
-
We have many pages on our website that are set to "noindex, follow." However, SEOmoz is still flagging them as duplicate content. Why is that?
-
Hi Gary,
Great answer from Daniel.
One thing you can do is create a list of your noindexed pages in Excel, add all the pages identified by SEOmoz as duplicates, and run a simple comparison. This will identify any pages that do not match, so you can easily see which pages in the report can be ignored.
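The same comparison can be done outside Excel. Here is a minimal Python sketch, assuming you have exported both lists as plain URL lists (the example URLs are hypothetical, not from the original question):

```python
# Compare a list of deliberately noindexed URLs against the URLs
# a crawl report flags as duplicates; anything left over needs review.

def urls_needing_review(noindexed, flagged_duplicates):
    """Return flagged URLs that are NOT already noindexed."""
    return sorted(set(flagged_duplicates) - set(noindexed))

noindexed = [
    "https://example.com/tag/widgets",
    "https://example.com/print/page-1",
]
flagged = [
    "https://example.com/tag/widgets",       # already noindexed - safe to ignore
    "https://example.com/products?sort=asc", # not noindexed - investigate
]

print(urls_needing_review(noindexed, flagged))
# -> ['https://example.com/products?sort=asc']
```

A set difference scales better than eyeballing two Excel columns once the lists run into the thousands of URLs.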
There is already a feature request with the SEOmoz engineering team that would let us "turn off" pages that can be ignored (like those that are already noindexed). In the meantime, keeping track of the pages you can ignore is probably the best option.
You can keep track of progress by following updates on the Feature Request here.
Hope that helps,
Sha
-
Go to Google and search site:yourdomain.com to see whether the pages in question come up. If they do, Google has indexed them; if not, it hasn't. Like SEOmoz, Google can crawl any page, but crawling a page doesn't mean it will be indexed. If you have noindexed a page, Google should not index it, and it should not be a problem for you.
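Before concluding the tag is being ignored, it is worth verifying that the robots meta tag is actually present in the HTML being served. A minimal check using only the standard library, assuming you can fetch the page source yourself (the sample markup is illustrative):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the content of any <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

def is_noindexed(html):
    """True if the page carries a noindex directive in a robots meta tag."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))  # True
```

CMS templates sometimes drop the tag on only some URL variants, so checking the served HTML directly rules out that class of problem.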
-
So the report includes issues on pages that Google does see and pages it doesn't. How do we differentiate between the two?
Additionally, what would you suggest we do?
-
SEOmoz is not a search engine index; it uses a crawler. If those pages are not blocked by your robots.txt file, SEOmoz will crawl them. It ignores the noindex tag because it doesn't index anything. Search engines, on the other hand, will honor a noindex directive in the robots meta tag and keep the page out of their index. To remove pages from the crawl itself, disallow them in robots.txt.
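The crawl-versus-index distinction above can be tested locally with Python's standard urllib.robotparser, which implements the same robots.txt rules well-behaved crawlers follow. A short sketch, with an illustrative robots.txt rather than one from the original question:

```python
from urllib import robotparser

# A sample robots.txt that blocks crawling of the duplicate sections.
# The paths here are hypothetical examples.
robots_txt = """\
User-agent: *
Disallow: /print/
Disallow: /tag/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Pages under a disallowed path will not be crawled at all, so they
# disappear from crawl-based reports; a noindex meta tag alone cannot
# do that, because the crawler must still fetch the page to see it.
print(rp.can_fetch("*", "https://example.com/print/page-1"))  # False
print(rp.can_fetch("*", "https://example.com/about"))         # True
```

Note the trade-off: a page disallowed in robots.txt can't have its noindex tag read by search engines, so choose one mechanism deliberately rather than stacking both.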