Functionality of SEOmoz crawl page reports
-
I am trying to get SEOmoz staff to answer this question because I think it is a functionality question, so I checked the SEOmoz Pro resources. I have also had no responses to it in the forum. So here it is again. Thanks much for your consideration!
Is it possible to configure the SEOmoz Rogerbot error-finding bot (which generates the crawl diagnostics reports) to obey the instructions in individual page headers and in the http://client.com/robots.txt file?
For example, there is a page at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2007 that has, in the header:
<meta name="robots" content="noindex">
This themed Quote of the Day page is intentionally duplicated at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2004
and also at
http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, but they all have <meta name="robots" content="noindex"> in them. So Google should not see them as duplicates, right? Google does not in Webmaster Tools.
So the page should not be counted three times? But it seems to be. How do we generate a report of the actual pages flagged as duplicates so we can check? We do not believe Google sees these as duplicate pages, but Roger appears to.
Similarly, take http://truthbook.com/contemplative_prayer/; here the http://truthbook.com/robots.txt file also tells Google to stay clear.
Yet we are showing thousands of duplicate page content errors, while Google Webmaster Tools, configured as described, shows only a few hundred.
Anyone?
Jim
-
Hi Jim,
Thanks for writing in with a great question.
In regard to the "noindex" meta tag: our crawler obeys that tag as soon as we find it in the code, but we also crawl any other source code up until we hit the tag, so pages with the "noindex" tag will still show up in the crawl. We just don't crawl any information past that tag. One of the notices we include is "Blocked by meta robots," and for the truthbook.com campaign we show over 2,000 pages under that notice.
For example, on the page http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, there are six lines of code, including the title, that we would crawl before hitting the "noindex" directive. Google's crawler is much more sophisticated than ours, so they are better at handling the meta robots "noindex" tag.
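If you want to verify the tag on your own pages, a meta-robots check is straightforward with Python's standard library. This is a minimal sketch; the sample HTML below is made up for illustration, not the actual truthbook.com markup:

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "robots":
                self.directives.append(d.get("content", "").lower())

def is_noindexed(html):
    parser = MetaRobotsParser()
    parser.feed(html)
    return any("noindex" in c for c in parser.directives)

sample = """<html><head>
<title>Quote of the Day</title>
<meta name="robots" content="noindex">
</head><body>...</body></html>"""

print(is_noindexed(sample))  # True
```

Running this against each URL's fetched source would tell you which pages carry the directive, independent of what any crawler reports.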
As for http://truthbook.com/contemplative_prayer/, we do respect the "*" wildcard directive in the robots.txt file, and we are not crawling that page. I checked your full CSV report and there is no record of us crawling any pages with /contemplative_prayer/ in the URL (http://screencast.com/t/hMFuQnc9v1S), so we are correctly respecting the Disallow directives in the robots.txt file.
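You can double-check a Disallow rule yourself with Python's standard-library robots.txt parser. The rules below are a guessed reconstruction of the relevant lines, not the actual truthbook.com file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt resembling the rules discussed above
robots_txt = """User-agent: *
Disallow: /contemplative_prayer/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("rogerbot", "http://truthbook.com/contemplative_prayer/"))  # False
print(rp.can_fetch("rogerbot", "http://truthbook.com/quotes/index.cfm"))       # True
```

Because "rogerbot" has no rule group of its own here, it falls under the "*" wildcard group, so the disallowed path returns False.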
Also, if you would ever like to reach out to the Help Team directly in the future, you can email us from the Help Hub here: http://www.seomoz.org/help, but we are happy to answer questions in the Q&A forum, as well.
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Related Questions
-
How to make a Crawl Report readable?
Hi! I am trying to find out how to make my CSV report neat so I can interpret it. I now have a CSV report with just numbers and text all in one column. I tried the Text to Columns button, but that doesn't work: when I apply it to column A, it overwrites column B, which has the same problem! Thanks
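One possible cause of the single-column symptom is a locale mismatch: Excel in many European locales expects ";" as the list separator, so a comma-delimited export lands entirely in column A. A workaround is to re-save the file with the delimiter your Excel expects; a sketch with an invented two-line export:

```python
import csv
import io

# Hypothetical comma-delimited export (column names are made up)
raw = "URL,Status,Title\nhttp://example.com/,200,Home\n"

rows = list(csv.reader(io.StringIO(raw)))        # parse on commas
out = io.StringIO()
csv.writer(out, delimiter=";").writerows(rows)   # re-save with ";"

print(out.getvalue().splitlines()[0])  # URL;Status;Title
```

Opening the re-saved file should then split the fields into separate columns without Text to Columns.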
Moz Pro | HetCommunicatielokaal
-
Too many on-page links
I received a warning in my most recent report for too many on-page links on the following page: http://www.fateyes.com/blog/. I can't figure out why this would be: I am counting between 60 and 70, including all pull-downs, "read more" links, archive, category, and a few additional miscellaneous links. Any ideas or suggestions, or what I might do to rectify it? Perhaps it's just an SEOmoz report blip. We currently don't have the post list paginating to additional pages, so it's passively set up to be endless, but that's in the works.
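As a sanity check on the warning, you could count the anchor tags in the page source yourself. A minimal sketch (the sample markup is invented, not the fateyes.com page):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts <a> tags that carry an href attribute."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(k == "href" for k, _ in attrs):
            self.count += 1

page = """<html><body>
<a href="/blog/">Blog</a>
<a href="/about/">About</a>
<a name="anchor-only">no href, not counted</a>
</body></html>"""

counter = LinkCounter()
counter.feed(page)
print(counter.count)  # 2
```

Comparing that raw count against the report's figure would show whether the tool is counting links you aren't seeing (e.g. in navigation markup or footers).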
Moz Pro | gfiedel
-
I want to create a report of only the duplicate content pages as a CSV file so I can create a script to canonicalize them.
I want to create a report of only the duplicate content pages as a CSV file so I can create a script to canonicalize them. I would get something like: http://example.com/page1, http://example.com/page2, http://example.com/page3, http://example.com/page4. Right now I have to open each one under "Issue: Duplicate Page Content", and this takes a lot of time. The same goes for duplicate page titles.
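Until such a report exists, one workaround is to filter the full CSV export yourself. A sketch assuming made-up column names ("URL", "Issue"); the actual export headers may differ:

```python
import csv
import io

# Hypothetical crawl export; headers and values are assumptions
crawl_csv = """URL,Issue
http://example.com/page1,Duplicate Page Content
http://example.com/page2,Duplicate Page Content
http://example.com/page3,404
http://example.com/page4,Duplicate Page Title
"""

reader = csv.DictReader(io.StringIO(crawl_csv))
dupes = [row["URL"] for row in reader
         if row["Issue"] == "Duplicate Page Content"]

print(dupes)  # ['http://example.com/page1', 'http://example.com/page2']
```

Swapping the filter value to "Duplicate Page Title" would produce the title list the same way, and the resulting URLs can feed a canonicalization script directly.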
Moz Pro | nvs.nim
-
Is SEOmoz Rogerbot only crawling the subdomains by links, or also by ID?
I'm new to SEOmoz and just set up my first campaign. After the first crawl I got quite a few 404 errors due to deleted (spammy) forum threads. I was sure there were no links to these deleted threads, so my question is whether Rogerbot is only crawling my subdomains by links, or also by IDs (the forum thread IDs are serially numbered from 1 to x). If Rogerbot crawls serially numbered IDs as well, do I have to be concerned about the 404 errors on behalf of Googlebot too?
Moz Pro | sauspiel
-
I have a Rel Canonical "notice" in my Crawl Diagnostics report. I'm presuming that means the spider has detected a rel canonical tag and it is working, as opposed to warning about an issue. Is this correct?
I know this seems like a really dumb question, but the site I'm working on is a BigCommerce one and I've been concerned about canonicalisation issues prior to receiving this report (I'm also a SEOmoz Pro newbie!), and I just want to be clear I am reading this notice correctly. I presume it means the site crawl has detected the rel canonical tag on these pages and it is working correctly. Is that right? Any input is much appreciated. Thanks
Moz Pro | seanpearse
-
Question to SEOmoz: when will the On-Page Optimization tool be updated?
Hi, when will your On-Page Optimization tool be updated to reflect the over-optimization penalty that is coming? I would think that grading out at an 'A' would have adverse effects.
Moz Pro | Bucky
-
SEOmoz indicating duplicate page content on one of my campaigns
Hello All, Alright, according to SEOmoz's PRO campaign manager, one of my websites is returning about 2,700 pages that supposedly have duplicate content. I checked a few of them manually and am not seeing where the issue lies. Is anyone else experiencing something similar to this and do you know if it is just a glitch with the crawl? Here are 2 of the pages it is indicating have dup page content: http://www.dieselpowerproducts.com/c-3120-1994-98-59l-12v-dodge-cummins-carbon-fiber-hoods.aspx http://www.dieselpowerproducts.com/c-90-dodge-cummins-94-02-59l-12v24v.aspx Any insight would be greatly appreciated! -Craig
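For what it's worth, exact-duplicate detection in crawlers often boils down to comparing a fingerprint of each page's content. A toy sketch of the idea, with invented URLs and markup (real tools typically compare rendered body text and use fuzzier matching than this):

```python
import hashlib

# Toy pages; two share identical content
pages = {
    "http://example.com/a": "<html><body>Carbon fiber hoods</body></html>",
    "http://example.com/b": "<html><body>Carbon fiber hoods</body></html>",
    "http://example.com/c": "<html><body>12v/24v parts</body></html>",
}

seen = {}        # content hash -> first URL seen with it
duplicates = []  # (duplicate URL, original URL) pairs
for url, html in pages.items():
    digest = hashlib.md5(html.encode()).hexdigest()
    if digest in seen:
        duplicates.append((url, seen[digest]))
    else:
        seen[digest] = url

print(duplicates)  # [('http://example.com/b', 'http://example.com/a')]
```

If a tool uses a looser similarity threshold than an exact hash, pages that merely share large boilerplate sections (headers, category navigation) can get flagged even when the visible content differs, which may explain flags that a manual check doesn't reveal.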
Moz Pro | ckilgore
-
Crawl Diagnostics Report Lacks Information
When I look at the crawl diagnostics, SEOmoz tells me there are 404 errors. This is understandable, because some pages were removed. What this report doesn't tell me is how those pages were discovered. This is a very important piece of information, because it would tell me there are links pointing to those pages, either internal or external. I believe the internal links have been removed. If the report told me how it found the link, I would be able to take immediate action. Without that information, I have to do a lot of investigation, and when you have a million pages, that isn't easy. Some possibilities:
1. The crawler remembered the page from the previous crawl.
2. There was a link from an index page, i.e. it is still in the database.
3. There was an individual link from another story, so now there are broken links.
4. Ditto, but on a static index page.
5. The link was from an external source, so I need to make a redirect.
Am I missing something, or is this a feature the SEOmoz crawler doesn't have yet? What can I do (other than check all my pages) to discover this?
Moz Pro | loopyal