Is the Site Crawl feature following robots.txt rules?
-
I noticed that most of the errors would not be occurring if Moz's tool followed the rules implemented in a site's robots.txt. Has anyone else seen this problem, and do you know if Moz will fix it?
-
I'm not 100% sure about it, but in specific cases you probably want to have your own directives for Rogerbot.
-
No, because I thought Moz followed Googlebot rules. Is this not the case?
-
Hi James.
Do you also have rules for Moz's crawler in there? You can learn more about Roger here: https://moz.com/help/guides/moz-procedures/what-is-rogerbot
Martijn.
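For context, Rogerbot-specific directives live in a robots.txt group addressed to its user agent. A minimal sketch (the disallowed path and delay here are purely illustrative, not a recommendation):

```
User-agent: rogerbot
Crawl-delay: 10
Disallow: /internal-search/
```

Any group addressed to `rogerbot` overrides the generic `User-agent: *` group for Moz's crawler.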
Related Questions
-
Limit MOZ crawl rate on Shopify or when you don't have access to robots.txt
Hello. I'm wondering if there is a way to control Moz's crawl rate on our site. It is hosted on Shopify, which does not allow any control over the robots.txt file to add a rule like this: User-agent: rogerbot Crawl-delay: 5. Because of this we get a lot of 430 error codes, mainly on our products, and this certainly prevents Moz from getting the full picture of our shop. Can we rely on Moz's data when critical pages are not being crawled due to 430 errors? Is there any alternative fix? Thanks
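For what it's worth, the rule from the question can be sanity-checked with Python's stdlib robots.txt parser; this sketch only shows what the directive would do if Shopify allowed editing robots.txt:

```python
from urllib import robotparser

# Hypothetical robots.txt content -- the rule from the question.
RULES = """\
User-agent: rogerbot
Crawl-delay: 5
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# Rogerbot would be asked to wait 5 seconds between requests;
# agents without a matching group get no delay at all.
print(rp.crawl_delay("rogerbot"))  # → 5
```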
-
Duplicate page found with MOZ crawl test?
When I crawl my website www.radiantguard.com, the crawl test comes back with what appears to be a duplicate of my home page: http://www.radiantguard.com and http://www.radiantguard.com/. Does the crawler indeed see two different pages, and are my search engine rankings therefore potentially affected? And is this because of how my rel canonical is set up?
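As a sanity check: for http(s) URLs an empty path is equivalent to "/" (RFC 3986, section 6.2.3), so these two home-page URLs point at the same resource. A small sketch of that normalization:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    # An empty path on an http(s) URL is equivalent to "/",
    # so "http://example.com" and "http://example.com/" are the same resource.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path or "/",
                       parts.query, parts.fragment))

print(normalize("http://www.radiantguard.com") ==
      normalize("http://www.radiantguard.com/"))  # → True
```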
-
Canonical in Moz crawl report
I'm wondering if the Moz bot is seeing the rel="canonical" on my pages. There are two notices that are bothering me: "Overly Dynamic URL" and "Rel Canonical".
Overly Dynamic URL: this notice is generated by URLs with query strings. On the main page I have the rel="canonical" tag in the header, so every page with a query string has a canonical tag that points to the page that should be indexed. So my question: why the notice? Isn't this being handled properly with the canonical tag? I know I can use my robots.txt or the tool in Google Search Console, but is that really necessary when I have the canonical on every page? Here is one of the links that has the "Overly Dynamic URL" notice; as you can see, the canonical in the header points to the page without the query string: https://www.vistex.com/services/training/traditional-classroom/registration-form/?values=true&course-title=DMP101 – Data Maintenance Pricing – Business Processes&date=March 14, 2016
Rel Canonical: every page in my report has the notice "Using rel=canonical suggests to search engines which URL should be seen as canonical". I'm using the rel="canonical" tag on all of my pages by default. Is the report suggesting that I shouldn't do this? Or that I should? Again, why the notice?
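The behavior described, a canonical that points at the query-string-free URL, can be sketched with a small helper (this is a hypothetical illustration, not Moz's or Vistex's actual logic):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    # Drop the query string and fragment so every dynamic variant
    # of a page points at one indexable URL.
    s = urlsplit(url)
    return urlunsplit((s.scheme, s.netloc, s.path, "", ""))

print(canonical_url(
    "https://www.vistex.com/services/training/traditional-classroom/"
    "registration-form/?values=true&course-title=DMP101"
))
# → https://www.vistex.com/services/training/traditional-classroom/registration-form/
```

The resulting URL is what would go in the page header as `<link rel="canonical" href="...">`.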
-
Performed a Moz Crawl Test - Says I have 107 External Links on Homepage??
Hello Mozzers! Exactly as the title suggests, I performed a crawl test on one of our sites and the report says we have 107 external links on the homepage and another 34 on one of our internal category pages. On both of these pages I can only find 7 external links; does anyone know why the crawl test is saying this? And if Moz is finding these external links, could Google be doing the same and punishing our site for the high number of external links? Any response appreciated! Richard
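A crawler classifies a link as external simply by comparing hosts, and it counts every anchor in the raw HTML, including template, widget, and footer links that are easy to overlook when counting by eye. A toy sketch of that classification (the hostnames here are hypothetical):

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class ExternalLinkCounter(HTMLParser):
    """Collect hrefs whose host differs from the site's own host."""
    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host
        self.external = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlsplit(href).netloc
        # Relative links have no host and are internal by definition.
        if host and host != self.site_host:
            self.external.append(href)

html = """
<a href="/category">internal</a>
<a href="https://partner.example.org/widget">external</a>
<a href="https://mysite.com/page">same host</a>
"""
counter = ExternalLinkCounter("mysite.com")
counter.feed(html)
print(len(counter.external))  # → 1
```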
-
Crawl Diagnostics: Exclude known errors and others that have been detected by mistake? New Moz Analytics feature?
I'm curious whether the new Moz Analytics will have a feature (filter) to exclude known errors from the crawl diagnostics. For example, the attached screenshot shows this URL as a 404 error, but it works fine: http://en.steag.com.br/references/owners-engineering-services-gas-treatment-ogx.php To keep a better overview of which errors can't be solved, I would like to mark them as "don't take this URL into account..." so I don't try to fix them again next time. On the other hand, I have hundreds of errors generated by forums or by the CMS that I cannot resolve on my own. I would also like to filter these crawl errors away and categorize them as "errors to review later with a specialist". Will this come with the new Moz Analytics? And is there a list that shows which new features will be implemented?
-
Blocked Production Site from Search Engines - How to get it Crawled by Moz Crawler
I have an 'under development' site hosted, which is an exact replica of the live site (I'm working on it to add new functionality and modules). It is password protected, excluded via robots.txt (Disallow), and every page is marked noindex, so Googlebot and other search engines cannot crawl the site. The development work is now almost 95% complete, and I would like to crawl the site with Moz's Rogerbot to find the errors and all the URLs Rogerbot indexes. What's the best way to get the Moz bot to crawl the site while continuing to block access for search engines? I have gone through https://support.google.com/webmasters/answer/93708?hl=en, which suggests:
a) Save it in a password-protected directory. Googlebot and other spiders won't be able to access the content. But this way Moz will also not be able to crawl the site.
b) Use a robots.txt file to control access to files and directories on your server. However, it also notes that even if you use robots.txt to block spiders from crawling content on your site, Google could discover it in other ways and add it to the index.
c) Use a noindex meta tag to prevent content from appearing in search results. It also notes that a link to the page can still appear in search results, and because the page has to be crawled for the noindex tag to be seen, there's a small chance Googlebot won't see and respect it.
Password protection thus seems the best way to continue blocking, but it will also block the Moz bot from crawling the site. Any suggestions? Thanks
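One half of the puzzle can at least be sanity-checked: a robots.txt that leaves Rogerbot free while disallowing everyone else. The sketch below uses Python's stdlib parser with a hypothetical rule set; note that robots.txt is advisory only, and that no crawler, Rogerbot included, can get past HTTP password protection, so that would still need to be lifted for the crawl:

```python
from urllib import robotparser

# Hypothetical rule set: allow rogerbot, disallow all other crawlers.
# robots.txt is advisory and does not guarantee de-indexing if the
# URLs are linked from elsewhere.
RULES = """\
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

print(rp.can_fetch("rogerbot", "http://dev.example.com/page"))   # → True
print(rp.can_fetch("Googlebot", "http://dev.example.com/page"))  # → False
```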
-
Open Site Explorer linking root domains
My company has been trying to increase the number of linking root domains for a specific page on our website, using our PR company and press releases that link back to this page. This is working nicely, but the number of linking root domains is still not increasing under the "linking root domains" tab. I do see the correct links to this page under "fresh web mentions", though. I know this tool can take a while to update, but it has been quite some time and I still only see the one link.