Firefox Add-On for crawl frequency??
-
Hi all,
A short one: is there a Firefox add-on available that lets you see the crawl frequency of your page(s)? It would be interesting to see whether Googlebot has been coming around more often lately...
There are some statistics in Webmaster Tools, but I don't find them very appealing.
I know there is something for WordPress, but we don't use it...
I don't want to put together an Excel sheet and check the cached version myself. And I would love to see how deep the crawler gets and which pages do not get crawled...
So, are there any existing add-ons or tools that are free?
Thanx....
-
Hi,
Your best bet is Webmaster Tools or another server-side tool, like the ones available for WordPress that you've mentioned. Depending on your server (Unix- or Microsoft-based) and/or your backend platform, you can install additional features to get more information on this: Googlebot visits, crawl rates and formats.
There are no Firefox add-ons for this, since the browser doesn't have access to your site/server and can't see when Googlebot visits which pages or how it performs; that's a behind-the-scenes process.
If you have access to your server, you might try newrelic.com (I am not affiliated with them in any way; I just love the tool). However, there are several other tools in the same space that will give you more than just the Googlebot stats and data.
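If you'd rather roll your own than install a monitoring tool, the raw data is already sitting in your web server's access logs. Here's a minimal sketch in Python (assuming a standard Apache/Nginx combined log format and a hypothetical access.log path; adjust both to your setup) that counts Googlebot hits per day and per URL:

```python
import re
from collections import Counter, defaultdict
from datetime import datetime

# Assumed combined log format:
# IP - - [date:time zone] "METHOD /path HTTP/1.x" status size "referer" "user-agent"
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<date>[^:]+):[^\]]+\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

hits_per_day = Counter()          # how often Googlebot came by
paths_crawled = defaultdict(int)  # which URLs it actually requested

with open("access.log") as log:   # hypothetical path; point this at your real log
    for line in log:
        match = LOG_LINE.match(line)
        if not match:
            continue
        # Filter on the user-agent string; for a strict check you would also
        # verify the IP via reverse DNS (*.googlebot.com), since the UA can be spoofed.
        if "Googlebot" not in match.group("agent"):
            continue
        hits_per_day[match.group("date")] += 1
        paths_crawled[match.group("path")] += 1

print("Googlebot hits per day:")
for day, count in sorted(hits_per_day.items(),
                         key=lambda item: datetime.strptime(item[0], "%d/%b/%Y")):
    print(f"  {day}: {count}")

print("\nCrawled URLs (most to least):")
for path, count in sorted(paths_crawled.items(), key=lambda item: -item[1]):
    print(f"  {count:5d}  {path}")
```

Point it at older, rotated logs as well and you can see how the crawl rate changes over time; any URL that never shows up in the output is one Googlebot isn't reaching.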
Hope it helps.
Related Questions
-
Moz Crawled My Site. Now What?
Hey everyone! So Moz crawled my site and I passed it over to my dev team, who's curious about what they should prioritize. Curious what everyone's thoughts are. Here are the issue types: Duplicate Content, Missing Title, Duplicate Title Tag, Redirect Chain, Title Too Long, Description Too Short, Missing Description, Missing H1, Thin Content, URL Too Long, Has Meta Noindex. Would love any assistance! Thank you!
-
URLs dropping from index (Crawled, currently not indexed)
I've noticed that some of our URLs have recently dropped completely out of Google's index. When carrying out a URL inspection in GSC, it comes up with 'Crawled, currently not indexed'. Strangely, I've also noticed that under referring page it says 'None detected', which is definitely not the case. I wonder if it could be something to do with the following? https://www.seroundtable.com/google-ranking-index-drop-30192.html - It seems to be a bug affecting quite a few people. Here are a few examples of the URLs that have gone missing: https://www.ihasco.co.uk/courses/detail/sexual-harassment-awareness-training https://www.ihasco.co.uk/courses/detail/conflict-resolution-training https://www.ihasco.co.uk/courses/detail/prevent-duty-training Any help here would be massively appreciated!
-
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two subdomains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them. So in a small number of cases the CDN-based content is outranking the root domain. It's a huge duplicate content issue (tens of thousands of URLs being crawled). What's the best way to prevent the crawling and indexing of a CDN like this? Exclude via robots.txt? Additionally, the use of relative canonical tags (instead of absolute) appears to be contributing to this problem as well. As I understand it, these canonical tags are telling the search engines that each subdomain is the "home" of the content/URL. Thanks! Scott
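One quick way to scope the canonical side of this, before deciding between robots.txt and other fixes, is to spot-check what the CDN copies actually declare as their canonical URL. A rough sketch, with hypothetical hostnames and paths standing in for the real CDN subdomains and root domain:

```python
import re
import urllib.request

# Hypothetical hostnames and paths; replace with the real CDN subdomains,
# root domain, and a sample of affected URLs.
ROOT = "www.example.com"
CDN_HOSTS = ["cdn1.example.com", "cdn2.example.com"]
SAMPLE_PATHS = ["/", "/products/widget", "/blog/some-post"]

TAG = re.compile(r'<link\b[^>]*rel=["\']canonical["\'][^>]*>', re.IGNORECASE)
HREF = re.compile(r'href=["\']([^"\']+)["\']', re.IGNORECASE)

for host in CDN_HOSTS:
    for path in SAMPLE_PATHS:
        url = f"https://{host}{path}"
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError as err:
            print(f"{url}: fetch failed ({err})")
            continue
        tag = TAG.search(html)
        href = HREF.search(tag.group(0)) if tag else None
        canonical = href.group(1) if href else None
        # A missing, relative, or CDN-pointing canonical leaves the duplicate
        # competing with the root domain; an absolute canonical to ROOT is the goal.
        if canonical and canonical.startswith(f"https://{ROOT}"):
            print(f"{url}: OK -> {canonical}")
        else:
            print(f"{url}: check this one -> {canonical!r}")
```

Worth keeping in mind: blocking the CDN hosts in robots.txt stops further crawling, but it also stops Google from seeing those canonicals, so it helps to know which situation you're in before flipping that switch.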
-
Pages crawled is only 23 even after 8 days??
Hello all, My site www.practo.com has more than 500 pages, yet SEOmoz says only 23 have been crawled to date, even after 8-10 days of the trial period. Most of the pages on my site are in-site search pages; they appear when you search for relevant terms and combinations, etc. Is that hindering the Moz crawler from finding those pages? Aditya
-
My report only says it crawled 1 page of my site.
My report used to crawl my entire site, which is around 90 pages. Any idea why this would happen? www.treelifedesigns.com
-
Application/x-msdownload in crawl report?
In a crawl report for my company's site, the "content_type_header" column usually contains "text/html". But there are some random pages with "application/x-msdownload"... what does "application/x-msdownload" mean?
-
Reading Crawl Diagnostics and Taking Action on results
My site crawl diagnostics are showing a high number of duplicate page titles and content. When I look at the flagged pages, many errors are simply listed from multiple pages of product category search results. This looks pretty normal to me, and I am at a loss for understanding how to fix this situation. Can I talk with someone? Thanks, Gary
-
Crawl issues / .htaccess issues
My site is getting crawl errors inside of Google Webmaster Tools. Google believes a lot of my links point to index.html when they really do not. That is not the problem, though; it's that Google can't give credit for those links to any of my pages. I know I need to create a rule in the .htaccess, but the last time I did it I got an error. I need some assistance on how to go about doing this; I really don't want to lose the weight of my links. Thanks