Website cannot be crawled
-
I have received the following message from Moz on a few of our websites now:
Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster.
I have spoken with our webmaster, and they have advised the following:
The robots.txt file is definitely there on all of the sites, and Google is able to crawl for these files. Moz, however, is having some difficulty finding the files when a particular redirect is in place.
For example, the site currently redirects from threecounties.co.uk/ to https://www.threecounties.co.uk/, and when this happens, the Moz crawler cannot find the robots.txt on the first URL, which generates the reports you have been receiving. From what I understand, this is a flaw with the Moz software and not something that we could fix from our end.
Going forward, something we could do is remove these rewrite rules to www., but these are useful redirects and removing them would likely have SEO implications.
Has anyone else had this issue? Is there anything we can do to rectify it, or should we leave it as is?
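For anyone who wants to reproduce what the webmaster describes, here is a minimal sketch (Python with the `requests` library; the hostnames follow the example above) that fetches robots.txt on the bare domain without following redirects, which is roughly what a crawler sees before it decides to follow the 301:

```python
import requests

# Check how robots.txt responds on the bare domain vs. the www host.
# allow_redirects=False shows the raw response a crawler receives
# before following any redirect.
for host in ("http://threecounties.co.uk", "https://www.threecounties.co.uk"):
    url = f"{host}/robots.txt"
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url, resp.status_code, resp.headers.get("Location", ""))

# Expected, per the webmaster's description (an assumption, not verified here):
#   http://threecounties.co.uk/robots.txt      -> 301 to the www URL
#   https://www.threecounties.co.uk/robots.txt -> 200 with the file body
```

If the bare domain returns a 301 pointing at a www URL that itself serves a 200, the redirect chain is intact and any crawler that follows redirects should reach the file.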
-
Ok, I made a quick test of your robots.txt file and it looks fine:
https://www.threecounties.co.uk/robots.txt

Then I ran a test with https://httpstatus.io/ to check the status code of your robots.txt file, and it showed a 200 status code (so it's fine).

Also, you need to make sure that your robots.txt file is accessible to Rogerbot (the Moz crawler). These days, hosting providers have become very strict with third-party crawlers, including Moz, Majestic SEO, Semrush, and Ahrefs.

Here you can find all the possible sources of the problem and the recommended solutions:
https://moz.com/help/guides/moz-pro-overview/site-crawl/unable-to-crawl

Regards
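To go one step further than httpstatus.io, a quick way to test whether the host is blocking third-party crawlers specifically is to request robots.txt with a crawler-like User-Agent and compare it against a browser-like one. A minimal sketch in Python with `requests` (the User-Agent strings below are illustrative placeholders, not the exact values the real crawlers send):

```python
import requests

URL = "https://www.threecounties.co.uk/robots.txt"

# Illustrative User-Agent strings -- hypothetical stand-ins, not
# necessarily what Rogerbot or Dotbot actually send. The point is
# to compare a bot-like request against a browser-like one.
USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "rogerbot-like": "rogerbot",
    "dotbot-like": "dotbot",
}

for label, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    # A 200 for the browser but 403/429 for the bot-like agents would
    # suggest host-level blocking of third-party crawlers.
    print(f"{label:14} -> {resp.status_code}")
```

If the bot-like requests are refused while the browser-like one succeeds, the fix belongs with the hosting provider or firewall rules rather than with the robots.txt file itself.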