Website cannot be crawled
-
I have received the following message from Moz on a few of our websites now:
Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster.
I have spoken with our webmaster and they have advised the following:
The robots.txt file is definitely there on all of the sites, and Google is able to crawl it. Moz, however, has difficulty finding the file when a particular redirect is in place.
For example, the site currently redirects from threecounties.co.uk/ to https://www.threecounties.co.uk/, and when this happens the Moz crawler cannot find the robots.txt on the first URL, which generates the reports you have been receiving. From what I understand, this is a flaw in the Moz software and not something we can fix from our end.
Going forward, we could remove these rewrite rules to www., but they are useful redirects and removing them would likely have SEO implications.
Has anyone else had this issue, and is there anything we can do to rectify it, or should we leave it as is?
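A quick way to reproduce what the webmaster describes is to follow the redirect chain for robots.txt hop by hop, the way a crawler would. Below is a minimal Python sketch, assuming the third-party requests library; the starting URL is the bare domain from the example above.

```python
import requests

# Start from the bare (non-www) domain, as a crawler might.
url = "http://threecounties.co.uk/robots.txt"

response = requests.get(url, allow_redirects=True, timeout=10)

# response.history holds each intermediate redirect response, in order.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

# If the chain is healthy, the final response is a 200 on the www URL.
print(response.status_code, response.url)
```

If any hop returns a 5xx or times out, a crawler that starts at the bare domain would report exactly the robots.txt error quoted above.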
-
OK, I ran a quick test of your robots.txt file and it looks fine:
https://www.threecounties.co.uk/robots.txt
Then I used https://httpstatus.io/ to check the status code of your robots.txt file, and it returned a 200 status code, so that is fine too.
You also need to make sure your robots.txt file is accessible to Rogerbot (the Moz crawler). These days, hosting providers have become very strict with third-party crawlers, including Moz, Majestic SEO, Semrush and Ahrefs.
Here you can find all the possible sources of the problem and the recommended solutions:
https://moz.com/help/guides/moz-pro-overview/site-crawl/unable-to-crawl
Regards
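To test the Rogerbot angle specifically, here is a minimal sketch, again with the requests library, that fetches robots.txt while identifying as Rogerbot. The User-Agent string below is an assumption based on the crawler's documented name; check Moz's help pages for the exact value it sends.

```python
import requests

# Assumed User-Agent: Moz's crawler is documented as "rogerbot", but the
# full string it sends may differ -- treat this as a placeholder.
headers = {"User-Agent": "rogerbot"}

response = requests.get(
    "https://www.threecounties.co.uk/robots.txt",
    headers=headers,
    timeout=10,
)

# A 200 with a normal browser User-Agent but a 403/5xx with this one (or
# vice versa) points to user-agent based blocking at the server or firewall.
print(response.status_code)
print(response.text[:300])  # first few lines of the file, as a sanity check
```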
Related Questions
-
I've been using Moz for just a minute now. I used it to check my website and found quite a number of errors; unfortunately I use a WordPress website, and even with the tips I still don't know how to fix the issues.
I've seen quite a number of errors on my website hipmack.co, a WordPress website, and I don't know how to begin clearing the index errors, or any others for that matter. Can you help me, please?
Moz Pro | Dogara
-
Google Analytics skewed data because of ghost referral spam and crawl bots
Hi guys, we are having some major problems with our Google Analytics and Moz accounts. Due to the large number of ghost/referral spam and crawler bots, we have added some heavy filtering to GA. This seems to be working, protecting the data from all these problems, but it is also filtering out much-needed data. For example, we used to get at least a hundred visitors a day, and now we are down to under ten. Can anybody help? We have read through many articles without finding a permanent, solid solution (we are even willing to go with a paid service instead of GA). Thank you so much, S.M. (See the sketch below for one approach.)
Moz Pro | KristyKK
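A common fix for the ghost-referral problem described above is a valid-hostname filter: ghost hits are injected straight into the analytics account and never load your pages, so the hostname they record is fake or "(not set)". Here is a minimal Python sketch of that idea applied to exported session rows; the field names and sample data are hypothetical placeholders.

```python
# Hostnames you actually serve traffic from; everything else is a ghost.
VALID_HOSTNAMES = {"www.example.com", "example.com"}

def is_real_session(row: dict) -> bool:
    """Keep a session only if it was recorded against a hostname we own."""
    return row.get("hostname", "(not set)") in VALID_HOSTNAMES

# Hypothetical exported rows: one real visit, one injected ghost referral.
sessions = [
    {"hostname": "www.example.com", "referrer": "google.com"},
    {"hostname": "(not set)", "referrer": "free-seo-traffic.xyz"},
]

real = [s for s in sessions if is_real_session(s)]
print(len(real))  # -> 1
```

Unlike a referrer blacklist, a hostname whitelist does not need updating every time a new spam domain appears, and it cannot accidentally filter out real visitors.
-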
Potential spam websites with high DA linking back to us
Hey everybody, I'm going through all my sites and disavowing crap links. However, I'm having trouble distinguishing which high-DA sites to disavow. What would you do? For example:
https://moz.com/researchtools/ose/spam-analysis?site=busca.starmedia.com&target=domain&source=subdomain&page=1&sort=spam_score
and
https://moz.com/researchtools/ose/spam-analysis?site=cc879fe.activerain.com&target=domain&source=subdomain&page=1&sort=spam_score
They both have tons of backlinks, both good and crap. The first has a DA of 72 and a Moz spam score of 4/17, and the second has a DA of 86 and a Moz spam score of 9/17.
Moz Pro | MEllsworth
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?
I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites; I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set them to include www or not for what ranks, refining my keywords, and so on. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.
Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of the EPA's web presence. www.epa.gov has 560,000 pages; if I put www.epa.gov into a campaign, what happens, given the site has so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? By contrast, www.epa.gov/air has only 1,450 pages, so if I track that in a campaign, the crawl will cover the sub-folder completely and I'll get a complete picture of this air-focused sub-folder. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I'd be leaving most of the 50,000-page crawl limit unused. (Maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)
Any opinions on what to do in general in this kind of situation? The small sub-folder vs. the full humongous site vs. some other approach I'm not thinking of?
Moz Pro | scienceisrad
-
Crawlers crawl weird long URLs
I did a crawl start for the first time and I get many errors, but the weird thing is that the crawler tracks duplicate, long, non-existent URLs. For example (to be clear), there is a page www.website.com/dogs/dog.html, but the crawler then continues crawling:
www.website.com/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
What can I do about this? Screaming Frog gave me the same issue, so I know it's something with my website. (See the sketch below for the likely cause.)
Moz Pro | r.nijkamp
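The usual cause of these ever-deepening URLs is a relative link such as href="dogs/dog.html" (no leading slash), which resolves against the directory of the current page, so each crawl step nests one level deeper. A minimal sketch with Python's standard-library urljoin reproduces the behaviour:

```python
from urllib.parse import urljoin

page = "https://www.website.com/dogs/dog.html"
relative_link = "dogs/dog.html"  # hypothetical link as it appears in the HTML

# Each resolution nests one directory level deeper, mirroring the crawl.
for _ in range(3):
    page = urljoin(page, relative_link)
    print(page)

# https://www.website.com/dogs/dogs/dog.html
# https://www.website.com/dogs/dogs/dogs/dog.html
# https://www.website.com/dogs/dogs/dogs/dogs/dog.html
```

The fix is on the site, not the crawler: make such links root-relative (/dogs/dog.html) or absolute.
-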
Best Chrome extension to find contact emails on a website
Hi, I've done some digging around the Q&A and SEOmoz articles and still haven't found exactly what I need. I'm just looking for a tool that will quickly help me find the best contact email on a particular website, whether it's the one the site is registered to, a different one, or both. Thanks in advance for the help. Aaron (See the sketch below for the underlying technique.)
Moz Pro | arkana
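Whatever tool you pick, the underlying technique is usually the same: fetch the page and pull out email-shaped strings. A minimal Python sketch, assuming the third-party requests library and a placeholder contact-page URL:

```python
import re
import requests

# A simple (deliberately loose) pattern for email-shaped strings.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

# Placeholder URL; point this at the page you want to check.
html = requests.get("https://example.com/contact", timeout=10).text

emails = set(EMAIL_RE.findall(html))
print(sorted(emails))
```
-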
How to download an entire website (HTML only), ready to rehost
Hi all, I work for a large retail brand, and we have lots of counterfeit sites ranking for our products. Our legal team seizes the websites from the owners, who then set up more counterfeit sites, and so forth. As soon as we seize control of a website, the site content is deleted, and it subsequently falls out of the SERPs, to be immediately replaced by the next lot of counterfeit sites. I need to be able to download a copy of a site before it is seized, so that once I have control of it I can put the content back and hopefully quickly regain the SERPs (with an additional "counterfeit site" notice superimposed on the page in JS). Does anyone know of, or can recommend, good software for downloading an entire website so that it can be easily rehosted? (A rough sketch of the approach is below.) Thanks, FashionLux
(Edited title to reflect only wanting to download the HTML, CSS and images of a site. I don't want the sites to actually be functional, only to appear the same to Google.)
Moz Pro | FashionLux
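A mirroring tool (for example wget's mirror mode) is the usual answer. For illustration, here is a rough Python sketch of a same-domain HTML mirror, assuming the third-party requests and beautifulsoup4 packages and a placeholder start URL; it saves each page's HTML and follows internal links breadth-first, and CSS/images would need the same treatment.

```python
from collections import deque
from pathlib import Path
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://example.com/"  # placeholder for the site to archive
DOMAIN = urlparse(START).netloc
OUT = Path("mirror")

seen, queue = {START}, deque([START])
while queue:
    url = queue.popleft()
    response = requests.get(url, timeout=10)
    if "text/html" not in response.headers.get("Content-Type", ""):
        continue  # skip non-HTML assets in this sketch

    # Save the page under a path derived from the URL.
    path = urlparse(url).path.strip("/") or "index"
    target = OUT / (path if path.endswith(".html") else f"{path}.html")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(response.text, encoding="utf-8")

    # Queue unseen links that stay on the same domain.
    soup = BeautifulSoup(response.text, "html.parser")
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == DOMAIN and link not in seen:
            seen.add(link)
            queue.append(link)
```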