Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Website cannot be crawled
-
I have received the following message from Moz on a few of our websites now:
Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster.
I have spoken with our webmaster and they have advised the below:
The robots.txt file is definitely there on all pages and Google is able to crawl these files. Moz, however, is having some difficulty finding the files when there is a particular redirect in place.
For example, the page currently redirects from threecounties.co.uk/ to https://www.threecounties.co.uk/ and, when this happens, the Moz crawler cannot find the robots.txt on the first URL, which generates the reports you have been receiving. From what I understand, this is a flaw with the Moz software and not something that we could fix from our end.
Going forward, something we could do is remove these rewrite rules to www., but these are useful redirects and removing them would likely have SEO implications.
Has anyone else had this issue, and is there anything we can do to rectify it, or should we leave it as is?
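The redirect your webmaster describes matters because robots.txt is resolved per origin (scheme plus host), so a crawler that starts at the bare domain looks for a different robots.txt resource than one that starts at the www host. A minimal sketch of that resolution using Python's standard library (the URLs are just this thread's examples):

```python
from urllib.parse import urlsplit

def robots_txt_url(page_url: str) -> str:
    """robots.txt lives at the root of each scheme+host combination,
    so every hop in a redirect chain can have its own robots.txt."""
    parts = urlsplit(page_url)
    return f"{parts.scheme}://{parts.netloc}/robots.txt"

# The bare domain and the www host are two different origins:
print(robots_txt_url("http://threecounties.co.uk/some-page"))
# -> http://threecounties.co.uk/robots.txt
print(robots_txt_url("https://www.threecounties.co.uk/some-page"))
# -> https://www.threecounties.co.uk/robots.txt
```

So if the bare-domain origin answers the robots.txt request with an error before issuing the redirect, a crawler can legitimately report the file as unreachable even though the www version is fine.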
-
Ok, I made a quick test of your robots.txt file and it looks fine:
https://www.threecounties.co.uk/robots.txt
Then I used https://httpstatus.io/ to check the status code of your robots.txt file, and it showed a 200 status code (so it's fine).
Also, you need to make sure that your robots.txt file is accessible to Rogerbot (the Moz crawler). These days, hosting providers have become very strict with third-party crawlers, including Moz, Majestic SEO, Semrush and Ahrefs.
Here you can find all the possible sources of the problem and the recommended solutions:
https://moz.com/help/guides/moz-pro-overview/site-crawl/unable-to-crawl
Regards
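To rule out the robots.txt rules themselves as the culprit, you can check whether a given user agent is allowed using Python's standard-library robot parser. This is a sketch with a hypothetical rule set, not your actual file; note it only tests the robots.txt rules — a server-level block by the hosting provider would still need checking with a real fetch sent under Rogerbot's User-Agent.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents: leaves Rogerbot (Moz) unrestricted
# while blocking another crawler entirely.
rules = """\
User-agent: rogerbot
Disallow:

User-agent: BadBot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)  # parse rules from lines instead of fetching a URL

print(parser.can_fetch("rogerbot", "https://www.threecounties.co.uk/"))  # True
print(parser.can_fetch("BadBot", "https://www.threecounties.co.uk/"))    # False
```

If this returns True for "rogerbot" but Moz still can't crawl, the block is happening at the server or firewall level rather than in robots.txt.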