804 error preventing website from being crawled
-
Hi
For both subdomains, https://us.sagepub.com and https://uk.sagepub.com, crawling is being prevented by an 804 error.
I can't see any reason why this should be the case, as all content is served over HTTPS.
Thanks
-
I'm afraid that's the case if you're using CloudFront.
Our system really ought to be able to handle that type of configuration, but it's proving to be quite an undertaking to make those changes from an engineering perspective. So, I'm not really able to say when we'll be able to accommodate SNI with our crawler. Sorry for the trouble!
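A quick way to confirm whether a host's certificate really depends on SNI is to compare what the server presents with and without it. Here is a rough sketch using Python's standard library (the hostnames are the ones from the question; this is not Moz's crawler code):

```python
# Rough check: fetch the certificate with and without SNI. If the handshake fails or a
# different certificate comes back when no SNI is sent, an SNI-unaware crawler will hit
# exactly this kind of SSL error.
import socket
import ssl

def peer_cert_der(host, use_sni):
    """Return the DER-encoded certificate the server presents, or None if the handshake fails."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False   # we only want to see which certificate is served
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            # server_hostname=None means no SNI extension is sent during the handshake
            with ctx.wrap_socket(sock, server_hostname=host if use_sni else None) as tls:
                return tls.getpeercert(binary_form=True)
    except (ssl.SSLError, OSError):
        return None

for host in ("us.sagepub.com", "uk.sagepub.com"):
    with_sni = peer_cert_der(host, use_sni=True)
    without_sni = peer_cert_der(host, use_sni=False)
    if without_sni is None or without_sni != with_sni:
        print(f"{host}: certificate depends on SNI - an SNI-unaware client will fail")
    else:
        print(f"{host}: same certificate served with and without SNI")
```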
-
I should have read the whole thread... I only read the first part.
We use CloudFront, so I guess Moz cannot crawl the sites.
-
Hi there.
The problem is most likely a misconfigured SSL certificate.
There was a similar Q&A here: https://moz.com/community/q/804-https-ssl-error
Read that and see if it answers your question.
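If it helps, a fully verified handshake will surface whatever is actually wrong with the certificate chain or hostname. A minimal sketch with Python's standard library, again using the hostnames from the question:

```python
# A strictly verified handshake raises ssl.SSLError with the underlying reason
# (expired chain, hostname mismatch, SNI-only certificate, etc.).
import socket
import ssl

context = ssl.create_default_context()  # full certificate and hostname verification
for host in ("us.sagepub.com", "uk.sagepub.com"):
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}: certificate verified OK ({tls.version()})")
    except ssl.SSLError as exc:
        print(f"{host}: SSL problem - {exc}")
```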
Related Questions
-
How do I check a website's authority?
Please, how can I check a website's authority? For example, the authority of https://naijafreshgist.com
Link Explorer | Inforeniks
-
Account Error
Hey, I have a free 30-day trial Moz account. Whenever I try to analyze a website with it, it shows an error. Please solve my problem.
Link Explorer | cihiloj777
-
Angular SPA & Moz Crawl Issues
Website: https://www.exambazaar.com/
Issue: Domain Authority & Page Authority 1/100
I am using Prerender to cache and serve static pages to crawl agents, but Moz is not able to crawl my website (https://www.exambazaar.com/), which I think is why it has a domain authority of 1/100. I have been in touch with Prerender support to find a fix and have also added dotbot to the list of crawler user agents, in addition to Prerender's default list, which includes rogerbot. Do you have any suggestions to fix this? List: https://github.com/prerender/prerender-node/commit/5e9044e3f5c7a3bad536d86d26666c0d868bdfff
Adding dotbot to the Express server:
prerender.crawlerUserAgents.push('dotbot');
Link Explorer | gparashar
-
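A rough way to sanity-check a Prerender setup like this from the outside is to request the page with a crawler User-Agent and see whether rendered HTML comes back instead of the bare Angular shell. A sketch using Python's standard library; the rogerbot string is the one quoted later on this page, the dotbot string is only illustrative, and the size comparison is just a heuristic:

```python
# Heuristic check (assumption: the Prerender middleware switches on the User-Agent header):
# fetch the page as a normal browser and as Moz's crawlers and compare the responses.
# A purely client-side Angular shell usually comes back much smaller than a prerendered page.
import urllib.request

URL = "https://www.exambazaar.com/"
AGENTS = {
    "browser": "Mozilla/5.0",
    "rogerbot": "Mozilla/5.0 (compatible; rogerBot/1.0; http://www.seomoz.org/dp/rogerbot)",
    "dotbot": "Mozilla/5.0 (compatible; dotbot)",  # illustrative string, not necessarily the official one
}

for name, user_agent in AGENTS.items():
    request = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request, timeout=30) as response:
        body = response.read()
    print(f"{name:8s} -> HTTP {response.status}, {len(body)} bytes of HTML")
```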
Sufficient Words in Content error, despite having more than 300 words
My client has just moved to a new website, and I receive the "Sufficient Words in Content" error on all of the website's pages, although there are many more than 300 words on those pages. For example:
https://www.assuta.co.il/category/assuta_sperm_bank/
https://www.assuta.co.il/category/international_bank_sperm_donor/
I also see warnings for "Exact Keyword Used in Document at Least Once", although the keywords are used on the pages. The question is: why can't the Moz crawler see the pages' content?
Link Explorer | michalos1221
-
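A rough way to check what a non-JavaScript crawler can actually see on those pages is to fetch the raw HTML and count the words outside script and style tags. A standard-library sketch (URLs taken from the question; compare the output against the 300-word figure it mentions):

```python
# Rough diagnostic: fetch the raw HTML, as a non-JS crawler would, and count the words
# outside <script>/<style>. If the count is near zero, the text is injected client-side.
import re
import urllib.request
from html.parser import HTMLParser

class VisibleTextCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # inside <script> or <style>
        self.word_count = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.word_count += len(re.findall(r"\w+", data))

for url in ("https://www.assuta.co.il/category/assuta_sperm_bank/",
            "https://www.assuta.co.il/category/international_bank_sperm_donor/"):
    request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(request, timeout=30).read().decode("utf-8", errors="replace")
    counter = VisibleTextCounter()
    counter.feed(html)
    print(f"{url} -> {counter.word_count} words visible without JavaScript")
```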
Why is Moz not crawling my backlinks?
Hi, my website www.dealwithautism.com is 3 months old and has been at DA 1 and PA 1 ever since, even though the site is actively developed with quality content (a couple of posts already have 1k+ Facebook likes acquired editorially; while that doesn't necessarily improve SERP rankings, it sure tells you the posts are engaging). In contrast, another site of mine, www.deckmymac.com, which is hardly ever managed, has no more than 15 posts, and has just 1 backlink, sits at DA 14. Running an on-page analysis on www.dealwithautism.com, I observed that Moz has not identified any backlinks or social signals (except G+). However, according to Webmaster Tools, I have 57 links, 51 of them to the root. Even Majestic is able to report 32+ backlinks. So what am I missing? Surely at this stage my website doesn't deserve DA 1, or does it?
Link Explorer | DealWithAutism
-
I have a robots.txt error on Moz but not on Google Webmaster Tools. Wondering what to do.
For the site www.patrickwerry.com, I'm getting a DA of 1 and an Error Code 612: Error response for robots.txt. However, when I check Webmaster Tools, it's showing no errors and allowing robots.txt for the domain. Is there anything I can do to fix the issue on the Moz side so I can get better data? If you can respond in layman's terms, even better. 🙂 Not an SEO. Lisa
Link Explorer | LisaGerber
-
Moz crawling bot
Hi guys, in Open Site Explorer -> Top Pages, there are no page titles displayed in the report for a certain domain, and the "HTTP Status" column shows "Blocked by robots.txt". I tried to find out what the ID of the Moz crawling bot is, and on this page: http://moz.com/community/q/seomoz-spider-bot-details someone says it's Mozilla/5.0 (compatible; rogerBot/1.0; http://www.seomoz.org/dp/rogerbot). However, my robots.txt doesn't have such an entry. Take a look:
Automatically banned scanners and crawlers section
User-agent: 008
Disallow: /
User-agent: AhrefsBot
Disallow: /
User-agent: MJ12bot
Disallow: /
User-agent: metajobbot
Disallow: /
User-agent: Exabot
Disallow: /
User-agent: Ezooms
Disallow: /
User-agent: fyberspider
Disallow: /
User-agent: dotbot
Disallow: /
User-agent: MojeekBot
Disallow: /
Section end
What could be the problem here, then? Why does the Moz bot think I'm blocking it?
Link Explorer | superseopl
-
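For what it's worth, you can feed the pasted rules straight into Python's standard-library robots.txt parser to see how each Moz user agent is treated; note that the list above does disallow dotbot, which is the crawler Moz uses to build its link index, so that is the likely reason for the "Blocked by robots.txt" status. A small sketch (rules abbreviated from the question):

```python
# Check how the robots.txt quoted above treats Moz's crawlers. rogerbot has no entry,
# but dotbot is disallowed, so it reports as blocked.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: 008
Disallow: /
User-agent: AhrefsBot
Disallow: /
User-agent: dotbot
Disallow: /
"""  # abbreviated copy of the rules in the question

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("rogerbot", "dotbot"):
    allowed = parser.can_fetch(agent, "/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```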
How Is a Page Crawled by Moz When Moz Says 'No Links'?
As above, really. I've crawled a new client's site to find the Moz crawler has identified a handful of 404 errors. The Moz crawler says these pages have '0 linking domains', and OSE has no data for these pages. So how are these pages being crawled by Moz and what should I advise my client?
Link Explorer | xerox432