Crawl error: 804 HTTPS (SSL) error
-
Hi,
I have a crawl error in my report: 804: HTTPS (SSL) error encountered when requesting page.
I checked all my pages and the database to fix wrong http -> https URLs, but this one persists.
Do you have any idea how I can fix it?
Thanks
Website: https://ilovemypopotin.fr/
-
OK, thanks Dave, I'm going to sign up for the beta test.
Have a good day!
-
Hi There, Dave here from the Moz help team!
Unfortunately, at this time our crawler isn't compatible with Server Name Indication (SNI), which appears to be in use on your site. In general, SNI is a totally acceptable security configuration, but our crawler simply isn't equipped to handle these settings.
https://www.screencast.com/t/lddqo8gVOt2
The good news is that our team has been working hard on a new crawler that will support this configuration and this crawler is currently in its Beta phase. If you're interested in being a part of the beta testing, the link to the sign up page is here: http://goo.gl/forms/LCvL9Ix8JDHfbAvr1. Once you fill out that form, you will automatically be added into the Beta during the next round.
We will be rolling the new crawler out to all of our users once it's out of its Beta phase, but I don't have an exact timeframe for when that will be, as it's still being worked on.
In the meantime, I apologize for the inconvenience and let me know if you have any questions or need any additional clarification.
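For anyone curious what the SNI incompatibility means in practice: SNI is the TLS extension by which a client names the host it wants inside its very first handshake message (the ClientHello). A crawler that doesn't send it can be served the wrong certificate on shared hosting, which would surface as an 804 SSL error. The sketch below uses Python's standard `ssl` module to show the difference without any network access; it builds a ClientHello in memory, and the hostname is just the one from the question, used for illustration.

```python
import ssl

def client_hello(server_hostname=None):
    """Build a TLS ClientHello in memory (no network) and return its raw bytes."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be off to allow server_hostname=None
    ctx.verify_mode = ssl.CERT_NONE
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(incoming, outgoing, server_hostname=server_hostname)
    try:
        tls.do_handshake()
    except ssl.SSLWantReadError:
        pass  # expected: no server replied yet; the ClientHello sits in `outgoing`
    return outgoing.read()

# A client that sends SNI embeds the hostname in plaintext in the ClientHello;
# a client without SNI support (like the old crawler described above) omits it.
hostname = "ilovemypopotin.fr"
print(hostname.encode() in client_hello(hostname))  # True: SNI extension present
print(hostname.encode() in client_hello(None))      # False: no SNI sent
```

This is why an SNI-dependent server can't tell a non-SNI client which certificate it actually wants.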
Related Questions
-
Why is my site not being crawled?
The error in my dashboard reads: **Moz was unable to crawl your site on Jul 23, 2020.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster. I think I need to edit robots.txt. How do I fix that?
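To see whether robots.txt is what's banning the crawler, you can test it with Python's standard `urllib.robotparser`. The rules below are hypothetical, but Moz's crawler does identify itself as rogerbot, so a block like the first group would produce exactly this dashboard error:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: bans Moz's crawler (rogerbot) but allows everyone else
robots_lines = [
    "User-agent: rogerbot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

parser = RobotFileParser()
parser.parse(robots_lines)
print(parser.can_fetch("rogerbot", "https://example.com/any-page"))   # False: banned
print(parser.can_fetch("Googlebot", "https://example.com/any-page"))  # True: allowed
```

Removing the `Disallow: /` rule for rogerbot (or the whole group) would let the crawl proceed; also check for an `X-Robots-Tag` header and meta robots tags, which this parser doesn't see.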
Feature Requests | alixxf
-
Errors for URLs being too long and archive data is duplicate
I have hundreds of errors for the same three things. 1) The URL is too long. Currently the categories are domain.com/product-category/name main product/accessory product/. Do I somehow eliminate the product category? I'm not sure how to fix this. 2) It has all of my category pages listed as archives showing duplicates. I don't know why, as they are not blog posts; they hold products. I don't have an archived version of this. How do I fix this? 3) It is saying my page speed is slow. I am very careful to optimize all my photos in Photoshop, and I have a tool on the site to further compress them. I just went with another host company that is supposed to be faster. Any ideas? I would so appreciate your help and guidance. All my best to everyone; be safe and healthy.
Feature Requests | PacificErgo
-
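A quick way to see how much the nested category path contributes to the "URL too long" errors is to measure length and path depth before and after flattening. Everything here is illustrative: the threshold and URLs are assumptions, not the crawler's documented limit, so check your tool's actual cutoff.

```python
from urllib.parse import urlparse

MAX_LENGTH = 115  # assumed audit threshold; verify against your crawl tool

def audit_url(url):
    """Return (length, path_depth, too_long) for a URL."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return len(url), len(segments), len(url) > MAX_LENGTH

# Nested category path vs. a flattened version of the same hypothetical product
deep = "https://example.com/product-category/standing-desks/main-product/oak-standing-desk-with-adjustable-legs-and-cable-tray/"
flat = "https://example.com/oak-standing-desk/"

print(audit_url(deep))  # long and 4 segments deep: flagged
print(audit_url(flat))  # short and 1 segment deep: passes
```

Dropping the `/product-category/` level (many shop platforms allow removing the category base from permalinks) shortens both the URL and the path depth at once.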
Access all crawl tests
How can I see all crawl tests run in the history of the account? Also, can I get them sent to an email address that isn't the primary one on the account? Please advise, as I need this historical data ASAP.
Feature Requests | Brafton-Marketing
-
Is there a way to take notes on a crawled URL?
I'm trying to figure out the best way to keep track of the different things I've done to work on a page (for example, adding a longer description, changing h2 wording, or adding a canonical URL). Is there a way to take notes for crawled URLs? If not, what do you use to accomplish this?
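In the absence of a built-in notes feature, one common workaround is a small append-only log kept next to your crawl exports. This is just a sketch of that idea; the file name, columns, and example notes are all made up:

```python
import csv
import datetime
from pathlib import Path

# Hypothetical notes log maintained alongside crawl exports
NOTES_FILE = Path("crawl_notes.csv")

def add_note(url, note):
    """Append a dated note for a URL, writing a header row on first use."""
    is_new = not NOTES_FILE.exists()
    with NOTES_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "url", "note"])
        writer.writerow([datetime.date.today().isoformat(), url, note])

add_note("https://example.com/page", "added canonical URL")
add_note("https://example.com/page", "rewrote h2 wording")
print(NOTES_FILE.read_text())
```

Because it's CSV keyed by URL, you can later join it against a fresh crawl export in a spreadsheet to see which fixed pages still show errors.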
Feature Requests | TouchdownTech
-
MOZ Site Crawl - Ignore functionality Request
I now understand that the Ignore option in the Moz Site Crawl tool will permanently remove an item from ever showing up in our Issues again. We want to use the issues list as a to-do checklist, with the ultimate goal of having no issues found, and would like to "temporarily ignore" an issue to see whether it shows back up in future crawls. If we properly fix the issue, it shouldn't reappear; but with the current Ignore function, an ignored issue will never show up again even if it is still a problem. At the same time, an issue could be a known one that the user never wants to see again, in which case it would be nice to also keep the current "permanently ignore" option. Use the following imgur to see a mockup of my idea for your review. pzdfW
Feature Requests | StickyLife
-
MOZ Site Crawl - Ignore functionality question
Quick question about the Ignore feature in the Moz Site Crawl. We've made some changes to pages containing errors found by the crawl. These changes should have resolved the issues, but we're not sure what the "Ignore" feature does and don't want to use it without first understanding what will happen. Will it clear the item from the current list until the next Site Crawl, so that if Roger finds the issue again it relists the error? Or will it clear the item from the list permanently, regardless of whether it has been properly corrected?
Feature Requests | StickyLife
-
Moz Crawler failing with https redirect?
Is there a way to get the Moz Crawl Test to work with HTTPS? I just got back this error: 902: Network errors prevented crawler from contacting server for page. The site is set up with a standard 301 redirect from http to https, or at least I certainly hope it is! Rex Swain's HTTP Header Checker shows a standard 301. Anyone else experiencing this error? By the way, this is both a specific question and an opportunity for open discussion... Thanks!
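When debugging errors like this, it helps to check that the redirect chain a header checker reports is exactly one clean 301 hop, since extra hops or loops can trip crawlers even when browsers cope. Here is a small sketch of that check; the function name and sample chains are hypothetical, with each hop recorded as (status, requested URL, Location header):

```python
# Sketch: validate a redirect chain recorded from a header checker.
def is_clean_https_301(chain):
    """True only for a single 301 hop from http:// to the same URL on https://."""
    if len(chain) != 1:
        return False
    status, url, location = chain[0]
    return (status == 301
            and url.startswith("http://")
            and location == "https://" + url[len("http://"):])

good = [(301, "http://example.com/", "https://example.com/")]
chained = [(301, "http://example.com/", "https://example.com/"),
           (302, "https://example.com/", "https://www.example.com/")]

print(is_clean_https_301(good))     # True
print(is_clean_https_301(chained))  # False: a second 302 hop is involved
```

A 902 can of course also come from firewalls or rate limiting rather than the redirect itself, so this only rules out one cause.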
Feature Requests | seo_plus
-
Crawl test limitation: ways to make the most of it on large sites?
Hello, I have a large site (120,000+ pages) and the crawl test is limited to 3,000 pages. I want to know if there is a way to take advantage of a crawl on a site of this size. Can I use a regular expression, for example? Thanks!
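Whether or not the crawl test itself accepts regular expressions, you can apply the same idea yourself: pre-filter a sitemap or URL export down to one section at a time so the 3,000-page budget covers each section fully. A minimal sketch with Python's `re` module; the pattern and URLs are made up:

```python
import re

# Hypothetical: keep only one site section per crawl-test run
SECTION = re.compile(r"^https://example\.com/blog/")

urls = [
    "https://example.com/blog/post-1",
    "https://example.com/shop/item-9",
    "https://example.com/blog/post-2",
    "https://example.com/about",
]

subset = [u for u in urls if SECTION.match(u)]
print(subset)                                 # only the /blog/ URLs
print(f"{len(subset)} of {len(urls)} URLs kept")
```

Rotating the pattern across sections (/blog/, /shop/, and so on) over several runs effectively covers a 120,000-page site in 3,000-page slices.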
Feature Requests | CamiRojasE