We recently switched from HTTP to HTTPS and we are having crawling issues!
-
We switched our website from HTTP to HTTPS and started getting emails from Moz saying it was unable to crawl our website because of our robots.txt. The website is hosted through WordPress, but we hadn't had any issues until we switched. We have no idea what to do or even what the problem is! If you have had a similar problem and fixed it, we need your help! Thank you.
-
I know this is an old thread, but we are still having the same problem. I finally got around to sending a note to Flywheel about this, and they came back saying everything is fine. I am not sure what to do here. It's on a shared host, so I don't have console/audit log access; however, Flywheel is one of the best WordPress hosting companies out there (hosting is the only thing they do).
As far as accessing the robots.txt file goes, I can go directly to it without any problems:
https://southernil.com/robots.txt
-
Hi there!
Thanks so much for reaching out! I'm sorry you're having trouble!
I took a look at your crawl data and your site to see if I could figure out the issue. When I first tried to access your robots.txt file from a browser, it returned an error saying there were too many redirects in place. I checked what our crawler was receiving from your server, and it looks like it keeps being served a 301 redirect that points back to itself. However, when I tried to access the file from a browser a bit later, it loaded without a problem. Could you check your server logs to see what your server is sending back to our crawler, Rogerbot?
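In the meantime, one quick way to see what a given user agent receives for that file is a short script along the lines of the sketch below. This is only a rough illustration, not Moz's own tooling, and the "rogerbot" User-Agent value is an assumption rather than the exact string Rogerbot sends; requesting without following redirects makes a looping 301 easy to spot.

```python
# Rough sketch: fetch robots.txt with two different User-Agent headers and
# print the status code plus any redirect target, so a self-referencing 301
# shows up immediately. The "rogerbot" token below is an assumption.
import requests

ROBOTS_URL = "https://southernil.com/robots.txt"
USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (compatible; robots-check/1.0)",
    "rogerbot-like": "rogerbot",  # assumed token; check Moz's docs for the exact string
}

for label, ua in USER_AGENTS.items():
    # allow_redirects=False keeps the first response, so a redirect loop is visible
    resp = requests.get(
        ROBOTS_URL,
        headers={"User-Agent": ua},
        allow_redirects=False,
        timeout=10,
    )
    print(f"{label}: HTTP {resp.status_code}, Location: {resp.headers.get('Location', '-')}")
```

If the browser-like request gets a 200 while the rogerbot-like one gets a 301, that would suggest something on the server or CDN is treating the crawler's user agent differently.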
If you could send any further info over to help@moz.com, that would be great! That way we can do some more digging and see what's going on.
Looking forward to hearing from you!
Related Questions
-
Why does Moz only seem to be crawling a snapshot of the site I am working with?
I was wondering if anyone can help? I am using Moz to help improve the SEO on a website I am working with. The website contains thousands of pages, yet for some reason Moz only seems to be crawling a small snapshot of it. I know there are particular pages that I added a couple of weeks ago - about 300 in total - and none of these were showing on the first crawl, so I did another on-demand crawl and some of them showed up then. Despite this, it says it crawled around 700 pages, but there are close to 20,000-30,000 live pages on the site. Any thoughts or guidance as to why the crawling may be stopping?
Getting Started | | dsmith8020200 -
Do I have to manually mark Metadata Issues as 'fixed' or 'Ignore' when completed?
When I have amended the missing description metadata issue on an individual page, do I have to manually mark it as 'fixed' or 'Ignore'? I have attended to several pages regarding Missing Description metadata issues and not manually marked them as fixed. However, they still appear under missing description issues after a second on-demand crawl. Will they continue to appear until I manually mark them as fixed?
Getting Started | | LM_Marketing_Solutions_Ltd0 -
Moz unable to crawl my Zenfolio website
Hey guys, I am attempting to optimize a website for my wife's business, but Moz is unable to crawl it. Zenfolio is the web hosting service (she is a photographer). The error message is: **Moz was unable to crawl your site on Apr 1, 2019.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. I did read the troubleshooting guide but nothing worked. My robots.txt file disallows a few bots, but not Rogerbot. Anyone have any idea what is going on? Or do I need to request server logs from Zenfolio? Thanks
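One way to double-check that a robots.txt does not block a particular crawler is Python's built-in robots.txt parser. The sketch below is purely illustrative: the site URL is a placeholder and the "rogerbot" user-agent token is an assumption, so adjust both before relying on the result.

```python
# Minimal sketch using the standard library to check whether a robots.txt
# blocks a given user agent.
from urllib.robotparser import RobotFileParser

# Placeholder URL; swap in the real Zenfolio-hosted domain.
parser = RobotFileParser("https://example-zenfolio-site.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt

# "rogerbot" is an assumed user-agent token; "*" covers the catch-all rules.
for agent in ("rogerbot", "*"):
    print(agent, "allowed to fetch / :", parser.can_fetch(agent, "/"))
```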
Getting Started | | bpenn111 -
When I crawl my site on Moz it says it can't access the robots.txt file, but the crawl is fine on SEMrush - anyone know any reason for this?
Hi guys, When I try to run a site crawl on Moz, it returns an error saying that it has failed due to an error with the robots.txt file. However, my site can be crawled by SEMrush with no mention of any robots.txt issues. My developer has looked into it and insists there is no problem with my robots.txt, and I've tried the Moz crawl at least 6 times over an 8-week period. Has anyone ever seen such a large discrepancy between Moz and SEMrush, or have any ideas why Moz has this issue with my site? TIA everyone
Getting Started | | Webreviewadmin0 -
Does Moz pick up every issue in one crawl?
Hi, Does Moz pick up every error/warning in one crawl, or does it take numerous crawls? Many thanks, Lee
Getting Started | | lbagley0 -
My page cannot be crawled
Hi all, hope you can help me here. I just saw this message but I don't know how to fix it: "Your page redirects or links to a page that is outside of the scope of your campaign settings. Your campaign is limited to pages with _____ in the URL path, which prevents us from crawling through the redirect or the links on your page. To enable a full crawl of your site, you may need to create a new campaign with a broader scope, adjust your redirects, or add links to other pages that include ig.com/de. Typically errors like this should be investigated and fixed by the site webmaster." Any ideas about how I should fix it? Thanks
Getting Started | | lauracelada23100 -
A lot of duplicate content issues - does Moz understand canonical URLs?
Hi, Since I subscribed to Moz, my Magento store has shown a lot of duplicate content issues. However, I did have a problem with canonical URLs at the time. That has been settled for a couple of weeks now, and although I had 302 redirects before, I configured Magento to use 301s today. Moz has been crawling and reporting duplicate content for exactly the same Magento pages but with endings like store=us, store=aus, etc. (since I have several store views enabled). Does the canonical URL actually help Google skip these versions of the duplicate pages? Does Moz also understand it, and will it reduce the number of duplicate content errors once the 301 redirects and canonical URLs have been properly set for a week or so? Thanks!
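As a rough way to verify this kind of setup, you can request one of the store-view variants directly and see whether it now returns a 301 or at least declares a canonical. The sketch below is only illustrative; the URL and query parameter are placeholders, not the poster's actual store.

```python
# Illustrative check: fetch a store-view variant of a page and report the HTTP
# status plus the rel="canonical" target, to confirm whether the variant now
# 301-redirects or at least declares a canonical.
import re
import requests

# Placeholder URL and query parameter; substitute a real store-view variant.
variant_url = "https://example-magento-store.com/some-product?store=us"

resp = requests.get(variant_url, allow_redirects=False, timeout=10)
print("Status:", resp.status_code)

if resp.status_code in (301, 302):
    print("Redirects to:", resp.headers.get("Location"))
else:
    # Naive regex check for a rel="canonical" link; an HTML parser would be more robust.
    match = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)', resp.text, re.I)
    print("Canonical:", match.group(1) if match else "none found")
```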
Getting Started | | speedbird12290