Why can't RogerBot crawl the site https://unplag.com?
-
Hello,
Please help me solve this problem.
The On-Page Grader and Crawl Test are not working for the Unplag.com website. Both say that they can't access the URL. Yes, I've tried different variants such as unplag.com and http://unplag.com.
One more thing: RogerBot was disallowed in the robots.txt file. I removed that rule a week ago, so maybe the Moz index hasn't been refreshed yet.
-
Thank you. I'll try to solve the problem.
-
The trouble is not with your robots.txt: in the server config you block rogerbot completely and serve a 400 for every request it makes.
If you install a user-agent switcher plugin in your browser and change the user agent to rogerbot's string (rogerbot/1.1 (http://moz.com/help/guides/search-overview/crawl-diagnostics#more-help, rogerbot-crawler+pr2-crawler-101@moz.com)), the server returns a 400 Bad Request.
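If you'd rather reproduce this from the command line than with a browser plugin, a small script can send a request with the User-Agent string quoted above and report the resulting status code. This is a minimal sketch; the function name and timeout are my own choices, not part of any Moz tooling:

```python
import urllib.error
import urllib.request

# Rogerbot's User-Agent string, exactly as it appears in the thread
ROGERBOT_UA = ("rogerbot/1.1 (http://moz.com/help/guides/search-overview/"
               "crawl-diagnostics#more-help, "
               "rogerbot-crawler+pr2-crawler-101@moz.com)")

def fetch_status(url, user_agent=ROGERBOT_UA):
    """Request `url` with the given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses are raised as HTTPError; the code is what we want
        return err.code

# Example: fetch_status("https://unplag.com/") returned 400 at the time of this thread
```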
Dirk
-
The logs are like this:
"GET / HTTP/1.0" 400 166 "-" "rogerbot/1.1 (http://moz.com/help/guides/search-overview/crawl-diagnostics#more-help, rogerbot-crawler+pr2-crawler-101@moz.com)" "-" - "https"
and of course rogerbot sometimes requests the robots.txt file as well:
"GET /robots.txt HTTP/1.1" 400 166 "-" "rogerbot/1.1 (http://moz.com/help/guides/search-overview/crawl-diagnostics#more-help, rogerbot-crawler+pr2-crawler-101@moz.com)" "-" - "https"
To me it looks as if rogerbot were disallowed in robots.txt, but the actual file is here: https://unplag.com/robots.txt
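To scan a full access log for rogerbot requests like the ones above, a short filter script helps. This is a minimal sketch that assumes the combined-style log format shown in the quoted lines; the regex and function name are illustrative:

```python
import re

# Matches request lines like the ones quoted above (combined-style log,
# with "rogerbot" appearing later in the User-Agent field)
LOG_LINE = re.compile(r'"GET (?P<path>\S+) HTTP/1\.[01]" (?P<status>\d{3}) .*rogerbot')

def rogerbot_hits(lines):
    """Yield (path, status_code) for every rogerbot request found in the log."""
    for line in lines:
        match = LOG_LINE.search(line)
        if match:
            yield match.group("path"), int(match.group("status"))
```

Feeding it the two log lines quoted above yields `('/', 400)` and `('/robots.txt', 400)`.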
-
thanks a lot!
-
Follow the advice from Jordan below and check your log files to see what response the server sends when rogerbot tries to visit the site.
I also noticed some DNS issues with your site (check http://dnscheck.pingdom.com/?domain=unplag.com); the nameservers don't seem to be configured correctly. In addition, you have a 302 redirect from http to https, which should be a 301. That's probably not related to your main issue, but it's worth fixing.
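To verify the redirect type yourself, you can send a single request and inspect the status code and Location header without following the redirect. A minimal sketch; the host, the use of HEAD, and the helper names are just examples:

```python
import http.client

# Permanent redirect status codes (301 Moved Permanently, 308 Permanent Redirect)
PERMANENT_REDIRECTS = {301, 308}

def redirect_info(host, path="/"):
    """Send one plain-HTTP HEAD request; return (status, Location) without following redirects."""
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

def is_permanent(status):
    """True for permanent redirect status codes (301, 308)."""
    return status in PERMANENT_REDIRECTS

# Example: redirect_info("unplag.com") reported a 302 to https at the time of this thread
```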
-
Thanks.
The last crawl was after the robots.txt change, and I don't see any errors in the dashboard.
-
After creating a fresh test campaign for the site, I'm still seeing a 400 response being served to rogerbot from https://unplag.com/. While I'm not able to pinpoint the exact setting that is causing the site to serve that response, I'd recommend checking your server logs to verify the response that is being served.
-
It's possible that your site hasn't been crawled yet (since you changed the robots.txt). You can see in your campaign dashboard (upper right corner) when the next crawl is scheduled.
Do you see any specific error codes on your dashboard?
Dirk