Will robots.txt override a Firewall for Rogerbot?
-
Hey everybody.
Our server guy, who is sorta difficult, has put some heavy-handed security measures in place which lock people out of our website all the time. Basically, if I hit the website too many times I get locked out, and that's just me on my own, doing general research.
Regardless, all of our audits are coming back with 5xx errors, so I asked if we could add rogerbot to the robots.txt. He seems resistant to the idea and just wants to adjust his firewall settings...
Does anybody know if putting that in the robots.txt will override his firewall/ping defense? I personally think what he has done is way overkill, but that is beside the point.
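For reference, the most a robots.txt change could ever do is ask rogerbot to slow down, not punch it through a firewall. Moz's docs say rogerbot honors the Crawl-delay directive, so a sketch of that (the 10-second value is just an example) would be:

```text
# robots.txt — ask Moz's crawler to wait between requests
User-agent: rogerbot
Crawl-delay: 10
```

Even then, the crawler has to be able to reach the file over the network before it can obey it.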
Thanks everybody.
-
So I spoke with our host. Basically he has been adjusting the port flood settings because of a DDoS attack we had roughly 9 months ago.
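For anyone curious what "port flood settings" means in practice: assuming the host runs ConfigServer Firewall (csf) — which is a guess on my part, it's just a common choice on shared WordPress servers — the relevant knob in /etc/csf/csf.conf looks like this:

```text
# Hypothetical csf.conf example: block an IP that opens more than
# 30 new connections to port 80 (tcp) within any 5-second window.
# Format is "port;protocol;hits;interval".
PORTFLOOD = "80;tcp;30;5"
```

A crawler fetching several pages per second can trip a tight rule like this and get its IP dropped at the connection level, which is exactly what produces the 5xx-style failures in the audits.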
We have roughly 1,000 domains on the same server, all with WordPress. I went through and changed the nameservers on around 800 of them to bring the number down. In the long run, I want to get us to 1 website. There is no reason for us to have 200, or 5 for that matter. They are redundant websites that were built simply to bolster our main website through blackhat tactics.
Our host stated that the only way to keep things kosher would be to switch all 1,000 domains to a new server every 2 years because once the "hackers" find out that there is a cluster of 1,000 domains in the same place, they will blast it.
Anyway, I'm working on cutting the domains in the safest way possible, and switching servers as soon as possible!
-
Yes it does, thank you!
When I asked our dev (who also hosts our domains) to adjust the settings for rogerbot, he said:
"3 pages per second is basically me undoing the portflood setting completely, thus rendering the site very insecure to brute force attempts, which would inevitably drive the server load very high in anywhere from 3-24 hours."
I am glad that he is concerned about the security of our website. At the same time, I find it hard to believe we need anything near this intense. We do not have an online store, and we do not collect credit card data or anything like that.
It seems overkill...
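If his worry is that allowing 3 pages per second means undoing the port-flood rule for everyone, there is a middle ground: rate-limit at the web-server layer and exempt the crawler there. A sketch, assuming an nginx front end (which may not match your stack, and note that user-agent strings can be spoofed, so this trades some security for crawlability):

```text
# Hypothetical nginx fragment (goes in the http {} context):
# rate-limit every client by IP, but exempt rogerbot — an empty
# key means the request is not counted against the limit.
map $http_user_agent $limit_key {
    default      $binary_remote_addr;
    ~*rogerbot   "";
}
limit_req_zone $limit_key zone=perip:10m rate=3r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=10 nodelay;
        # ... normal site config ...
    }
}
```

That keeps a per-IP cap on everyone else while letting the audit crawler through.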
-
Unfortunately, no. The security he has in place will block the crawler's access before it ever gets a chance to see the robots.txt file.
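To make the ordering concrete, here's a minimal sketch (hypothetical pseudologic, not Rogerbot's actual code) of why the firewall always wins: a polite crawler must fetch /robots.txt over the network first, so a connection-level block stops it before any robots rules are ever read.

```python
def crawl(fetch, url):
    """Model of a polite crawler. fetch(path) returns a response body,
    or raises ConnectionError if the firewall drops the connection."""
    try:
        rules = fetch("/robots.txt")  # must succeed before anything else
    except ConnectionError:
        return "blocked: never even saw robots.txt"
    if "Disallow: /" in rules:
        return "skipped: disallowed by robots.txt"
    return fetch(url)

# Firewall open: robots.txt is fetched and honored.
def open_site(path):
    return "User-agent: *\nDisallow:" if path == "/robots.txt" else "<html>page</html>"

print(crawl(open_site, "/index.html"))     # -> <html>page</html>

# Firewall blocking the crawler's IP: robots.txt is irrelevant.
def blocked_site(path):
    raise ConnectionError("port-flood rule dropped the connection")

print(crawl(blocked_site, "/index.html"))  # -> blocked: never even saw robots.txt
```

So whitelisting the crawler has to happen at the firewall; robots.txt can only speak to crawlers that can already get through.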
If you have a dev who's making business-limiting decisions, you have a major problem and need to address that first.
Hope that helps?
Paul