Zendesk robots.txt
-
Hello!
We have a Zendesk support site at support.zspace.com, and our Moz crawl report says there are 85 "temporary redirect" issues, mostly coming from our support site.
My question is, does the Moz crawler respect robots.txt?
We currently have "Disallow: /search" in our robots.txt, but not "/search/*".
Most of the temporary redirect URLs in the crawl error report look like this:
https://support.zspace.com/hc/en-us/search/click?data=BAh7CjoHaWRp
So would we need to add "Disallow: /hc/en-us/search/*" to get the Moz crawler to ignore these?
-
Thanks Yossi, this is kind of what I expected. I guess the question should have been "has anyone had Moz crawl issues with their Zendesk support site?"
The main issue with our support site is that Zendesk does not allow access to the robots.txt file, so there is no way to add wildcard patterns like search/* to it.
I will re-post the question as above.
-
Hi
Moz's crawler, aka Rogerbot, does obey robots.txt, and Moz states that under the help section here.
It looks like the problem you are having is connected to your robots.txt statements/directives.
To block a directory **and its content**, you need to use the directory name followed by a forward slash (even if you have a redirect from xxx.com/search to xxx.com/search/).
(Check out Google's robots.txt guidelines.) If you want to block the directory "search" and its content, use "Disallow: /search/" (no need for an asterisk at the end).
If you use "Disallow: /hc/en-us/search/" (again, no asterisk needed) you will block all the content under xxx.com/hc/en-us/search/.
So, for example, if you have content that you want to block under xxx.com/hc/fr-FR/search/, it will not be blocked, because your statement/directive limits it to the specific "search" directory located under "en-us". What is it that you want to block, exactly?
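The locale point above is easy to verify with Python's standard-library robots.txt parser. This is just a sketch: the hostname is a placeholder and the rules are illustrative, not Zendesk's actual file.

```python
from urllib import robotparser

# Illustrative rules, not Zendesk's actual robots.txt.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /hc/en-us/search/",
])

# URLs under /hc/en-us/search/ are blocked for any compliant crawler...
print(rp.can_fetch("rogerbot", "https://support.example.com/hc/en-us/search/click?data=abc"))  # False
# ...but the same path under another locale is not, since the rule is a
# plain path prefix scoped to "en-us".
print(rp.can_fetch("rogerbot", "https://support.example.com/hc/fr-fr/search/click?data=abc"))  # True
```

This mirrors the advice in the answer: a trailing slash (no asterisk) blocks everything under that directory, but only for the exact locale path given.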
Related Questions
-
Limit Moz crawl rate on Shopify or when you don't have access to robots.txt
Hello. I'm wondering if there is a way to control the crawl rate of Moz on our site. It is hosted on Shopify, which does not allow any kind of control over the robots.txt file to add a rule like this:
User-agent: rogerbot
Crawl-delay: 5
Due to this, we get a lot of 430 error codes, mainly on our products, and this certainly would prevent Moz from getting the full picture of our shop. Can we rely on Moz's data when critical pages are not being crawled due to 430 errors? Is there any alternative to fix this? Thanks
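As a sanity check on the directive itself, Python's standard-library robots.txt parser can read a Crawl-delay rule back. This is only a sketch of how the rule parses; whether a given crawler actually honors Crawl-delay is up to that crawler.

```python
from urllib import robotparser

# Parse the rule from the question and read the delay back.
# Note: honoring Crawl-delay is up to the crawler; Googlebot ignores it.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: rogerbot",
    "Crawl-delay: 5",
])

print(rp.crawl_delay("rogerbot"))   # 5
print(rp.crawl_delay("googlebot"))  # None (no matching User-agent group)
```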
Moz Bar | | AllAboutShapewear2 -
How can the crawler not access my robots.txt file but report 0 crawler issues?
So I'm getting this error: "Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster." https://www.evernote.com/l/ADOmJ5AG3A1OPZZ2wr_ETiU2dDrejywnZ8k However, Moz is saying I have 0 crawler issues. Have I hit an edge case? What can I do to rectify this situation? I'm looking at my robots.txt file here: http://www.dateideas.net/robots.txt but I don't see anything that would specifically get in the way. I'm trying to build a helpful resource from this domain, and getting zero organic traffic, and I have a sinking suspicion this might be the main culprit. I appreciate your help! Thanks! 🙂
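One way to reproduce what the crawler sees is to request robots.txt directly and check the HTTP status code: a 5xx response or a timeout would explain the error even when the file's contents look fine in a browser. A minimal sketch; the helper name is made up.

```python
from urllib import error, request

def robots_status(base_url: str, timeout: float = 10.0) -> int:
    """Fetch <base_url>/robots.txt and return the HTTP status code."""
    url = base_url.rstrip("/") + "/robots.txt"
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except error.HTTPError as exc:
        # urlopen raises on 4xx/5xx; the status code is still useful.
        return exc.code

# e.g. robots_status("http://www.dateideas.net") should return 200 when the
# server is healthy; a 5xx here matches the crawler's complaint.
```

Running this a few times over the course of a day can also catch intermittent outages that a single manual check would miss.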
Moz Bar | | will_l0 -
Is the updated Site Crawl feature following robots.txt rules?
I noticed that most of the errors would not be occurring if Moz's tool followed the rules implemented in sites' robots.txt files. Has anyone else seen this problem, and do you know if Moz will fix it?
Moz Bar | | jamestown0 -
Has anyone had to deal with Moz crawl issues on their Zendesk support site?
If so - how did you end up resolving them? For instance we have 85 "temporary redirect" errors from our Zendesk support site in our crawl error report and we don't have access to the robots.txt file through Zendesk.
Moz Bar | | zspace0 -
MozBar > General Attributes > Meta Robots > noindex
I'm having a hard time figuring out where the noindex value for Meta Robots is coming from so I can fix it. Can anybody spot the issue or point me to some docs that show where the MozBar finds this information? http://www.produnkhoops.com
Moz Bar | | tatermarketing0 -
Meta Robots "Index, Follow"
In my MozBar under "General Attributes" it says "index, follow" next to Meta Robots for one of our client's websites. I've never seen "index, follow" before. I've seen it say "not found." What does "index, follow" mean, and is it a bad thing? I know the answer should be obvious, but this site has had a lot of problems and I'm wondering if this is related.
Moz Bar | | SEOhughesm1 -
Will robots.txt override a Firewall for Rogerbot?
Hey everybody. Our server guy, who is sorta difficult, has put these ridiculous security measures in place which lock people out of our website all the time. Basically, if I ping the website too many times I get locked out, and that's just on my own, doing general research. Regardless, all of our audits are coming back with 5xx errors, and I asked if we could add rogerbot to the robots.txt. He seems resistant to the idea and just wants to adjust the settings on his firewall... Does anybody know if putting that in the robots.txt will override the firewall/ping defense he has put in place? I personally think what he has done is WAY overkill, but that is beside the point. Thanks everybody.
Moz Bar | | HashtagHustler0 -
Moz "Crawl Diagnostics" doesn't respect robots.txt
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like: duplicate content, overly dynamic URLs, and duplicate page titles. The duplicate content and URLs it's found are all blocked in the robots.txt, so why am I seeing these errors? Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which the Moz bot ignored):
Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/
Many thanks for any info on this issue.
Moz Bar | | Vitalized0
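One detail worth checking on rules like these: wildcard patterns such as "/?p=*&" are a Google extension, and simpler parsers do plain prefix matching instead. Python's standard-library parser, for instance, treats each Disallow value as a literal path prefix. A sketch with a made-up hostname:

```python
from urllib import robotparser

# The stdlib parser matches Disallow values as literal path prefixes;
# it does not expand "*" the way Googlebot does.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /?mode=",
    "Disallow: /reviews/",
])

print(rp.can_fetch("rogerbot", "https://example.com/?mode=grid"))       # False (blocked)
print(rp.can_fetch("rogerbot", "https://example.com/reviews/item-1"))   # False (blocked)
print(rp.can_fetch("rogerbot", "https://example.com/products/item-1"))  # True (allowed)
```

So prefix-style rules like "Disallow: /reviews/" behave consistently across parsers, while wildcard rules depend on the crawler's level of pattern support.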