Unsolved What would the exact text be for robots.txt to stop Moz crawling a subdomain?
-
I need Moz to stop crawling a subdomain of my site, and am just checking what the exact text should be in the file to do this.
I assume it would be:
User-agent: Moz
Disallow: /
But just checking so I can tell the agency who will apply it, to avoid paying for their time with the incorrect text!
Many thanks.
-
To disallow Moz from crawling a specific subdomain, you would need to add a robots.txt file to the root directory of that subdomain with the following content:
User-agent: rogerbot
Disallow: /
This will disallow Moz's web crawler, Rogerbot, from crawling any page or file within the subdomain. Keep in mind that this will only prevent Moz from crawling the subdomain; other search engines or bots may still be able to access it unless you add specific disallow rules for them as well.
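If you want to sanity-check the file before (or after) the agency deploys it, one option is Python's built-in robots.txt parser, which simulates how a given user agent reads the rules. A minimal sketch; the subdomain URL is a hypothetical placeholder for your own:

import urllib.robotparser

# Hypothetical subdomain; substitute the one you are blocking
robots_url = "https://sub.example.com/robots.txt"

rp = urllib.robotparser.RobotFileParser()
rp.set_url(robots_url)
rp.read()  # fetch and parse the live robots.txt

# can_fetch() returns False when the rules disallow that user agent
print(rp.can_fetch("rogerbot", "https://sub.example.com/any-page"))   # expect False
print(rp.can_fetch("Googlebot", "https://sub.example.com/any-page"))  # expect True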
-
@Simon-Plan No, when you put just a slash (/) you will disallow everything.
Instead you need to put /foo/, where foo is your subdomain. Please see here for some relevant examples: https://searchfacts.com/robots-txt-allow-disallow-all/
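For clarity, the two patterns being discussed apply to different setups (sub and example.com below are placeholders). If the content lives on a true subdomain, that subdomain serves its own robots.txt at https://sub.example.com/robots.txt, and blocking the whole subdomain there looks like:

User-agent: rogerbot
Disallow: /

If instead the content lives in a folder of the main site, the main domain's robots.txt blocks just that path:

User-agent: rogerbot
Disallow: /foo/

A robots.txt file only governs the host it is served from, which is why rules for a subdomain belong in the subdomain's own file.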
Related Questions
-
Unsolved Crawler was not able to access the robots.txt
I'm trying to set up a campaign for jessicamoraninteriors.com and I keep getting messages that Moz can't crawl the site because it can't access the robots.txt. Not sure why; other crawlers don't seem to have a problem, and I can access the robots.txt file from my browser. For some additional info, it's a Squarespace site and my DNS is handled through Cloudflare. Here's the contents of my robots.txt file:
# Squarespace Robots Txt
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: CCBot
User-agent: anthropic-ai
User-agent: Google-Extended
User-agent: FacebookBot
User-agent: Claude-Web
User-agent: cohere-ai
User-agent: PerplexityBot
User-agent: Applebot-Extended
User-agent: AdsBot-Google
User-agent: AdsBot-Google-Mobile
User-agent: AdsBot-Google-Mobile-Apps
User-agent: *
Disallow: /config
Disallow: /search
Disallow: /account$
Disallow: /account/
Disallow: /commerce/digital-download/
Disallow: /api/
Allow: /api/ui-extensions/
Disallow: /static/
Disallow:/*?author=*
Disallow:/*&author=*
Disallow:/*?tag=*
Disallow:/*&tag=*
Disallow:/*?month=*
Disallow:/*&month=*
Disallow:/*?view=*
Disallow:/*&view=*
Disallow:/*?format=json
Disallow:/*&format=json
Disallow:/*?format=page-context
Disallow:/*&format=page-context
Disallow:/*?format=main-content
Disallow:/*&format=main-content
Disallow:/*?format=json-pretty
Disallow:/*&format=json-pretty
Disallow:/*?format=ical
Disallow:/*&format=ical
Disallow:/*?reversePaginate=*
Disallow:/*&reversePaginate=*
Any ideas?
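One quick check, since Cloudflare or a WAF rule can treat crawlers differently from browsers, is to request the file while identifying as Rogerbot. A minimal sketch using only the Python standard library; note this only approximates the crawler's request, as Cloudflare can also filter on IP or TLS fingerprint, so a success here doesn't fully rule that out:

import urllib.request

url = "https://jessicamoraninteriors.com/robots.txt"
# Identify as Moz's crawler; a rule that matches this User-Agent string
# would make this request fail where a normal browser request succeeds.
req = urllib.request.Request(url, headers={"User-Agent": "rogerbot"})
# A block (e.g. a 403) raises urllib.error.HTTPError instead of returning
with urllib.request.urlopen(req) as resp:
    print(resp.status)     # expect 200
    print(resp.read(300))  # first bytes of the file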
Getting Started | andrewrench
-
Good to use disallow or noindex for these?
Hello everyone, I am reaching out to seek your expert advice on a few technical SEO aspects of my website. Below are the specific areas I would like to discuss:

a. Double and triple filter pages: I have identified certain URLs on my website that have a canonical tag pointing to the main /quick-ship page. These URLs are as follows:
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black+fabric
Considering the need to optimize my crawl budget, would it be advisable to disallow or noindex these pages? My understanding is that this would stop search engines from wasting resources on crawling and indexing duplicate or filtered content.

b. Page URLs with parameters: Some of my page URLs include parameters such as ?variant and ?limit. Although these URLs already have canonical tags in place, is it still recommended to disallow or noindex them to further conserve crawl budget?

Additionally, I would welcome any suggestions regarding internal linking strategies tailored to my website's structure and content. Thank you in advance for your time and expertise; if you require any further information or clarification, please let me know. Cheers!
Technical SEO | williamhuynh
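For reference, the two mechanisms being weighed look like this; these are illustrative patterns only, not a recommendation for these specific URLs. Blocking crawling via robots.txt on the main domain:

User-agent: *
Disallow: /*?variant=*
Disallow: /*?limit=*

Versus allowing crawling but blocking indexing, via a meta robots tag in each filtered page's head:

<meta name="robots" content="noindex, follow">

One design note: a URL that is disallowed in robots.txt cannot be crawled at all, so search engines will never see a noindex or canonical tag placed on it; the two mechanisms should not be combined on the same URL.
-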
slug Link redirect to subdomain?
Hi!
I'm Levi, new here and new to the world of SEO, so please don't judge if my questions are silly. Back in the days when the site was built, we thought it was a good idea to have subdomains that, together with the domain name, represent our main keywords, e.g. https://stansted.tonorwich.uk, https://heathrow.tonorwich.uk, https://luton.tonorwich.uk, https://gatwick.tonorwich.uk. There is content on these subdomains; would it make any difference from an SEO perspective if we create slugs that redirect to these subdomains? For example, creating https://tonorwich.uk/taxi-minibus-vip-tesla-norwich-to-stansted that redirects to https://stansted.tonorwich.uk? Or would it be better to create these slugs with slightly different content?
Any ideas would be appreciated.
Thanks in advance!
Link Building | Leviiii
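For what it's worth, the proposal amounts to a permanent (301) redirect from each slug to its subdomain. A sketch of that idea, assuming a Flask-style app purely for illustration (the site's actual stack isn't stated in the question):

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/taxi-minibus-vip-tesla-norwich-to-stansted")
def stansted_slug():
    # A 301 consolidates ranking signals on the target URL,
    # so the slug itself would not rank independently.
    return redirect("https://stansted.tonorwich.uk", code=301)

-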
Dynamic Canonical Tag for Search Results Filtering Page
Hi everyone, I run a website in the travel industry where most users land on a location page (e.g. domain.com/product/location) before performing a search by selecting dates and times. This then takes them to a pre-filtered dynamic search results page with options for their selected location on a separate URL (e.g. /book/results). The /book/results page can only be accessed on our website by performing a search, and URLs with search parameters from this page have never been indexed in the past.

We work with some large partners who use our booking engine and who have recently started linking to these pre-filtered search results pages. This is not being done on a large scale, and at present we only have a couple of hundred of these search results pages indexed. I could easily add a noindex or self-referencing canonical tag to the /book/results page to remove them; however, it's been suggested that adding a dynamic canonical tag to our pre-filtered results pages pointing to the location page (based on the location information in the query string) could be beneficial for the SEO of our location pages. This makes sense, as the partner websites that link to our /book/results page are very high authority and any way this could be passed to our location pages (which are our most important in terms of rankings) sounds good. However, I have a couple of concerns:

• Is using a dynamic canonical tag in this way considered spammy or manipulative?
• Whilst all the content that appears on the pre-filtered /book/results page is present on the static location page where the search initiates (and which the canonical tag would point to), it is presented differently, and there is a lot more content on the static location page that isn't present on the /book/results page. Is this likely to see the canonical tag being ignored and link equity not being passed as hoped, and are there greater risks to this that I should be worried about?

I can't find many examples of other sites where this has been implemented, but the closest would probably be booking.com. https://www.booking.com/searchresults.it.html?label=gen173nr-1FCAEoggI46AdIM1gEaFCIAQGYARS4ARfIAQzYAQHoAQH4AQuIAgGoAgO4ArajrpcGwAIB0gIkYmUxYjNlZWMtYWQzMi00NWJmLTk5NTItNzY1MzljZTVhOTk02AIG4AIB&sid=d4030ebf4f04bb7ddcb2b04d1bade521&dest_id=-2601889&dest_type=city& Canonical points to https://www.booking.com/city/gb/london.it.html

In our scenario, however, there is a greater difference between the content on the two pages (and booking.com has a load of search results pages indexed, which is not what we're looking for). Would be great to get any feedback on this before I rule it out. Thanks!
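For illustration, the suggestion boils down to /book/results emitting a canonical element whose href is derived from the location in its query string. Assuming a hypothetical parameter name (location is not specified in the question), a request for /book/results?location=london would render this in the page head:

<link rel="canonical" href="https://domain.com/product/london" />

Since canonical tags are treated by search engines as hints rather than directives, a large content mismatch between the results page and the location page does increase the chance of the tag being ignored, which is worth weighing against the potential equity gain.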
Technical SEO | GAnalytics
-
Unsolved error in crawling
Hello Moz. My site is Papion Shopping, but when I try to add it, an error appears saying that Moz can't gather any data. What can I do?
Moz Tools | valigholami1386
-
MOZ point
I don't know why, but since a week ago I'm not receiving MozPoints for my activity on the Moz forum. For example, today I posted 3 answers in the Questions section, but my Moz profile does not show the 3 points I normally receive for that. A week ago I suddenly received 20 points, and I have no idea why; maybe someone marked one of my answers as a good answer. So my question is: where can I find the exact tracking record of my activity?
Getting Started | Roman-Delcarmen
-
How to authenticate Moz crawler so that others don't use Rogerbot useragent to scrape data from our site?
Is there any way to authenticate the genuine Moz crawler? Our website keeps getting scraping attacks, and if there is no way to authenticate the Moz crawler, then any scraper can just set its user agent to Rogerbot and scrape all our pages. Is there a fixed IP that can be used, or any other customization that will help us authenticate and allow only the Moz crawler to crawl our site? Looking forward to a solution to this problem. We haven't been able to use the Moz crawler due to this issue.
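For context, the generic pattern for authenticating a crawler (the one Google documents for Googlebot) is a reverse-then-forward DNS check. Whether Moz publishes a verifiable hostname or fixed IP range for Rogerbot is exactly the open question here, so treat this as a sketch of the technique rather than a confirmed solution:

import socket

def verify_crawler_ip(ip, allowed_suffixes):
    # Reverse DNS: find the hostname this IP claims to be
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    # The hostname must end with a suffix the crawler's operator publishes
    if not host.endswith(tuple(allowed_suffixes)):
        return False
    # Forward-confirm: that hostname must resolve back to the same IP
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

# Example with Googlebot's documented suffixes:
# verify_crawler_ip("66.249.66.1", (".googlebot.com", ".google.com"))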
Getting Started | longclimber
-
Why can't I add more than one campaign in my trial version of MOZ?
Hi, I'm on the 30-day trial and I can only run one campaign right now. I thought I could add 5 campaigns. When I go into Manage Campaigns, the 'add a campaign' button is light blue and I can't click it. Is this just because I'm on the trial, or should I be able to add 4 more campaigns?
Getting Started | Sophie-Kool