Robots.txt: does a rule need the preceding directory structure?
-
Do you need the entire preceding path in robots.txt for a rule to match?
For example:
I know that if I add Disallow: /fish to robots.txt it will block:
/fish
/fish.html
/fish/salmon.html
/fishheads
/fishheads/yummy.html
/fish.php?id=anything
But would it also block these?:
en/fish
en/fish.html
en/fish/salmon.html
en/fishheads
en/fishheads/yummy.html
en/fish.php?id=anything
(The examples above are taken from the Robots.txt Specifications.) I'm hoping it actually won't match; that would make writing this particular robots.txt much easier!
Basically, I want to block many URLs that contain BTS-, such as:
http://www.example.com/BTS-something
http://www.example.com/BTS-somethingelse
http://www.example.com/BTS-thingybob
But I have other pages that I do not want blocked, in subfolders whose URLs also contain BTS-, such as:
http://www.example.com/somesubfolder/BTS-thingy
http://www.example.com/anothersubfolder/BTS-otherthingy
Thanks for listening!
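A quick way to sanity-check this prefix-matching behaviour is Python's built-in urllib.robotparser. Below is a minimal sketch (the example.com URLs are just the placeholders from the question) showing that a rule only matches from the start of the URL path, so Disallow: /fish never catches anything under /en/:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.modified()  # mark the rules as loaded so can_fetch will evaluate them
rp.parse([
    "User-agent: *",
    "Disallow: /fish",
])

# Paths beginning with /fish are blocked...
for path in ["/fish", "/fish.html", "/fish/salmon.html",
             "/fishheads", "/fishheads/yummy.html",
             "/fish.php?id=anything"]:
    assert not rp.can_fetch("*", "http://www.example.com" + path)

# ...but the /en/ versions are not, because matching is anchored
# at the beginning of the URL path.
for path in ["/en/fish", "/en/fish.html", "/en/fish/salmon.html",
             "/en/fishheads", "/en/fishheads/yummy.html",
             "/en/fish.php?id=anything"]:
    assert rp.can_fetch("*", "http://www.example.com" + path)

print("Disallow: /fish blocks the /fish... paths but nothing under /en/")
```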
-
Yes, this is what I thought, but I wanted a second opinion.
Although I wouldn't actually need a wildcard after BTS-, as just leaving it open is the same as using a wildcard:
/fish* is equivalent to /fish; the trailing wildcard is ignored (https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt).
Thanks for the link, I'll take a look.
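To see why the trailing wildcard adds nothing, here is a rough toy matcher (my own sketch, not any official implementation) modelling the two wildcard rules from Google's spec: * matches any run of characters, and a trailing $ anchors the match to the end of the path.

```python
import re

def rule_to_regex(rule: str) -> re.Pattern:
    # Escape regex metacharacters, then translate the two robots.txt
    # wildcards: '*' matches any sequence of characters, and a trailing
    # '$' anchors the pattern to the end of the path.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile("^" + pattern)

paths = ["/fish", "/fish.html", "/fish/salmon.html", "/fishheads", "/en/fish"]
for path in paths:
    # '/fish*' and '/fish' match exactly the same set of paths,
    # so the trailing '*' is redundant.
    assert bool(rule_to_regex("/fish*").match(path)) == \
           bool(rule_to_regex("/fish").match(path))

# A trailing '$' is the one ending that does change the behaviour:
assert rule_to_regex("/fish$").match("/fish")
assert not rule_to_regex("/fish$").match("/fish.html")
```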
-
You're right that Disallow: /fish in the robots.txt file blocks all those initial URLs. A rule only matches from the start of the URL path, though, so it won't catch the /en/ versions; if you wanted to block everything inside the /en/ folder as well, you would need a separate Disallow: /en/fish rule.
To block the BTS- pages, you could use a wildcard in the robots.txt file, along the lines of Disallow: /BTS-*
This should work, but it's always worth checking with a tool to make sure it's all implemented correctly. Distilled did a post a while back about a JS bookmarklet for checking whether a page is blocked by robots.txt, which can be found here: http://www.distilled.net/blog/seo/js-bookmarklet-for-checking-if-a-page-is-blocked-by-robots-txt/
In addition to this, you could also use the 'Blocked URLs' tool in Google Webmaster Tools to check that the pages are successfully blocked once you've implemented the rule.
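As one more sanity check for the BTS- case, the same standard-library parser from the sketch above can confirm the behaviour. Note the rule is written as Disallow: /BTS- without the trailing wildcard, since urllib.robotparser implements the original spec and would treat a literal * as part of the path, whereas Google ignores a trailing wildcard anyway:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.modified()  # mark the rules as loaded so can_fetch will evaluate them
rp.parse([
    "User-agent: *",
    "Disallow: /BTS-",  # equivalent to /BTS-* for Googlebot
])

blocked = ["http://www.example.com/BTS-something",
           "http://www.example.com/BTS-somethingelse",
           "http://www.example.com/BTS-thingybob"]
allowed = ["http://www.example.com/somesubfolder/BTS-thingy",
           "http://www.example.com/anothersubfolder/BTS-otherthingy"]

# Top-level BTS- pages are blocked; the subfolder BTS- pages are not,
# because the rule only matches from the start of the path.
assert all(not rp.can_fetch("*", url) for url in blocked)
assert all(rp.can_fetch("*", url) for url in allowed)
```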
Hope this helps!
Related Questions
-
I need help on how best to do a complicated site migration. Replacing certain pages with all new content and tools, and keeping the same URLs. The rest just need to disappear safely. Somehow.
I'm completely rebranding a website but keeping the same domain. All content will be replaced and it will use a different theme and mostly new plugins. I've been building the new site as a different site in Dev mode on WPEngine. This means it currently has a made-up domain that needs to replace the current site. I know I need to somehow redirect the content from the old version of the site. But I'm never going to use that content again. (I could transfer it to be a Dev site for the current domain and automatically replace it with the click of a button - just as another option.) What's the best way to replace blahblah.com with a completely new blahblah.com if I'm not using any of the old content? There are only about 4 URLs, such as blahblah.com/contact, that will remain the same - with all content replaced. There are about 100 URLs that will no longer be in use or have any part of them ever used again. Can this be done safely?
Intermediate & Advanced SEO | brickbatmove1
-
Structured data? Confused
I understand the basic concept of structured data (guiding search engines on how to interpret the content), but how do I implement it? After creating a product page with images, content, links, etc., what do I do to make sure we are covered on all the different structured content types? Is there a tool to make it happen? Sorry for not understanding it...
Intermediate & Advanced SEO | Jamesmcd030
-
Would you pursue this paid directory link?
Hi all, I KNOW the hard and true answer to this, but I'm looking for deeper insights regarding links like those contained on this page. I understand the by-the-book answer would be to only pursue a paid link if it is "nofollowed" OR if it has the potential to bring in new business and traffic. My question is: does a link like this actually pass SEO value? I see businesses killing it from an SEO standpoint with link profiles full of paid directory links like this. I also think this conversation is more interesting now that Google appears to be devaluing links like this instead of issuing penalties. Thoughts??
Intermediate & Advanced SEO | RickyShockley0
-
Need to change 1 million page URLs
Hey all, I have a community site where users upload photos and videos. It launched in 2003, and back then it wasn't such a bad idea to use keywords/tags in the URLs, so I did that. All my content pages (individual photo/video) look like this: www.domain.com/12345-kw1-kw2-kw3-k4-k5 and so on, where 12345 is the unique content ID and the rest are keywords/tags added by the uploader. I would like to get rid of the keywords after the ID in the URL. My site is well coded, so this can easily be done by changing a simple function, so my content page URLs become: www.domain.com/ID What is the best course of action? 301 the keyword URLs to the non-keyword version? Canonical? I really want to do this the proper way. Any advice is highly appreciated. Thanks in advance.
Intermediate & Advanced SEO | mlqsko0
-
Meta canonical or simply robots.txt other domain names with same content?
Hi, I'm working with a new client who has a main product website. This client has representatives who also sell the same products, but all those reps have a copy of the same website on another domain name. The best thing would probably be to shut down the other (duplicate) websites and 301 redirect them to the main one, but that's impossible in the mind of the client. First choice: implement a canonical meta tag for all the URLs on all the other domain names. Second choice: robots.txt with a disallow for all the other websites. Third choice: I'm really open to other suggestions 😉 Thank you very much! 🙂
Intermediate & Advanced SEO | Louis-Philippe_Dea0
-
How long until paid links get punished?
Hi, I just discovered a competitor who is dominating the first page of Google with 4 websites. I analysed them, and all of them were backed up by a private domain network with subdomains, maybe 100 subdomains of the main domain. Each of the 4 domains has about 3,500 links with the same anchor text and 40-50 with different ones; all are exact-match domains. So I just reported him, along with the one who is selling him the links. How much time does Google need to take action, and how long until these 4 websites get punished? Plus, what will happen to them? I reported them yesterday. How much time does Google need to punish him, if it does... Thanks
Intermediate & Advanced SEO | nyanainc0
-
Help me choose a new URL structure
Good morning SEOMoz. I have a huge website, with hundreds of thousands of pages. The website's theme is mobile phone downloads. I want to create a better URL structure. Currently an example URL is /wallpaper/htc-wildfire-wallpapers.html My issue with this, first and foremost, is that it's a little spammy; for example, the fact that it's in a wallpaper folder means I shouldn't really need to be explicit with the filename, as it's implied. Another issue arises with the download page, for example /wallpaper/1234/file-name-mobile-wallpaper.html Again it's spammy, but also the file ID is at folder level rather than within the filename, making the URL deeper and losing structure. I am considering creating subdomains, based on model, to ensure a really tight silo, i.e. htc.domain.com/wallpaper/wildfire/ and the download page would be htc.domain.com/wallpaper/file-name-id/ But due to restrictions with the CMS, this would involve a lot of work, so I am considering just cleaning up the URL structure without subdomains: /wallpaper/htc/wildfire/ and the download page would be /wallpaper/file-name-id/ What are your thoughts? Somebody suggested having the downloads in no folder at all, but surely it makes sense for a wallpaper to be in a wallpaper folder and an app to be in an app folder? If they were not in a folder, I'd need to be more explicit in the naming of the files. Any advice would be awesome.
Intermediate & Advanced SEO | seo-wanna-bs0
-
Robots
I have just noticed this in my code: <meta name="robots" content="noindex"> And I have noticed some of my keywords have dropped; could this be the reason?
Intermediate & Advanced SEO | Paul780