Robots.txt query
-
Quick question: if this appears in a client's robots.txt file, what does it mean?
Disallow: /*/_/
Does it mean no pages can be indexed? I have checked and there are no pages in the index, but it's a new site too, so I'm not sure whether this is the cause.
Thanks
Karen
-
Thank you so much, that is a great help!
-
That blocks all spiders from crawling any pages whose URLs match that pattern (not the whole site). I am not sure what or who added the /*/_/ rule, but unless there is something there they don't want indexed, it is not necessary to keep it.
One thing you might want to keep in mind as well: just because you block something in robots.txt doesn't mean a spider can't still go there.
Sometimes they don't listen to robots.txt (looking at you, Baidu).
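To make the pattern concrete, here is the rule in context; the sample URLs in the comments below are hypothetical, not taken from the client's site:

User-agent: *
Disallow: /*/_/
# With Googlebot-style wildcard matching, this blocks URLs with a /_/ path
# segment following another segment, for example:
#   /products/_/red-widget/
# while leaving URLs without such a segment crawlable, for example:
#   /products/red-widget/
#   /about/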
-
User-agent: *
Thanks for your response.
-
What is the user agent?
-
Related Questions
-
Using one robots.txt for two websites
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this: User-agent: * Disallow: Sitemap: https://www.siteA.org/sitemap Sitemap: https://www.siteB.com/sitemap Is this ok? I thought you needed one robots.txt per website which provides the URL for the sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
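For readability, here is the file described above laid out one directive per line (the empty Disallow blocks nothing; the sitemap URLs are the ones quoted in the question):

User-agent: *
Disallow:

Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap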
Technical SEO | ciehmoz
-
Handling Pages with query codes
In Moz, my client's site is getting loads of error messages for nofollow tags on pages. This is down to the query strings on the e-commerce site, so the URLs can look like this: https://www.lovebombcushions.co.uk/?bskt=31d49bd1-c21a-4efa-a9d6-08322bf195af Clearly I just want the URL before the ? to be crawled, but what can I do in the site to ensure that these nofollow errors are removed? Is there something I should do in the site to fix this? In the back of my mind I'm thinking rel-canonical tag, but I'm not sure. Can you help please?
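Purely as an illustration (the question itself is leaning toward a rel=canonical tag rather than blocking): a robots.txt wildcard rule aimed at the basket parameter from the example URL would look like the sketch below. The bskt parameter name is taken from that URL; whether blocking is the right approach for this site is a separate question.

User-agent: *
Disallow: /*?bskt=
# Keeps compliant crawlers off any URL carrying the bskt query parameter.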
Technical SEO | Marketing_Optimist
-
Can I Block https URLs using Host directive in robots.txt?
Hello Moz Community, Recently I have found that Google bots have started crawling the HTTPS URLs of my website, which is increasing the number of duplicate pages on our website. Instead of creating a separate robots.txt file for the HTTPS version of my website, can I use the Host directive in robots.txt to tell Google bots which version of the website is the original? Host: http://www.example.com I was wondering if this method will work and indicate to Google bots that the HTTPS URLs are a mirror of this website. Thanks for all of the great responses! Regards, Ramendra
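Sketched out, the directive described in the question would sit in the file like this. Note that Host is not part of the original robots.txt standard (historically it was a Yandex-specific extension), so this is an illustration of the proposal, not a confirmed way to consolidate HTTP and HTTPS for Googlebot:

User-agent: *
Host: http://www.example.com
# Host is a non-standard directive; support varies by search engine.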
Technical SEO | TJC.co.uk
-
Yet Another, Yet Important URL structure query.
Massive changes to our stock media site and structure here. While we have an extensive category system, previously our category pages have only been our search pages, with ID numbers for sorting categories. Now we have individual category pages. We have about 600 categories, with a maximum of about 4 tiers. We have about 1,000,000 total products and issues with products appearing to be duplicates. Our current URL structure for products looks like this: http://example.com/main-category/12345/product-name.htm Here is how I was planning on doing the new structure: Cat tier 1: http://example.com/category-one/ Cat tier 2: http://example.com/category-one/category-two/ Cat tier 3: http://example.com/category-one-category-two/category-three Cat tier 4: http://example.com/category-one-category-two-category-three/category-four/ Product: http://example.com/category-one-category-two-category-three/product-name-12345.htm Thoughts? Thanks! Craig
Technical SEO | TheCraig
-
Can I rely on just robots.txt
We have a test version of a client's web site on a separate server before it goes onto the live server. Some code from the test site has somehow managed to get Google to index the test site, which isn't great! Would simply adding a robots.txt file to the root of the test site blocking everything be good enough, or will I have to put the noindex and nofollow meta tags on all pages of the test site as well?
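A block-everything robots.txt for the test server, as the question describes, is just the two lines below. Bear in mind (as the answer earlier in this thread also notes) that robots.txt only discourages crawling, so pages already indexed or linked from elsewhere may still need a noindex tag:

User-agent: *
Disallow: /
# Disallows crawling of the entire site for compliant crawlers; it does not by
# itself remove URLs that are already in the index.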
Technical SEO | spiralsites
-
Confirming robots.txt code for deep directories
Just want to make sure I understand exactly what I am doing. If I place this in my robots.txt: Disallow: /root/this/that By doing this I want to make sure that I am ONLY blocking the /that/ directory and anything under it. I want to make sure that /root/this/ still stays in the index; it's just the /that/ directory I want gone. Am I correct in understanding this?
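As a sketch of how the prefix matching works (the deeper example path is hypothetical):

User-agent: *
Disallow: /root/this/that
# Prefix match: blocks /root/this/that and anything deeper, e.g.
#   /root/this/that/page.htm
# It would also block /root/this/that-other, since matching is by prefix;
# a trailing slash (Disallow: /root/this/that/) limits it to the directory.
# /root/this/ and /root/ remain crawlable.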
Technical SEO | cbielich
-
Help needed with robots.txt regarding WordPress!
Here is my robots.txt from Google Webmaster Tools. These are the pages that are being blocked, and I am not sure which of these to get rid of in order to unblock blog posts from being searched: http://ensoplastics.com/theblog/?cat=743 http://ensoplastics.com/theblog/?p=240 These category pages and blog posts are blocked, so do I delete the /? ... I am new to SEO and web development, so I am not sure why the developer of this robots.txt file would block pages and posts in WordPress. It seems to me that is the reason someone has a blog: so it can be searched and get more exposure for SEO purposes. Is there a reason I should block any pages contained in WordPress? Sitemap: http://www.ensobottles.com/blog/sitemap.xml User-agent: Googlebot Disallow: /*/trackback Disallow: /*/feed Disallow: /*/comments Disallow: /? Disallow: /*? Disallow: /page/ User-agent: * Disallow: /cgi-bin/ Disallow: /wp-admin/ Disallow: /wp-includes/ Disallow: /wp-content/plugins/ Disallow: /wp-content/themes/ Disallow: /trackback Disallow: /comments Disallow: /feed
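Laid out one directive per line, as the file itself would read (same content as quoted above, with the two run-together directives at the end separated):

Sitemap: http://www.ensobottles.com/blog/sitemap.xml

User-agent: Googlebot
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /?
Disallow: /*?
Disallow: /page/

User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /trackback
Disallow: /comments
Disallow: /feed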
Technical SEO | ENSO
-
Is it terrible to not have a robots.txt?
I was under the impression that you really should have a robots.txt file, and that not having one is pretty bad. However, HubSpot (which I'm not impressed with) does not have the capability of properly implementing one. Will this hurt the site?
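For reference, a minimal allow-everything robots.txt, the sort of placeholder file this question is weighing, is just two lines (the empty Disallow blocks nothing):

User-agent: *
Disallow: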
Technical SEO | StandUpCubicles