Robots.txt file question? Never seen this command before
-
Hey Everyone!
Perhaps someone can help me. I came across this command in the robots.txt file of our Canadian corporate domain. I looked around online but can't seem to find a definitive answer (only slightly relevant results).
The line is as follows:
Disallow: /*?*
I'm guessing this might have something to do with blocking PHP query-string searches on the site? It might also have something to do with blocking subdomains, but the "?" mark puzzles me.
Any help would be greatly appreciated!
Thanks, Rob
-
I don't think this is correct.
/*?* is an attempt at using a regex-style wildcard in a robots file, which I don't think works.
Further, if it were a properly formed regex, the ? would need to be escaped as \?
* is a special character in the user-agent line meaning "all". For the Disallow line, I believe you have to use a specific directory or page.
http://www.robotstxt.org/robotstxt.html
I could be wrong, but the info on this site matches my understanding from the past, too.
-
It depends on how your site is structured.
For example if you have a page at
http://www.yourdomain.com/products.php
and this shows different things based on the parameter, like:
http://www.yourdomain.com/products.php?type=widgets
then you will want to remove this line from your robots.txt, since it blocks crawlers from those parameterized pages.
However, if the parameter(s) don't change the content on the page, you can leave the line in.
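If only certain parameters create duplicate content, a narrower set of rules is also possible. A hypothetical sketch (the parameter names here are invented for illustration):

```
User-agent: *
# Allow plain product pages to be crawled,
# but block sort/filter variants of the same content
Disallow: /*?sort=
Disallow: /*?filter=
```

Note this relies on wildcard support, which Google and Bing honor but some smaller crawlers may not.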
-
Thanks Ryan and Ryan! I'm just unfamiliar with this directive in the robots file, and I'm still getting settled into the company (5 weeks in), so I'm still learning the site's structure and architecture. With it all being new to me, and given the limitations I'm seeing on the CMS side, I was wondering if this might have been causing crawl issues for Bing and/or Yahoo. I'm trying to gauge where we might be experiencing problems with the site's crawlability.
-
It's not a bad idea to have in the robots.txt, but unless you are 100% confident that you won't block something you really want crawled, I would consider handling unwanted parameters and pages through the new Google Webmaster Tools URL parameter handling instead. That way you have more control over which ones do and don't get blocked.
-
So, for this parameter, should I keep it in the robots file?
-
It's preventing spiders from crawling pages with parameters in the URL. For example, when you search on Google you'll see a URL like this:
http://www.google.com/search?q=seo
This passes the parameter q with a value of 'seo' to the page at google.com for it to work its magic with. Blocking parameterized URLs like this is almost certainly a good thing, unless the only way to access some content on your site is via URL parameters.
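For what it's worth, the major crawlers treat * in a robots.txt rule as "match any run of characters" and a trailing $ as "end of URL". A rough sketch of how a pattern like /*?* gets matched against a URL path (this is an illustration of the matching idea, not a full robots.txt parser):

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Match a URL path against a robots.txt rule pattern,
    treating '*' as a wildcard and a trailing '$' as an end anchor."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn '*' back into a wildcard
    regex = re.escape(pattern).replace(r"\*", ".*")
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None

# 'Disallow: /*?*' blocks any path containing a query string:
print(robots_pattern_matches("/*?*", "/products.php?type=widgets"))  # True
print(robots_pattern_matches("/*?*", "/products.php"))               # False
```

So with that line in place, /products.php is still crawlable but /products.php?type=widgets is not.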
Related Questions
-
Questions on switching domain name of my site
Hey guys, I purchased a domain name on May 1st and have been running my site on it since then. At the time I added the domain name to the shopping cart, I felt it was the best domain name a human could possibly come up with. But now, after two months, I am not so sure about it. That means I am thinking of getting a new domain name and redirecting the old domain to the new one. Given that the domain is only two months old, there is not much to lose. However, my domain name does have some valuable backlinks and has built some credibility in the eyes of search engines. According to Moz Open Site Explorer, its Domain Authority is 10 and its Page Authority (homepage) is 20. It's not much, but I fear that if I switch the domain name, I will reset the clock and the new domain will have to earn its rankings from scratch. That said, after searching this forum, I came across articles like this one from Google and this 2012 article from Moz that gave me some confidence. The Moz article is from 2012, though, and there have been several changes since then. Is there an updated version of that article, or another legit article on the internet that you can point me to? As you can tell, I want to be more than sure that we don't take any hit during the migration process. Also want to mention that the site is on WordPress, in case that is relevant. Thank you for your help.
Technical SEO | deathbyseo
-
301 redirect question
Hi everyone, When doing 301 redirects for a large site, if a page has 0 inbound links, would you still redirect it or just leave it? I'm just curious about the best practice for this. Thanks in advance.
Technical SEO | TheZenAgency
-
Is it important to include image files in your sitemap?
I run an ecommerce business that has over 4,000 product pages which, as you can imagine, branch off into thousands of image files. Is it necessary to include those in my sitemap for faster indexing? Thanks for your help! -Reed
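For reference, Google supports listing images alongside the page URLs that contain them via the image sitemap extension, rather than listing image files as standalone sitemap entries. A minimal sketch (the URLs here are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.example.com/product/blue-widget</loc>
    <image:image>
      <image:loc>http://www.example.com/images/blue-widget.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```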
Technical SEO | IceIcebaby
-
Submitting a new sitemap index file. Only one file is getting read. What is the error?
Hi community, I am working to submit a new sitemap index file, under which about 5 files of 50,000 SKUs each will be uploaded. Google Webmaster Tools is accepting the index, but only the first file is getting read, and it reports that only 50k SKUs have been submitted. I have 2 warnings and need to know if they are the reason the remaining files are not getting read. Warnings:
1. Invalid XML: too many tags — Too many tags describing this tag. Please fix it and resubmit.
2. Incorrect namespace — Your Sitemap or Sitemap index file doesn't properly declare the namespace.
Here is the URL I am submitting: http://www.westmarine.com/sitemap/wm-sitemap-index.xml
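For context on those warnings: "incorrect namespace" usually means the sitemapindex root element is missing its xmlns declaration, and "too many tags" often means the index file mixes in elements that belong in a regular sitemap (such as url entries) rather than containing only sitemap/loc pairs. A minimal, valid sitemap index looks like this (the child filenames here are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap/sitemap-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap/sitemap-2.xml</loc>
  </sitemap>
</sitemapindex>
```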
Technical SEO | mm916157
-
Canonical Question
Our site has thousands of items; however, using the old "widgets" analogy, we are unsure how to implement the canonical tag, and whether we need to at all. At the moment our main product page lists all the different "widget" products on one page, but the user can visit other sub-pages that filter out the different versions of the product, i.e.:
glass widgets (20 products)
glass blue widgets (15 products)
glass red widgets (5 products)
etc.
plastic widgets (70 products)
plastic blue widgets (50 products)
plastic red widgets (20 products)
etc.
As the sub-pages repeat products from the main widgets page, we added the canonical tag on the sub-pages to refer to the main widget page. The thinking is that Google won't hit us with a penalty for duplicate content. As such, the sub-pages shouldn't rank very well, but the main page should gather any link juice from these sub-pages. Coincidentally, shortly after we added the canonical tag the Penguin update came; we lost 20%-30% of our traffic, and it's difficult not to think it was the canonical tag dropping our sub-pages from the SERPs. I'm tempted to remove the tag and return to how the site used to be, repeating products on sub-pages, not for SEO but to help visitors drill down to what they want quickly. Any comments would be welcome.
Technical SEO | Corpsemerch
-
Robots.txt question
Hello, What does the following command mean: User-agent: * Allow: / Does it mean that we are blocking all spiders? Is Allow supported in robots.txt? Thanks
Technical SEO | seoug_2005
-
Subdomain Robots.txt
I have a subdomain (a blog) whose tags and categories are being indexed when they should not be, because they are creating duplicate content. Can I block them using a robots.txt file? Can I, and do I need to, have a separate robots file for my subdomain? If so, how would I format it? Do I need to specify that it is a subdomain robots file, or will the search engines automatically pick this up? Thanks!
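For what it's worth, each hostname, including a subdomain, is expected to serve its own robots.txt at its own root (e.g. blog.example.com/robots.txt); the main domain's file doesn't apply to the subdomain, and no special "subdomain" marker is needed in the file itself. A minimal sketch for blocking tag and category archives (the paths assume a typical blog URL structure):

```
User-agent: *
Disallow: /tag/
Disallow: /category/
```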
Technical SEO | JohnECF