Robots.txt file question? Never seen this directive before
-
Hey Everyone!
Perhaps someone can help me. I came across this directive in the robots.txt file of our Canadian corporate domain. I looked around online but can't seem to find a definitive answer (only slightly relevant results).
The line is as follows:
Disallow: /*?*
I'm guessing this might have something to do with blocking PHP query-string searches on the site. It might also have something to do with blocking subdomains, but the "?" puzzles me.
Any help would be greatly appreciated!
Thanks, Rob
-
I don't think this is correct.
The ? looks like an attempt at using a regex in the robots.txt file, which I don't think works.
Further, if it were a properly formed regex, the ? would need to be escaped as \?.
* is a special character in the User-agent line meaning all user agents. For the Disallow line, I believe you have to use a specific directory or page.
http://www.robotstxt.org/robotstxt.html
I could be wrong, but the info on this site matches my understanding from the past too.
-
It depends on how your site is structured.
For example if you have a page at
http://www.yourdomain.com/products.php
and this shows different things based on the parameter, like:
http://www.yourdomain.com/products.php?type=widgets
then you will want to get rid of this line in your robots.txt.
However, if the parameters don't change the content on the page, you can leave it in.
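One quick way to audit which of a site's URLs a rule like `Disallow: /*?*` would catch is to check for a query string. A minimal sketch using Python's standard library, reusing the hypothetical example URLs from above:

```python
from urllib.parse import urlsplit

# Hypothetical example URLs from the post above
urls = [
    "http://www.yourdomain.com/products.php",
    "http://www.yourdomain.com/products.php?type=widgets",
]

for url in urls:
    # Disallow: /*?* hits any URL containing a "?", i.e. any query string
    if urlsplit(url).query:
        print(url, "-> blocked")
    else:
        print(url, "-> crawlable")
```

Running this over an exported URL list from your CMS would show exactly which pages the rule keeps crawlers away from.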
-
Thanks Ryan and Ryan! I'm just unfamiliar with this directive in the robots file, and I'm still getting settled into the company (5 weeks), so I am still learning the site's structure and architecture. With it all being new to me, and the limitations I am seeing from the CMS side, I was wondering if this might have been causing crawl issues for Bing and/or Yahoo. I'm trying to gauge where we might be experiencing problems with the site's crawl functions.
-
It's not a bad idea in the robots.txt, but unless you are 100% confident that you won't block something that you really want crawled, I would consider handling unwanted parameters and pages through the new Google Webmaster URL handling toolset. That way you have more control over which ones do and don't get blocked.
-
So, for this parameter, should I keep it in the robots file?
-
It's preventing spiders from crawling pages with parameters in the URL. For example, when you search on Google you'll see a URL like so:
http://www.google.com/search?q=seo
This passes the parameter of q with a value of 'seo' to the page at google.com for it to work its magic with. This is almost definitely a good thing, unless the only way to access some content on your site is via URL parameters.
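For the curious, Google and Bing treat `*` in a robots.txt path as "match any run of characters" (a nonstandard extension, not a full regex); `?` is matched literally. A minimal sketch of that matching logic — the function names here are my own for illustration, not part of any library:

```python
import re

def rule_to_regex(rule):
    """Translate a Google-style robots.txt path rule to a regex.

    '*' matches any run of characters; '$' anchors the end of the URL.
    Everything else, including '?', is matched literally.
    """
    parts = []
    for ch in rule:
        if ch == "*":
            parts.append(".*")
        elif ch == "$":
            parts.append("$")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts))

def is_blocked(path_and_query, rule):
    # Rules match from the start of the path
    return rule_to_regex(rule).match(path_and_query) is not None

print(is_blocked("/search?q=seo", "/*?*"))   # True: has a query string
print(is_blocked("/products.php", "/*?*"))   # False: no "?" anywhere
```

So `Disallow: /*?*` reads as "any path, then a literal ?, then anything", which is why it blocks every parameterized URL on the site.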