Robots.txt Question
-
For our company website, faithology.com, we are attempting to block any URLs that contain a question mark (?) to keep Google from seeing some pages as duplicates.
Our robots.txt is as follows:
User-Agent: *
Disallow: /*?

User-agent: rogerbot
Disallow: /community/

Is the above correct? We want them not to crawl any URL with a "?" in it, but we don't want to harm our SEO in the process. Thanks for your help!
-
You can use wildcards, in theory, but I haven't tested "?" specifically and that could be a little risky. I'd just make sure the rule doesn't over-match.
Honestly, though, robots.txt isn't as reliable as I'd like. It can be good for preventing content from being indexed, but once that content has been crawled, it's not great for removing it from the index. You might be better off with META NOINDEX or the rel=canonical tag.
It depends a lot on what parameters you're trying to control, what value these pages have, whether they have links, etc. A wholesale block of everything with "?" seems really risky, IMO.
If you want to give a few example URLs, maybe we could give you more specific advice.
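For reference, the page-level alternatives mentioned above look like this in the page's head section (the canonical URL here is just a placeholder, not the asker's real page):

```
<!-- Keep this page out of the index even if it gets crawled: -->
<meta name="robots" content="noindex">

<!-- Or point parameterized duplicates at the clean version of the URL: -->
<link rel="canonical" href="https://www.faithology.com/some-page/">
```

Unlike a robots.txt block, both of these require the page to actually be crawled so the tag can be seen, which is why they work for de-indexing where robots.txt doesn't.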
-
If I were you, I would want to be 100% sure I got it right. These tools have never let me down, and with the way you have rogerbot set up, it may be blocked.
Why not use a free tool from a very reputable company to make your robots.txt perfect?
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
http://www.searchenginepromotionhelp.com/m/robots-text-tester/
Then lastly, to make sure everything is perfect, I recommend one of my favorite tools. It's free for up to 500 pages (crawl as many times as you want); the paid version costs, I believe, $70 a year:
http://www.screamingfrog.co.uk/seo-spider/
It's one of the best tools on the planet.
While you're at the Internet Marketing Ninjas website, look around; they have loads of other excellent tools that are recommended here.
Sincerely,
Thomas
-
Yes, you can.
Robots.txt Wildcard Matching
Google and Microsoft's Bing allow the use of wildcards in robots.txt files.
To block access to all URLs that include a question mark (?), you could use the following entry:
User-agent: *
Disallow: /*?

You can use the $ character to specify matching the end of the URL. For instance, to block any URLs that end with .asp, you could use the following entry:
User-agent: Googlebot
Disallow: /*.asp$

More background on wildcards is available from Google and Yahoo! Search documentation, and here:
http://tools.seobook.com/robots-txt/
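To see how Google-style wildcard matching behaves, here is a minimal Python sketch. The helper names are my own (this is not any official library), and real crawlers also handle Allow rules, rule precedence, and percent-encoding, which this ignores:

```python
import re

def rule_to_regex(pattern):
    """Convert a Google-style robots.txt pattern to a compiled regex.

    '*' matches any run of characters; a trailing '$' anchors the match
    to the end of the URL. Rules always match from the start of the path.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then restore '*' as a wildcard.
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.compile(regex + ("$" if anchored else ""))

def is_blocked(url_path, disallow_patterns):
    """True if any Disallow pattern matches the given URL path."""
    return any(rule_to_regex(p).match(url_path) for p in disallow_patterns)

# "/*?" blocks anything containing a query string:
print(is_blocked("/products?color=red", ["/*?"]))   # True
print(is_blocked("/products", ["/*?"]))             # False

# "/*.asp$" blocks only URLs that *end* with .asp:
print(is_blocked("/page.asp", ["/*.asp$"]))         # True
print(is_blocked("/page.aspx", ["/*.asp$"]))        # False
```

Note that simple testers (including Python's own urllib.robotparser) may not implement wildcard matching at all, which is another reason to verify your rules in Google's own robots.txt testing tool.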
Hope I was of help,
Tom