Robots.txt Question
-
For our company website, faithology.com, we are attempting to block any URLs that contain a question mark (?) to keep Google from seeing some pages as duplicates.
Our robots.txt is as follows:
User-agent: *
Disallow: /*?

User-agent: rogerbot
Disallow: /community/

Is the above correct? We want them not to crawl any URL with a "?" in it, but we don't want to harm ourselves in SEO. Thanks for your help!
-
You can use wild-cards, in theory, but I haven't tested "?" and that could be a little risky. I'd just make sure it doesn't over-match.
Honestly, though, robots.txt isn't as reliable as I'd like. It can be good for keeping content from being crawled in the first place, but once that content is already in the index, it's not great for getting it removed. You might be better off with a meta noindex tag or a rel=canonical tag.
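For example, a meta noindex tag in the head of the duplicate page would look like the first line below, while a rel=canonical tag instead leaves the page crawlable but points engines at the clean version of the URL (the page path here is only a hypothetical placeholder):

<meta name="robots" content="noindex, follow">
<link rel="canonical" href="http://www.faithology.com/some-page/">

Generally you'd pick one approach per page rather than combining a noindex with a canonical to a different URL.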
It depends a lot on what parameters you're trying to control, what value these pages have, whether they have links, etc. A wholesale block of everything with "?" seems really dangerous to me.
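For instance, if the duplicate URLs are driven by a handful of known parameters, a narrower set of rules is usually safer than blocking every URL containing "?" (the parameter names below are purely hypothetical examples):

User-agent: *
Disallow: /*sort=
Disallow: /*sessionid=

Each of these patterns matches the parameter wherever it appears in the URL, while leaving other query strings crawlable.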
If you want to give a few example URLs, maybe we could give you more specific advice.
-
If I were you, I would want to be 100% sure I got it right. The tools below have never let me down, and with the way you have rogerbot set up, it may end up blocked.
Why not use a free tool from a very reputable company to make your robots.txt perfect?
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
http://www.searchenginepromotionhelp.com/m/robots-text-tester/
Then, lastly, to make sure everything is perfect, I recommend one of my favorite tools. It's free for up to 500 pages per crawl, as many times as you want; beyond that I believe it costs about $70 a year:
http://www.screamingfrog.co.uk/seo-spider/
It's one of the best tools on the planet.
While you're at the Internet Marketing Ninjas website, look at their other tools; they have loads of excellent tools that are recommended here.
Sincerely,
Thomas
-
Yes you can
Robots.txt Wildcard Matching
Google and Microsoft's Bing allow the use of wildcards in robots.txt files.
To block access to all URLs that include a question mark (?), you could use the following entry:
User-agent: *
Disallow: /*?

You can use the $ character to specify matching the end of the URL. For instance, to block any URLs that end with .asp, you could use the following entry:
User-agent: Googlebot
Disallow: /*.asp$

More background on wildcards is available from Google and Yahoo! Search.
More: http://tools.seobook.com/robots-txt/
hope I was of help,
Tom
Related Questions
-
Questions about website coupon codes
Hi guys, I have 2 questions about my coupon-code website: 1. Should I redirect people from Google to my website this way: someone is looking for coupons for Sony or LG and arrives at the Sony brand page on my website, but when he clicks on an offer I show him offers for Sony on Amazon - is that correct? 2. Footer image links: I saw many sites that put their logos in the footers of online stores to get authority - should I do that? Thank you so much.
Intermediate & Advanced SEO | pompero990
-
Question about using an abbreviation
Hello, I have an abbreviation inside my domain name. Now, for a page URL name, do you recommend using the actual word (the one whose shortened form is inside the domain name) in the page name? Or, when the abbreviation is in the domain name, is using its actual word in a page name not good? It comes down to how well Google recognizes the abbreviation as the actual word and gives it the same value as the word - do I risk anything by not using the actual word? Hope I made myself clear :) Thanks.
Intermediate & Advanced SEO | mdmoz0
-
Slug construction question
Hi there, a question about what constitutes an optimal slug. I work for a theater news site. An article we recently wrote announced the opening of the musical "Holler If You Hear Me," which features the music of Tupac Shakur. We considered a few options, including holler-if-you-hear-me-opens-on-broadway and tupac-musical-opens-on-broadway. Any suggestions? Also, if the full URL reads something like theatermania.com/broadway/news/06-2014/[slug], should we try to ensure that the term 'broadway' never appears in the slug to reduce redundancy? Keep in mind that the term 'broadway' is a pretty popular search term.
Intermediate & Advanced SEO | TheaterMania0
-
Meta NoIndex tag and Robots Disallow
Hi all, I hope you can spend some time to answer the first of a few questions 🙂 We are running a Magento site - the layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process of tackling the issue, I disallowed in robots.txt anything in the query string that was not a p (allowed this for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html and a few duplicates came up along with the original, showing "There is no information about this page because it is blocked by robots.txt". So I had also added meta noindex, follow on all these duplicates, but I guess it wasn't being read because of robots.txt. So coming to my question: did robots.txt block access to these pages? If so, were these already in the index, and after disallowing them with robots, Googlebot could not read the meta noindex? Does meta noindex, follow on pages actually help Googlebot decide to remove these pages from the index? I thought robots.txt would stop and prevent indexation? But I've read this: "Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'." I'm a bit confused about how to use these, both in preventing duplicate content in the first place and then in helping to address dupe content once it's already in the index. Thanks! B
Intermediate & Advanced SEO | bjs2010
-
Whole site blocked by robots in webmaster tools
My URL is: www.wheretobuybeauty.com.au. This new site has been re-crawled over the last 2 weeks, and in Webmaster Tools the index status shows the following: 50,000 pages indexed, 69,000 blocked by robots. The search query 'site:wheretobuybeauty.com.au' returns 55,000 pages. However, all pages in the site do appear to be blocked, and over the 2 weeks the Google search traffic declined from significant to zero (proving this is in fact the case). This is a Linux PHP site and has the following: 55,000 URLs in sitemap.xml submitted successfully to Webmaster Tools; a robots.txt file that existed but did not have any entries to allow or disallow URLs - today I have removed the robots.txt file completely; URL redirection within the Linux .htaccess file - there are many rows within this complex set of redirections, and the developer has double-checked this file and found that it is valid. I have read everything that Google and other sources have on this topic and it does not help. I have also checked Webmaster Tools crawl errors, crawl stats, and malware, and there is no problem there related to this issue. Is this a duplicate content issue? This is a price comparison site where approximately half the products have duplicate product descriptions - duplicated because they are obtained from the suppliers through an XML data file, and the suppliers use the descriptions from those files on their own sites. Help!!
Intermediate & Advanced SEO | rrogers0
-
2013 Panda Update Question
Hi everyone, I'm new here 🙂 So far I've had wonderful success SEO-wise and none of the updates (Penguin or Panda) affected any of my sites, until this one. For example, one site has 7 keywords I'm optimizing for. Out of those 7, all but 2 (and variations of the 2 - one word vs. long-tail) completely tanked. These keywords were all on pages 2/3. One of the two survivors never budged from page 2 (it's a brand keyword, so I was sooo happy to finally get it to page 2). Now when I check rankings, the other terms show up in the 200-400 spots, but NOT for the URL I was optimizing for (a category page) - instead for random products in the category. The only thing I've done differently with the 2 keywords that are still doing well was focus - we did more link-building for those, but not an extreme amount, and never over-optimized. My question is, how did 2 survive while 5 are still floating up and down? Last night I saw one go up 122 spots, and today it's down 14. I'm really struggling with this. Thank you
Intermediate & Advanced SEO | Freelancer130
-
301 Redirect question
Which is the best way to set up the 301 redirect on my main home page: http://horsebuggy.com to http://www.horsebuggy.com? Or does it make a difference? Boodreaux
Intermediate & Advanced SEO | Boodreaux0
-
Robots.txt disallow subdomain
Hi all, I have a development subdomain, which gets copied to the live domain. Because I don't want this dev domain to get crawled, I'd like to implement a robots.txt for this domain only. The problem is that I don't want this robots.txt to disallow the live domain. Is there a way to create a robots.txt for this development subdomain only? Thanks in advance!
Intermediate & Advanced SEO | Partouter0