Robots.txt Question
-
For our company website, faithology.com, we are attempting to block any URLs that contain a question mark (?) to keep Google from seeing some pages as duplicates.
Our robots.txt is as follows:
User-Agent: *
Disallow: /*?

User-agent: rogerbot
Disallow: /community/

Is the above correct? We want them not to crawl any URL with a "?" in it, but we don't want to harm ourselves in SEO. Thanks for your help!
-
You can use wildcards, in theory, but I haven't tested "?" and that could be a little risky. I'd just make sure it doesn't over-match.
Honestly, though, robots.txt isn't as reliable as I'd like. It can be good for preventing content from being indexed, but once content has already been crawled, it's not great at removing it from the index. You might be better off with a META NOINDEX tag or rel=canonical.
It depends a lot on which parameters you're trying to control, what value these pages have, whether they have links, etc. A wholesale block of everything containing "?" seems really dangerous to me.
If you want to give a few example URLs, maybe we could give you more specific advice.
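For reference, the page-level alternatives mentioned above look like this in a page's <head> (a sketch only; the canonical URL shown is a made-up example, so substitute your own clean URL):

```html
<!-- Option 1: keep the parameterized page out of the index,
     while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: consolidate duplicate parameterized URLs
     onto the clean version of the page -->
<link rel="canonical" href="https://www.faithology.com/some-page">
```

Unlike a robots.txt block, both of these require the page to actually be crawled, which is what lets Google drop or consolidate the duplicates.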
-
If I were you, I would want to be 100% sure I got it right. These tools have never let me down, and the way you have the rogerbot entry written, it may be blocked.
Why not use a free tool from a very reputable company to make your robots.txt perfect:
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
http://www.searchenginepromotionhelp.com/m/robots-text-tester/
Then, lastly, to make sure everything is perfect, I recommend one of my favorite tools. It's free for up to 500 pages, and unlimited use costs, I believe, $70 a year:
http://www.screamingfrog.co.uk/seo-spider/
It's one of the best tools on the planet.
While you're at Internet Marketing Ninjas' website, look around for their other tools; they have loads of excellent tools that are recommended here.
Sincerely,
Thomas
-
Yes, you can.
Robots.txt Wildcard Matching
Google and Microsoft's Bing allow the use of wildcards in robots.txt files.
To block access to all URLs that include a question mark (?), you could use the following entry:
User-agent: *
Disallow: /*?

You can use the $ character to specify matching the end of the URL. For instance, to block any URLs that end with .asp, you could use the following entry:
User-agent: Googlebot
Disallow: /*.asp$

More background on wildcards is available from Google and Yahoo! Search.
More: http://tools.seobook.com/robots-txt/
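If you want to sanity-check which of your URLs a pattern like /*? or /*.asp$ would block, here is a rough sketch of the Google-style matching rules described above (an illustrative approximation written for this answer, not Google's actual implementation — always confirm with Google's own robots.txt tester):

```python
import re

def robots_pattern_to_regex(pattern: str) -> re.Pattern:
    """Translate a Google-style robots.txt path pattern to a regex.

    '*' matches any sequence of characters; a trailing '$' anchors
    the match at the end of the URL path; otherwise the pattern
    matches as a prefix from the start of the path.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.compile(regex + ("$" if anchored else ""))

def is_blocked(path: str, disallow_pattern: str) -> bool:
    """Return True if the path matches the Disallow pattern."""
    return robots_pattern_to_regex(disallow_pattern).match(path) is not None

# The rule in question: block any URL containing '?'
print(is_blocked("/products?sort=price", "/*?"))   # blocked
print(is_blocked("/products/clean-url", "/*?"))    # not blocked

# The '$' example: block only URLs that end in .asp
print(is_blocked("/old/page.asp", "/*.asp$"))      # blocked
print(is_blocked("/old/page.aspx", "/*.asp$"))     # not blocked
```

Running a crawl export of your own URLs through a check like this makes it easy to spot over-matching before you deploy the rule.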
Hope I was of help,
Tom