Robots.txt Question
-
For our company website, faithology.com, we are attempting to block any URLs that contain a question mark (?) to keep Google from seeing some pages as duplicates.
Our robots.txt is as follows:
User-Agent: *
Disallow: /*?

User-agent: rogerbot
Disallow: /community/

Is the above correct? We want them not to crawl any URL with a "?" in it, but we don't want to harm ourselves in SEO. Thanks for your help!
-
You can use wild-cards, in theory, but I haven't tested "?" and that could be a little risky. I'd just make sure it doesn't over-match.
Honestly, though, robots.txt isn't as reliable as I'd like. It can be good for keeping content from being crawled in the first place, but once that content has already been crawled and indexed, it's not great for getting it removed from the index. You might be better off with a META NOINDEX tag or the rel=canonical tag.
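For example, on a parameterized URL you would add something like this to the page's <head> (example.com and the path here are just placeholders):

<meta name="robots" content="noindex, follow">
<link rel="canonical" href="http://www.example.com/your-page/">

The first tag keeps the page out of the index while still letting crawlers follow its links; the second consolidates the parameterized version into the clean URL.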
It depends a lot on what parameters you're trying to control, what value these pages have, whether they have links, etc. A wholesale block of everything with "?" seems pretty risky to me.
If you want to give a few example URLs, maybe we could give you more specific advice.
-
If I were you, I would want to be 100% sure I got it right. These tools have never let me down, and the way you have rogerbot set up, he may be blocked.
Why not use a free tool from a very reputable company to make your robots.txt perfect?
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
http://www.searchenginepromotionhelp.com/m/robots-text-tester/
Then, lastly, to make sure everything is perfect, I recommend one of my favorite tools. It's free for up to 500 pages, and the paid version, which I believe costs about $70 a year, lets you crawl as many pages as you want:
http://www.screamingfrog.co.uk/seo-spider/
It's one of the best tools on the planet.
While you're at the Internet Marketing Ninjas website, look around; they have loads of excellent tools that are recommended here.
Sincerely,
Thomas
-
Yes, you can.
Robots.txt Wildcard Matching
Google and Microsoft's Bing allow the use of wildcards in robots.txt files.
To block access to all URLs that include a question mark (?), you could use the following entry:
User-agent: *
Disallow: /*?

You can use the $ character to specify matching the end of the URL. For instance, to block any URLs that end with .asp, you could use the following entry:
User-agent: Googlebot
Disallow: /*.asp$

More background on wildcards is available from Google and Yahoo! Search.
More at http://tools.seobook.com/robots-txt/
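If you want to sanity-check which URLs a pattern like /*? would actually block before you deploy it, here is a rough Python sketch (my own approximation of Google-style wildcard matching, not an official tool, and the test paths are made up):

import re

def blocked(pattern, path):
    # "*" matches any run of characters; a trailing "$" anchors the end of the URL.
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    # Like Google, the pattern has to match from the start of the path.
    return re.match(regex, path) is not None

for path in ["/beliefs/overview", "/beliefs/overview?print=1", "/page.asp"]:
    print(path, "->", "blocked" if blocked("/*?", path) else "allowed")

With the /*? rule, only the URL containing a question mark should come back as blocked.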
hope I was of help,
Tom