Using Robots.txt
-
I want to block or prevent pages from being accessed or indexed by Googlebot. Please tell me whether Googlebot will NOT access any URL that begins with my domain name, followed by a question mark, followed by any string, if I use the robots.txt below. Sample URL: http://mydomain.com/?example

User-agent: Googlebot
Disallow: /?
-
Not sure if that would work, but you can test it by changing your robots.txt and running a test in GWT > Health > Blocked URLs.
You might also be interested in controlling specific URL parameters (e.g. for /?sort=name&order=asc you can block the sort and order parameters) from within GWT (Configuration > URL Parameters).
Learn more about parameters - https://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
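For what it's worth, here is how the matching should play out under Google's documented rules (hedged; other crawlers may not support wildcards). robots.txt rules are prefix matches against the path plus query string, so Disallow: /? blocks any URL whose path begins with /? — which covers http://mydomain.com/?example but not deeper URLs like /page?sort=name. To block query strings anywhere on the site, a wildcard pattern is needed:

User-agent: Googlebot
Disallow: /*?

Googlebot treats * as matching any sequence of characters, so this second form blocks every URL containing a question mark.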
Related Questions
-
How to Target Other Countries Using TLDs?
I would like to know if it is possible (and beneficial) to target other countries using country-based TLDs. When visiting a company's website, you often get redirected to your country's version of the site. For instance, when you visit cafepress.com from Canada, you get redirected to cafepress.ca. Since both websites (cafepress.com and cafepress.ca) have the same content, how do they get away with it without duplicate content issues?
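(For context, the usual mechanism for this — hedged, since there's no confirmation this is what CafePress actually does — is the rel="alternate" hreflang annotation, which tells Google the two pages are country-targeted variants of each other rather than duplicates; the URLs below are illustrative:)

<link rel="alternate" hreflang="en-us" href="http://www.cafepress.com/" />
<link rel="alternate" hreflang="en-ca" href="http://www.cafepress.ca/" />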
Technical SEO | sbrault74 -
Is there any value in having a blank robots.txt file?
I've read an audit where the writer recommended creating and uploading a blank robots.txt file; there was no current file in place. Is there any merit in having a blank robots.txt file? What is the minimum you would include in a basic robots.txt file?
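(For reference, a minimal allow-all sketch — the Sitemap line is optional, and the URL is a placeholder:)

User-agent: *
Disallow:
Sitemap: http://www.example.com/sitemap.xml

An empty Disallow value blocks nothing; the main merit of having the file at all is avoiding 404s for /robots.txt and making your crawl policy explicit.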
Technical SEO | NicDale -
Robots.txt anomaly
Hi, I'm monitoring a site that's had a new design relaunch and a new robots.txt added. Over the period of a week (since launch), Webmaster Tools has shown a steadily increasing number of blocked URLs (now at 14). In the robots.txt file, though, there are only 12 lines with the Disallow command. Could this be occurring because a line in the command can refer to more than one page/URL? They all look like single URLs, for example:

Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes

etc., etc. And is it normal for Webmaster Tools' reporting of robots.txt-blocked URLs to steadily increase in number over time, as opposed to being identified straight away? Thanks in advance for any help/advice/clarity on why this may be happening. Cheers, Dan
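(For context, each Disallow line is a prefix rule, so a single line can indeed block many URLs — a sketch, with hypothetical plugin paths:)

Disallow: /wp-content/plugins

blocks, among others:

/wp-content/plugins/
/wp-content/plugins/akismet/akismet.css
/wp-content/plugins/some-plugin/script.js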
Technical SEO | Dan-Lawrence -
Using Rel Nofollow on Duplicate Pages
Hi there, I have a rather large site that has duplicate content on many pages due to how it's being spidered by Google. I was hoping I could set the internal links to these pages as "nofollow." My concern is that I have hundreds of other sites with backlinks to these duplicate content pages. Will it affect me negatively if I tell Google not to index the duplicated pages?
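(For reference, taking a page out of the index is normally done with a robots meta tag in that page's head rather than with nofollow on links pointing at it; a minimal sketch:)

<meta name="robots" content="noindex, follow">

The follow part lets the page keep passing the equity from those backlinks even while it is dropped from the index.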
Technical SEO | trialminecraftserverfinder -
Best use of robots.txt for "garbage" links from Joomla!
I recently started out on SEOmoz and am trying to do some cleanup according to the campaign report I received. One of my biggest gripes is the "Duplicate Page Content" item: right now I have over 200 pages flagged with duplicate page content. This is triggered because SEOmoz has picked up auto-generated links from my site. My site has a "send to friend" feature, and every time someone wants to send an article or a product to a friend via email, a pop-up appears. It seems the pop-up pages have been picked up by the SEOmoz spider; however, these pages are something I would never want indexed in Google, so I just want to get rid of them. Now to my question: I guess the best solution is to make a general rule via robots.txt so that these pages are not indexed or considered by Google at all. But how do I do this? What should my syntax be? A lot of the links look like this, but have different ID numbers according to the product being sent: http://mywebshop.dk/index.php?option=com_redshop&view=send_friend&pid=39&tmpl=component&Itemid=167 I guess I need a rule that catches the following and makes Google ignore links that contain it: view=send_friend
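(A hedged sketch of such a rule — Google honors the * wildcard in Disallow patterns, though not every crawler does:)

User-agent: *
Disallow: /*view=send_friend

Because robots.txt patterns match against the path plus query string, this blocks any URL containing view=send_friend, regardless of the pid or other parameters.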
Technical SEO | teleman -
What factors matter the most when using a 301 permanent redirect?
Hi SEOmozzers, I have a client that has a couple of duplicates, but I am debating whether I should just kill those pages or use 301 permanent redirects. I know SEOmoz provides two important factors to look at, which are PA and linking root domains. 1. Which one matters the most, or which one should I look at first to make a decision? 2. I have empty pages creating duplicate content, with a PA of 14 and 1 linking root domain. My thought is to kill the page by inserting a meta NOINDEX. If you don't agree and think I should 301 to an existing page that needs link juice, let me know. Thank you mozzers 🙂
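(For reference, if you do go the 301 route on Apache, a minimal .htaccess sketch — the paths here are placeholders, not the client's actual URLs:)

Redirect 301 /duplicate-page/ http://www.example.com/canonical-page/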
Technical SEO | Ideas-Money-Art -
Removing robots.txt on WordPress site problem
Hi, I'm a little confused: I ticked the box in WordPress to allow search engines to crawl my site (I had previously asked them not to), but Google Webmaster Tools is telling me I still have robots.txt blocking them, so I'm unable to submit the sitemap. I checked the source code and the robots instruction has gone, so I'm a little lost. Any ideas please?
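(For context, while that box is ticked WordPress typically serves a virtual robots.txt along these lines:)

User-agent: *
Disallow: /

(and an allow-all version, with an empty Disallow value, once unticked). Google caches robots.txt, so GWT can keep reporting the old blocking rules for a while after the setting changes.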
Technical SEO | Wallander -
Robots.txt
Hi there, my question relates to the robots.txt file. This statement:

/*/trackback

Would this block domain.com/trackback and domain.com/fred/trackback? Peter
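(For what it's worth, under Google's wildcard rules * matches zero or more characters, but the literal slashes in the pattern still have to be present, so:)

/*/trackback

would match /fred/trackback (slash, fred, slash, trackback) but not /trackback, which has only the one slash. A broader pattern such as Disallow: /*trackback would catch both.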
Technical SEO | PeterM22