Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
-
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filter URLs that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards were still valid.
-
Great job. I just wanted to add this from the Google Webmaster Central blog:
http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html
and this from Google Developers:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
-
Yup, wildcard syntax is indeed still valid. However, I can only confirm that the big three (Google, Yahoo, and Bing) actively observe it; other, secondary search engines may not.
In your case, you are probably looking for syntax along the lines of:
User-agent: *
Disallow: /*.pdf$

This tells every user agent not to crawl any URL that ends in .pdf (the $ ties the pattern to the end of the URL, so a file named pdf.txt would not be blocked in this case). Keep an eye on how you write these rules: missing a trailing slash could block a directory rather than a file, and omitting the end anchor ($) could block matching strings anywhere in a path rather than just a filename.
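For the filter pages mentioned in the question, the same wildcard works against query strings. A hypothetical sketch (the parameter name filter= is an assumption; substitute whatever your filtered URLs actually use):

```
User-agent: *
# Hypothetical: block any path containing a "filter" query parameter
Disallow: /*?filter=
# Block every PDF anywhere on the site ($ anchors to the end of the URL)
Disallow: /*.pdf$
```

Because the pattern is unanchored, /*?filter= also matches URLs where other parameters come first, e.g. /shoes?color=red&filter=size.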
Also keep in mind that if you are using URL rewriting, this may affect how you need to block things. And remember that disallowing access in robots.txt does NOT guarantee search engines will never index the URL; it is up to them whether they honor the request. So if it is very important to keep a file out of search engines entirely, robots.txt may not be the way to do it.
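To sanity-check a pattern before deploying it, the Google-style matching rules (* matches any sequence of characters, a trailing $ anchors the pattern to the end of the URL) are easy to emulate. A minimal sketch in Python; note that the standard library's urllib.robotparser follows the original exclusion standard and does not evaluate these wildcard extensions, so this translates the pattern into a regex by hand:

```python
import re


def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Check whether a Google-style robots.txt Disallow pattern matches a URL path.

    '*' matches any sequence of characters; a trailing '$' anchors the
    pattern to the end of the path. Everything else matches literally.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'
    regex = re.escape(pattern).replace(r"\*", ".*")
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None


# The rule from the answer above: Disallow: /*.pdf$
print(robots_pattern_matches("/*.pdf$", "/docs/manual.pdf"))  # True
print(robots_pattern_matches("/*.pdf$", "/docs/pdf.txt"))     # False
```

This mirrors how the major engines interpret the pattern, but for anything production-critical, verify against Google Search Console's robots.txt tester rather than a home-grown matcher.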