Question about Syntax in Robots.txt
-
So if I want to block any URL that contains a particular parameter from being indexed, what is the best way to put this in the robots.txt file?
Currently I have:
Disallow: /attachment_id
where "attachment_id" is the parameter. The problem is that I still see these URLs indexed, and the rule has been in the robots.txt now for over a month. I am wondering if I should just use Disallow: attachment_id or Disallow: attachment_id= instead, but figured I would ask you guys first.
Thanks!
-
That's excellent, Chris.
Use the Remove Page function as well - it might help speed things up for you.
-Andy
-
I don't know how, but I completely forgot I could just pop those URLs into GWT and see whether they were blocked. Sure enough, Google says they are. I guess this is just a matter of waiting... Thanks much!
-
I have previously looked into both of those documents, and the issue remains that they don't exactly address how best to block parameters. I could do this through GWT, but I am just curious about the correct and preferred syntax for robots.txt as well. I guess I could look at sites like Amazon or other big sites to see what the common practices are. Thanks though!
-
The problem is that I still see these URLs indexed, and the rule has been in the robots.txt now for over a month. I am wondering if I should just use
It can take Google some time to remove pages from the index.
The best way to test whether this has worked is to hop into Webmaster Tools and use the Test Robots.txt function. If it has blocked the required pages, then you know it's just a case of waiting. You can also remove pages from within Webmaster Tools, although this isn't immediate.
-Andy
-
Hi there
Take a look at Google's resource on robots.txt, as well as Moz's. You can get all the information you need there. You can also let Google know which URLs to exclude from its crawls via Search Console.
Hope this helps! Good luck!
-
I'm not a robots.txt expert by a long shot, but I found this article, which is a little dated but explained it to me in terms I could understand.
https://sanzon.wordpress.com/2008/04/29/advanced-usage-of-robotstxt-w-querystrings/
There is also a feature in Google Webmaster Tools called URL Parameters that lets you block URLs with set parameters for all sorts of reasons, such as avoiding duplicate content. I haven't used it myself, but it may be worth looking into.
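For what it's worth, the querystring technique that article covers comes down to wildcards. A minimal sketch, assuming the parameter really appears in the query string as attachment_id= and the crawler honors the * wildcard (Googlebot does):

User-agent: *
# Block any URL where attachment_id is the first query parameter...
Disallow: /*?attachment_id=
# ...or any later one.
Disallow: /*&attachment_id=

One caveat: robots.txt blocks crawling, not indexing, so URLs that are already indexed can linger in the results for a while, which may explain still seeing them a month later.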
Related Questions
-
Portfolio Image Landing Page Question/Issue
Hello, we have a client with a very image-heavy website. They have Portfolio pages with a large number of images. We are currently working on adding more copy to the site, but wanted to confirm we are taking the right approach for the images. Under the current structure, each image has its own landing page (with no copy) and is fed into (or generated on) a Portfolio page. We know this is not ideal, as it would be best to have the images on the Portfolio page directly, or to fill out the landing pages with copy; but given the number of images, and the fact that these are only images (not 'targeted' pages), that would not really be feasible. Aside from the thin-content concern, these individual landing pages were being indexed, so hundreds of pages show up in their sitemap.xml and in GSC even though they only have a few actual pages. In the meantime, we went into each image page and placed a canonical tag back to the main Portfolio page (with the hope of adding content to that page and having it as the 'overarching' page). Would this be the right approach? We considered 'noindex, follow' tags but would want the images to be crawled; and since the images are not on the Portfolio page itself, are we canonicalizing these images to nothing? Any insight would really be appreciated. Thank you in advance.
Intermediate & Advanced SEO | Ben-R
-
How to handle a blog subdomain on the main sitemap and robots file?
Hi, I have some confusion about how our blog subdomain is handled in our sitemap. We have our main website, example.com, and our blog, blog.example.com. Should we list the blog subdomain URL in our main sitemap? In other words, is listing a subdomain allowed in the root sitemap? What does the final structure look like in terms of the sitemap and robots file? Specifically:
example.com/sitemap.xml: would I include a link to our blog subdomain (blog.example.com)?
example.com/robots.txt: would I include a link to BOTH our main sitemap and blog sitemap?
blog.example.com/sitemap.xml: would I include a link to our main website URL (even though it's not a subdomain)?
blog.example.com/robots.txt: does a subdomain need its own robots file?
I'm a technical SEO and understand the mechanics of much of on-page SEO... but for some reason I never found an answer to this specific question, and I am wondering how the pros do it. I appreciate your help with this.
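For what it's worth, here is a minimal per-host sketch of one common arrangement (illustrative only, not the single correct setup; robots.txt is always fetched per host, so the subdomain serves its own file):

# Served at https://example.com/robots.txt
User-agent: *
Disallow:
Sitemap: https://example.com/sitemap.xml

# Served at https://blog.example.com/robots.txt
User-agent: *
Disallow:
Sitemap: https://blog.example.com/sitemap.xml

The sitemap protocol generally expects a sitemap to list URLs from its own host, which is why each host gets its own sitemap here rather than the root sitemap listing blog URLs.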
Intermediate & Advanced SEO | seo.owl
-
High level rel=canonical conceptual question
Hi community. Your advice and perspective are greatly appreciated. We are doing a site replatform and I fear that serious SEO fundamentals were overlooked, and I am not getting straight answers to a simple question: how are we communicating to search engines the single URL we want indexed? Backstory: the current site has major duplicate content issues. Rel=canonical is not used, there are currently 2 versions of every category and product detail page, and both are indexed in certain instances. A 60-page audit recommends rel=canonical at least 10 times for the kinds of duplicate-URL/content situations an ecommerce site runs into. New site: we are rolling out 2 URLs AGAIN!!! URL A is an internal URL generated by the system. We have developed a fancy dynamic sitemap generator which maps to URL A and creates an SEO-optimized URL that I call URL B. URL B is then inserted into the sitemap, and the sitemap is communicated externally to Google. URL B does an internal 301 redirect back to URL A... so in essence, the URL a customer sees is not the same as what we want Google to see. I still think there is potential for duplicate indexing. What do you think? Is rel=canonical the answer? From my research on this site, past projects, and Google, I think the correct solution on each customer-facing category and PDP is this: the head section (with the optimized meta title and meta description) needs to have the rel=canonical pointing to URL B.
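A minimal sketch of what that head section might look like (the URL is hypothetical, standing in for URL B, the SEO-friendly form from the sitemap):

<head>
  <title>Optimized Meta Title</title>
  <meta name="description" content="Optimized meta description" />
  <!-- rel=canonical pointing at URL B -->
  <link rel="canonical" href="http://www.example.com/mens-shoes/running" />
</head>

One caveat worth weighing: if URL B 301-redirects back to URL A, then URL A's canonical points at a URL that redirects straight back to it, which sends search engines a mixed signal.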
What do you think? I am open to all ideas and I can provide more details if needed.
Intermediate & Advanced SEO | mm916157
-
Duplicate Content Question
We are getting ready to release an integration with another product for our app. We would like to add a landing page specifically for this integration. We would also like it to be very similar to our current home page. However, if we do this and use a lot of the same content, will this hurt our SEO due to duplicate content?
Intermediate & Advanced SEO | NathanGilmore
-
If I disallow an unfriendly URL via robots.txt, will its friendly counterpart still be indexed?
Our not-so-lovely CMS loves to render pages regardless of the URL structure, just as long as the page name itself is correct. For example, it will render the following as the same page:
example.com/123.html
example.com/dumb/123.html
example.com/really/dumb/duplicative/URL/123.html
To help combat this, we are creating mod_rewrite rules with friendly URLs, so all of the above would simply render as example.com/123. I understand robots.txt respects the wildcard (*), so I was considering adding this to our robots.txt:
Disallow: */123.html
If I move forward, will this block all of the potential permutations of the directories preceding 123.html, yet not block our friendly example.com/123? Oh, and yes, we do use the canonical tag religiously; we're just mucking with the robots.txt as an added safety net.
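Not a definitive answer, but a sketch of the same idea written with a leading slash, which matches how Google documents its wildcard support (and assumes the friendly /123 truly has no .html variant):

User-agent: *
# Matches the bare page and any nested permutation:
#   /123.html, /dumb/123.html, /really/dumb/duplicative/URL/123.html
Disallow: /123.html
Disallow: /*/123.html
# Neither rule matches the friendly /123, which lacks the .html suffix.

Running a handful of each URL style through the Webmaster Tools robots.txt tester is the quickest way to confirm the matching before relying on it.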
Intermediate & Advanced SEO | mrwestern
-
Anyone have an hour right now to cover some SEO questions
Hi folks, I need someone to Skype with me today on some SEO questions, for a multi-WordPress setup I'm in the middle of developing for franchise local sites. Will pay you $95 for the hour. Thanks, Brent. Skype me: cyberbrent (Brent H, Richmond, BC)
Intermediate & Advanced SEO | MenInKilts
-
Rel Alternate tag and canonical tag implementation question
Hello, I have a question about the correct way to implement the canonical and alternate tags for a site supporting multiple languages and markets. Here's our setup. We have 3 sites, each serving a specific region, and each available in 3 languages:
www.example.com: serves the US, default language is English
www.example.ca: serves Canada, default language is English
www.example.com.mx: serves Mexico, default language is Spanish
In addition, each site can be viewed in English, French, or Spanish by adding a language-specific sub-directory prefix (/fr, /en, /es). The implementation of the alternate tag is fairly straightforward. For the homepage on www.example.com, it would be:

<link rel="alternate" hreflang="es-MX" href="http://www.example.com.mx/index.html" />
<link rel="alternate" hreflang="fr-MX" href="http://www.example.com.mx/fr/index.html" />
<link rel="alternate" hreflang="en-MX" href="http://www.example.com.mx/en/index.html" />
<link rel="alternate" hreflang="fr-US" href="http://www.example.com/fr/index.html" />
<link rel="alternate" hreflang="es-US" href="http://www.example.com/es/index.html" />
<link rel="alternate" hreflang="fr-CA" href="http://www.example.ca/fr/index.html" />
<link rel="alternate" hreflang="en-CA" href="http://www.example.ca/index.html" />
<link rel="alternate" hreflang="es-CA" href="http://www.example.ca/es/index.html" />

My question is about the implementation of the canonical tag. Currently, each domain has its own canonical tag, as follows:

<link rel="canonical" href="http://www.example.com/index.html" />
<link rel="canonical" href="http://www.example.ca/index.html" />
<link rel="canonical" href="http://www.example.com.mx/index.html" />

I am now wondering if I should set the canonical tag for all my domains to:

<link rel="canonical" href="http://www.example.com/index.html" />

This is what seems to be suggested in this example from the Google help center: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077 What do you think?
Intermediate & Advanced SEO | Amiee
-
Old pages still crawled by search engines returning 404s. Better to put a 301 or block with robots.txt?
Hello guys, a client of ours has thousands of pages returning 404, visible in Google Webmaster Tools. These are all old pages which don't exist anymore, but Google keeps on detecting them. The pages belong to sections of the site which no longer exist; they are not linked externally and didn't provide much value even when they existed. What do you suggest we do: (a) do nothing, (b) redirect all these URLs/folders to the homepage through a 301, or (c) block these pages through the robots.txt? Are we inappropriately using part of the crawl budget set by search engines by not doing anything? Thanks
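If (b) were the route taken, a minimal Apache sketch (the /old-section/ path is hypothetical, standing in for a retired folder):

# .htaccess: 301 everything under a retired folder to the homepage
RedirectMatch 301 ^/old-section/ https://www.example.com/

Worth noting: (c) alone would not remove already-indexed URLs, since robots.txt stops the crawl but leaves existing index entries in place.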
Intermediate & Advanced SEO | H-FARM