How to Disallow Tag Pages With robots.txt
-
Hi, I have a site I'm dealing with that has tag pages, for instance:
http://www.domain.com/news/?tag=choice
How can I exclude these tag pages (about 20+) from being crawled and indexed by the search engines with robots.txt?
Also, they're sometimes created dynamically, so I want something that automatically excludes tag pages from being crawled and indexed.
Any suggestions?
Cheers,
Mark
-
Hi Nakul, it's Drupal.
Mark
-
What CMS is it, Mark?
-
Thanks. Is there a way to test it out before actually implementing it on the site?
The site is non-WordPress as well.
Cheers,
Mark
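One way to sanity-check a robots.txt rule before putting it live (regardless of CMS) is Python's built-in urllib.robotparser. A caveat: it implements the original exclusion standard and does not understand `*` wildcards inside a rule, so only plain prefix rules can be tested this way. The rule and domain below are assumptions based on the example URL in the question, not a recommendation for any specific site:

```python
import urllib.robotparser

# Hypothetical rules modelled on the /news/?tag= URLs in the question;
# adjust to your own site before relying on the result.
rules = """\
User-agent: *
Disallow: /news/?tag=
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Tag pages are blocked, regular pages remain crawlable.
print(rp.can_fetch("*", "http://www.domain.com/news/?tag=choice"))  # False
print(rp.can_fetch("*", "http://www.domain.com/news/"))             # True
```

For rules that do use wildcards, a search engine's own robots.txt testing tool is the safer check, since wildcard handling varies between parsers.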
-
I agree. I would suggest adding noindex to the pages and letting the bots crawl them. Blocking them right away would prevent future crawls of these pages, but I am guessing you would also want the already-indexed pages removed, and the bots have to be able to recrawl a page to see its new noindex tag.
Therefore, add the noindex first, wait a few days, and then add the disallow (although technically, if the pages are noindexed, you don't really need the disallow).
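As a rough way to verify the noindex tag is actually in place on the tag pages before adding the disallow, here is a small stdlib-only Python sketch. The `has_noindex` helper is illustrative, not from any particular library:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())


def has_noindex(html: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)


print(has_noindex('<meta name="robots" content="noindex, follow">'))  # True
```

In practice you would fetch each tag URL and run its HTML through a check like this; the same directive can also be sent as an X-Robots-Tag HTTP header, which this sketch does not cover.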
-
Hi Mark
If you're using WordPress then I would recommend the Yoast SEO plugin to resolve the tag issue. If not, then I suggest you amend the robots.txt file to resolve it.
Here is an example (note that robots.txt rules must sit under a User-agent line):
User-agent: *
Disallow: /?tag=
Disallow: /?subcats=
Disallow: /*?features_hash=
NOTE: Be very careful when blocking search engines. Test and test again!
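One thing worth knowing about rules like these: Google matches patterns from the start of the URL path, so `Disallow: /?tag=` only covers tag URLs at the site root; a pattern like `/*?tag=` (or `/news/?tag=`) is needed to cover a URL such as `/news/?tag=choice`. A minimal sketch of that matching logic, a simplification of Google's documented behaviour rather than an official implementation:

```python
import re


def rule_matches(rule: str, path: str) -> bool:
    """Simplified Google-style robots.txt matching: a rule is anchored at
    the start of the path, '*' matches any run of characters, and a
    trailing '$' anchors the match to the end of the path."""
    anchored = rule.endswith("$")
    if anchored:
        rule = rule[:-1]
    pattern = ".*".join(re.escape(part) for part in rule.split("*"))
    if anchored:
        pattern += "$"
    return re.match(pattern, path) is not None


print(rule_matches("/?tag=", "/news/?tag=choice"))   # False: anchored at the root
print(rule_matches("/*?tag=", "/news/?tag=choice"))  # True: wildcard covers /news/
```

This is why testing against your real URLs matters: a rule copied from another site can silently fail to match anything.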
Related Questions
-
Is the robots meta tag more reliable than robots.txt at preventing indexing by Google?
What's your experience of using the robots meta tag vs. robots.txt as a stand-alone solution to prevent Google from indexing? I am pretty sure the robots meta tag is more reliable - going on my own experience, I have never had any problems with robots meta tags but plenty with robots.txt as a stand-alone solution. Thanks in advance, Luke
Intermediate & Advanced SEO | McTaggart1 -
How do we decide which pages to index/de-index? Help for a 250k-page site
At Siftery (siftery.com) we have about 250k pages, most of them reflected in our sitemap. Though after submitting a sitemap we started seeing an increase in the number of pages Google indexed, in the past few weeks progress has slowed to a crawl at about 80k pages, and in fact has been declining marginally. Due to the nature of the site, a lot of the pages likely look very similar to search engines. We've also broken our sitemap down into an index, so we know that most of the indexation problems are coming from a particular type of page (company profiles). Given these facts, what do you recommend we do? Should we de-index all of the pages that are not being picked up by the Google index (and are therefore likely seen as low quality)? There seems to be a school of thought that de-indexing "thin" pages improves the ranking potential of the indexed pages. We have plans for enriching and differentiating the pages that are being picked up as thin (Moz itself flags them as 'duplicate' pages even though they're not). Thanks for sharing your thoughts and experiences!
Intermediate & Advanced SEO | ggiaco-siftery0 -
Should Schema.org Tags go on every page?
Happy Monday, Moz World! I am just wondering what some best practices are when using Schema.org tags. For example, I have a client who provides multiple services and provides unique content on each webpage. The designs of the webpages are unique and convey information differently. My question is: if each page of a company's website has unique content that describes a service or product, could I essentially change the URL & description of the Schema tag so that each of my pages is indexable in relation to that page's content? Thanks ahead of time for the great responses! B/R Will
Intermediate & Advanced SEO | MarketingChimp100 -
Date of page first indexed or age of a page?
Hi, does anyone know any ways or tools to find when a page was first indexed/cached by Google? I remember a while back, around 2009, I had a Firefox plugin which could check this and gave you an exact date. Maybe this has changed since; I don't remember the plugin. Or any recommendations on finding the age of a page (not the domain) of a website? This is for competitor research, not my own website. Cheers, Paul
Intermediate & Advanced SEO | MBASydney0 -
How to associate content on one page to another page
Hi all, I would like to associate content on "Page A" with "Page B". The content is not the same, but we want to tell Google it should be associated. Is there an easy way to do this?
Intermediate & Advanced SEO | Viewpoints1 -
I have two sitemaps which partly duplicate each other - one is blocked by robots.txt but I can't figure out why!
Hi, I've just found two sitemaps - one of them is .php and represents part of the site structure of the website. The second is a .txt file which lists every page on the website. The .txt file is blocked via the robots exclusion protocol (which doesn't appear very logical, as it's the only full sitemap). Any ideas why a developer might have done that?
Intermediate & Advanced SEO | McTaggart0 -
Parent pages
Hi guys, A website has many venue pages, for example: www.example.com/venue/paris For some reason the parent www.example.com/venue/ is 301 redirecting to a minor page elsewhere on the website. Should I remove the 301 redirect and then create a www.example.com/venue/ page that links to all the venues? My thinking is: Google will expect there to be a /venue/ 'parent' page, so if the parent page is redirecting to a minor page elsewhere within the website, it's telling Google that all the venues like Paris must be even less important. Should I do it? Any suggestions from fellow SEOmoz members would be appreciated! All the best, Richard
Intermediate & Advanced SEO | Richard5550 -
301 Redirect or Canonical Tag or Leave Them Alone? Different Pages - Similar Content
We currently have 3 different versions of our State Business-for-Sale listings pages. The versions are: **Version 1 (Preferred Version):** http://www.businessbroker.net/State/California-Businesses_For_Sale.aspx Title = California Business for Sale Ads - California Businesses for Sale & Business Brokers - Sell a Business on Business Broker Version 2: http://www.businessbroker.net/Businesses_For_Sale-State-California.aspx Title = California Business for Sale | 3124 California Businesses for Sale | BusinessBroker.net Version 3: http://www.businessbroker.net/listings/business_for_sale_california.ihtml Title = California Businesses for Sale at BusinessBroker.net - California Business for Sale While the page titles and meta data are a bit different, the bulk of the page content (the listings rendered) is identical. We were wondering if it would make good sense to either (A) 301 redirect Versions 2 and 3 to the preferred Version 1 page or (B) put canonical tags on Versions 2 and 3 labeling Version 1 as the preferred version. We have this issue for all 50 U.S. states - I've mentioned California here but the same applies for Alabama through Wyoming. Given that there are 3 different flavors and all are showing up in the search results - some on the same 1st page of results, which is probably a good thing for now - should we do a 301 redirect or a canonical tag on Versions 2 and 3? Seems like with Google cracking down on duplicate content, it might be wise to be proactive. Any thoughts or suggestions would be greatly appreciated! Thanks. Matt M
Intermediate & Advanced SEO | MWM37720