Should I robots block this directory?
-
There are about 43k pages indexed in this directory, and while they're helpful to end users, I don't see them being a great source of unique content for search engines.
Would you robots.txt-block these pages in the /blissindex/ directory, or meta noindex,nofollow them?
e.g.
http://www.careerbliss.com/blissindex/petsmart-index-980481/
http://www.careerbliss.com/blissindex/att-index-1043730/
http://www.careerbliss.com/blissindex/facebook-index-996632/
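For reference, the two options being weighed would look roughly like this - a sketch only, using the /blissindex/ path from the example URLs above:

# Option 1: robots.txt - stops crawling of the directory, though blocked URLs can still appear in the index if they are linked to from elsewhere
User-agent: *
Disallow: /blissindex/

<!-- Option 2: a per-page tag in the <head> of each /blissindex/ page; this only works if the page stays crawlable -->
<meta name="robots" content="noindex, nofollow">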
-
Totally agree with Ryan Kent. You should write a paragraph of content that is unique to each company featured. The chart alone is not unique enough, and you risk being flagged as having a high ratio of duplicate content. You should also look at all the other SEO elements on these pages, understand which keyphrases you are targeting, and modify the title, meta description and H1 tags accordingly.
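As a rough illustration of that last point, the on-page elements being referred to would look something like this (the keyphrase and copy here are hypothetical, not taken from the actual pages):

<title>PetSmart Salaries and Employee Reviews | CareerBliss</title>
<meta name="description" content="Compare PetSmart salaries, employee reviews and happiness scores by job title and location.">
<h1>PetSmart Salary and Happiness Index</h1>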
-
Should I robots block this directory?
I wouldn't.
In general, robots.txt should only be used when there is no other means available to block content. An example is when your site is built on a CMS or e-commerce platform which does not offer the flexibility to noindex individual pages.
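Where the platform does allow it, the page-level alternative is a robots meta tag or the equivalent X-Robots-Tag HTTP header; a minimal sketch:

<!-- in the <head> of each page you want kept out of the index; "follow" leaves the links crawlable -->
<meta name="robots" content="noindex, follow">

<!-- or sent as an HTTP response header, which also works for non-HTML files -->
X-Robots-Tag: noindex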
By blocking your site's content with robots.txt, you are preventing search engines not only from crawling those pages, but also from following any links on them. You are restricting the way a crawler can travel through your site, which is generally a bad idea.
Additionally, I would argue those pages do offer value. "Petco salary comparison", "Target wages" and other search queries could surface those pages in results. They contain helpful information which is otherwise not easily found on the internet. If that were my site, I would work on improving the optimization of those pages, not block them.
Related Questions
-
SSL and robots.txt question - confused by Google guidelines
I noticed "Don’t block your HTTPS site from crawling using robots.txt" here: http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html Does this mean you can't use robots.txt anywhere on the site - even parts of a site you want to noindex, for example?
Intermediate & Advanced SEO | McTaggart
-
Blocking Certain Site Parameters from Google's Index - Please Help
Hello, So we recently used Google Webmaster Tools in an attempt to block certain parameters on our site from showing up in Google's index. One of our site parameters is essentially for user location and accounts for over 500,000 URLs. This parameter does not change page content in any way, and there is no need for Google to index it. We edited the parameter in GWT to tell Google that it does not change site content and to not index it. However, after two weeks, all of these URLs are still definitely getting indexed. Why? Maybe there's something we're missing here. Perhaps there is another way to do this more effectively. Has anyone else run into this problem? The path we used to implement this action: Google Webmaster Tools > Crawl > URL Parameters. Thank you in advance for your help!
Intermediate & Advanced SEO | Jbake
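If the GWT parameter setting keeps being ignored, two alternatives are commonly suggested for parameters that don't change content - a sketch only, using a hypothetical parameter name of "location" since the real parameter name isn't given in the question:

# robots.txt: stop crawling of any URL carrying the parameter (Google honors the * wildcard)
User-agent: *
Disallow: /*?location=
Disallow: /*&location=

<!-- or: a canonical tag on each parameterized page pointing at the parameter-free URL (hypothetical URL) -->
<link rel="canonical" href="http://www.example.com/some-page/">

Note that a robots.txt block stops crawling but won't by itself remove URLs that are already indexed; the canonical route is the more common recommendation when the parameter genuinely doesn't change content.
-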
Robots.txt assistance
I want to block all the inner archive news pages of my website in robots.txt - we don't have R&D capacity to set up rel=next/prev or create a central page that all inner pages would have a canonical back to, so this is the solution. The first page, which I want indexed, reads:
http://www.xxxx.news/?p=1
All subsequent pages, which I want blocked because they don't contain any new content, read:
http://www.xxxx.news/?p=2
http://www.xxxx.news/?p=3
etc. There are currently 245 inner archived pages, and I would like to set it up so that future pages will automatically be blocked, since we are always writing new news pieces. Any advice about what code I should use for this? Thanks!
Intermediate & Advanced SEO | theLotter
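A commonly suggested pattern for this - a sketch only; Google and Bing honor Allow, the * wildcard and the $ end-anchor, but not every crawler does:

User-agent: *
Allow: /*?p=1$
Disallow: /*?p=

The trailing $ anchors the Allow rule to ?p=1 exactly, so ?p=2, ?p=3 and any future page numbers fall through to the Disallow line without further maintenance.
-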
Robots.txt File Not Appearing, but Seems to be Working?
Hi Mozzers, I am conducting a site audit for a client, and I am confused by what they are doing with their robots.txt file. It shows in GWT that there is a file and that it is blocking about 12K URLs (image attached). It also shows in GWT that the file was downloaded successfully 10 hours ago. However, when I go to the robots.txt file link, the page is blank. Would they be doing something advanced to block URLs while hiding the file from users? It appears to correctly be blocking log-ins, but I would like to know for sure that it is working correctly. Any advice on this would be most appreciated. Thanks! Jared
Intermediate & Advanced SEO | J-Banz
-
Robots.txt Blocked Most Site URLs Because of Canonical
Had a bit of a "gotcha" in Magento. We had the Yoast Canonical Links extension, which worked well, but then we installed Mageworx SEO Suite, which broke canonical links. Unfortunately it started putting www.mysite.com/catalog/product/view/id/516/ as the canonical link - and all URLs matching /catalog/product/view/* are blocked in robots.txt. So unfortunately we told Google that the correct page is also a blocked page. They haven't been removed as far as I can see, but traffic has certainly dropped. At the same time we have also had some site changes, grouping some pages and adding 301 redirects. We resubmitted the sitemap and did a Fetch as Google. Any other ideas? Any idea how long it will take to become unblocked?
Intermediate & Advanced SEO | s_EOgi_Bear
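For illustration, the problem described boils down to which URL the canonical points at - a sketch; the rewritten product URL below is hypothetical, only the domain and /catalog/product/view/ path come from the question:

<!-- what the broken extension emitted: a canonical pointing at a robots.txt-blocked URL -->
<link rel="canonical" href="http://www.mysite.com/catalog/product/view/id/516/">

<!-- what it should point at: the crawlable, rewritten product URL -->
<link rel="canonical" href="http://www.mysite.com/example-product.html">
-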
How to Disallow Tag Pages With Robots.txt
Hi, I have a site I'm dealing with that has tag pages, for instance: http://www.domain.com/news/?tag=choice How can I exclude these tag pages (about 20+) from being crawled and indexed by the search engines with robots.txt? Also, they're sometimes created dynamically, so I want something which automatically excludes tag pages from being crawled and indexed. Any suggestions? Cheers, Mark
Intermediate & Advanced SEO | monster99
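A sketch of the kind of rule usually suggested here, mirroring the example URL above (Google honors the * wildcard, so dynamically created tag pages are covered automatically):

User-agent: *
Disallow: /news/?tag=
Disallow: /*?tag=

Keep in mind robots.txt only stops crawling; tag pages that are already indexed may need a meta noindex (served on crawlable pages) before they drop out.
-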
Why specify robots instead of googlebot for a Panda affected site?
Daniweb is the poster child for sites that have recovered from Panda. I know one strategy she mentioned was de-indexing all of her tagged content, for example: http://www.daniweb.com/tags/database Why do you think more Panda-affected sites aren't specifying 'googlebot' rather than 'robots', so they can keep capturing traffic from Bing & Yahoo?
Intermediate & Advanced SEO | nicole.healthline
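For context, the difference being discussed is which crawler the noindex tag targets; a minimal sketch:

<!-- applies to all crawlers (Google, Bing, Yahoo, etc.) -->
<meta name="robots" content="noindex">

<!-- applies to Google's crawler only; other engines can keep indexing the page -->
<meta name="googlebot" content="noindex">
-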
Duplicate Content from Article Directories
I have a small client with a PR2 website, 268 links from 21 root domains, MozTrust 5.5 and MozRank 4.5. However, whenever I check the link: operator in Google, it always returns none. My client has a blog with many articles on it. However, they have also submitted their blog articles to article directories every time, plain and simple creating duplicate content. Is this the reason why their link: count is coming up as none? Is there something I can do to correct the situation?
Intermediate & Advanced SEO | danielkamen