Should I robots block this directory?
-
There are about 43k pages indexed in this directory, and while they're helpful to end users, I don't see them being a great source of unique content for search engines.
Would you robots block or meta noindex nofollow these pages in the /blissindex/ directory?
e.g.
http://www.careerbliss.com/blissindex/petsmart-index-980481/
http://www.careerbliss.com/blissindex/att-index-1043730/
http://www.careerbliss.com/blissindex/facebook-index-996632/
-
Totally agree with Ryan Kent. You should write a paragraph of content that is unique to the company featured. The chart is not unique enough, and you will get flagged as having a high ratio of duplicate content. You should also look at all the other SEO elements on the page, understand which keyphrases you are targeting, and modify the title tag, meta description, and H1 accordingly.
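To make that concrete, here is a hedged sketch of what a reworked page head might look like, using the PetSmart page from the question (the wording is a placeholder, not CareerBliss's actual markup):

<title>PetSmart Salaries & Employee Happiness Index | CareerBliss</title>
<meta name="description" content="See how PetSmart employees rate their pay, culture, and work-life balance, with salary data by job title.">
<h1>PetSmart Salary & Happiness Index</h1>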
-
Should I robots block this directory?
I wouldn't.
In general, robots.txt should only be used when there is no alternative means of blocking content available, for example when your site is built on a CMS or e-commerce platform that doesn't offer the flexibility to noindex individual pages.
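For comparison, here is what each approach looks like (the directory path is taken from the question; treat this as a sketch, not a drop-in config). A robots.txt block covers the whole directory and stops crawling entirely:

User-agent: *
Disallow: /blissindex/

The per-page alternative goes in the <head> of each individual page and keeps the page crawlable while removing it from the index:

<meta name="robots" content="noindex, follow">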
Blocking content with robots.txt prevents search engines from crawling those pages at all, which means they cannot follow any of the links on them (blocked URLs can even remain in the index as bare, contentless listings). You are restricting the ways a crawler can travel through your site, which is generally a bad idea.
Additionally, I would suggest those pages do offer value. Search queries like "Petco salary comparison" or "Target wages" could surface those pages, and they contain helpful information that is otherwise not easily found on the internet. If that were my site, I would work to improve the optimization of those pages, not block them.
Related Questions
-
Google blocks certain articles on my website ... !
Hello, I have a website with more than 350 unique articles. Most of them are crawled by Google without a problem, but I've found that certain articles are never indexed. I tried rewriting them, adding fresh images, and optimizing them, but it got me nowhere. Lately I rewrote one of those articles and ran it through Fetch and Render in Google Webmaster Tools, and I found this result. Can you tell me if there is anything I can do to fix that?
Intermediate & Advanced SEO | Evcindex0
-
Best practice for disallowing URLs with robots.txt
Hi everybody, We are currently trying to tidy up the crawl errors which appear when we crawl the site. On first viewing we were very worried, to say the least: 17,000+. But after looking closer at the report, we found the majority of these errors were being caused by bad URLs featuring:
Currency, for example: "directory/currency/switch/currency/GBP/uenc/aHR0cDovL2NlbnR1cnlzYWZldHkuY29tL3dvcmt3ZWFyP3ByaWNlPTUwLSZzdGFuZGFyZHM9NzEx/"
Color, for example: ?color=91
Price, for example: "?price=650-700"
Order, for example: ?dir=desc&order=most_popular
Page, for example: "?p=1&standards=704"
Login, for example: "customer/account/login/referer/aHR0cDovL2NlbnR1cnlzYWZldHkuY29tL2NhdGFsb2cvcHJvZHVjdC92aWV3L2lkLzQ1ODczLyNyZXZpZXctZm9ybQ,,/"
My question, as a novice at working with robots.txt: what would be the best practice for disallowing URLs like these from being crawled? Any advice would be appreciated!
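For reference, one common approach (a sketch only; these patterns assume Google-style wildcard support and should be run through a robots.txt tester before deploying) is to disallow the offending parameters and paths individually rather than blocking every query string:

User-agent: *
Disallow: /*?color=
Disallow: /*price=
Disallow: /*?dir=
Disallow: /*?p=
Disallow: /directory/currency/switch/
Disallow: /customer/account/login/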
Intermediate & Advanced SEO | centurysafety0
-
Application & understanding of robots.txt
Hello Moz World! I have been reading up on robots.txt files, and I understand the basics. I am looking for a deeper understanding of when to deploy particular tags, and when a page should be disallowed because it will affect SEO.
I have been working with a software company whose News & Events page I don't think should be indexed. It changes every week and is only relevant to potential customers who want to book a demo or attend an event, not so much to search engines. My initial thinking was to use a noindex,follow tag on that page, so the page would not be indexed but all of its links would still be crawled.
I decided to look at some of our competitors' robots.txt files: Smartbear (https://smartbear.com/robots.txt), b2wsoftware (http://www.b2wsoftware.com/robots.txt) and labtech (http://www.labtechsoftware.com/robots.txt). I am still confused about what type of tags I should use, and how to gauge which set of tags is best for certain pages.
I figure a static page is pretty much always good to index and follow, as long as it's public, and that I should always include a sitemap file. But what about a dynamic page? What about pages that are out of date? Will this help with soft 404s? This is a long one, but I appreciate all of the expert insight. Thanks ahead of time for all of the awesome responses. Best Regards, Will H.
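On the dynamic-page question specifically, the same noindex,follow idea applies. In a page's <head> it looks like:

<meta name="robots" content="noindex, follow">

And for responses where you can't edit the HTML (PDFs, feeds, generated pages), the equivalent is an X-Robots-Tag HTTP response header, e.g.:

X-Robots-Tag: noindex, follow

Both are standard directives, but how you set the header depends on your server, so treat the above as a sketch rather than a recipe.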
Intermediate & Advanced SEO | MarketingChimp10
-
Eliminate render-blocking JavaScript and CSS recommendation?
Our site's last red-flag issue is the "eliminate render-blocking JavaScript and CSS" message. I don't know how to do that, and while I could probably spend hours or days cutting, pasting, and guessing until I made progress, I'd rather not. Does anyone know of a plugin that will just do this? If not, how much would it cost to get a web developer to do it? Also, if there is no plugin (and it didn't look like there was when I looked), how long do you think this would take someone who knows what they are doing? The site is: www.kempruge.com. Thanks for any tips and/or suggestions, Ruben
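For context, fixing this by hand usually comes down to two things: deferring scripts and loading non-critical CSS asynchronously. A minimal hand-rolled sketch (the file names are placeholders for your theme's actual assets; if the site runs on a CMS, an optimization plugin typically automates the same steps):

<script src="app.js" defer></script>
<link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>

The defer attribute stops the script from blocking rendering, and the preload/onload pattern swaps the stylesheet in once it has downloaded, with the noscript line as a fallback for users without JavaScript.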
Intermediate & Advanced SEO | KempRugeLawGroup0
-
Should all pages on a site be included in either your sitemap or robots.txt?
I don't have a specific scenario here, but I'm curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If they only want 1,000 of their URLs included in the sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there any point in having pages that are included in neither, leaving it up to Google to decide?
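For reference, a sitemap is purely a list of the URLs you actively want crawled and indexed; it says nothing about the rest of the site. A minimal sketch with a hypothetical URL:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/important-page/</loc>
  </url>
</urlset>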
Intermediate & Advanced SEO | RossFruin1
-
How to optimise a local business directory?
Hey All, I've got a new client who runs a local business directory designed to promote local businesses across the UK, starting in its home town of Brighton. I'm a little stumped as to how I can start to optimise this site, as most of the pages being created are for very different businesses and focus on different subjects. I realise this is going to be a long haul but wondered if there are any tips you guys know of. My client's domain is: call-us-first.co.uk Thanks Steve
Intermediate & Advanced SEO | stevecounsell0
-
What directories are best for health or health related products?
I am trying to find out if there are any reputable directories related to health supplements and general health information.
Intermediate & Advanced SEO | DonovanHarrell0
-
Blocking Dynamic URLs with Robots.txt
Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it results in a lot of URL variations of the same page being crawled by Google. For example, a standard category page:
www.mysite.com/widgets.html
...which uses a "Price" layered-navigation sidebar to filter products by price, also produces the following URLs, which all link to the same page:
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them.
Question: Is this a wise thing to do? Or does Google take layered-navigation links into account by default, so I don't need to worry?
To implement this, I was going to put the following in robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL containing a "?" or "=" from being crawled. Is there a better way to do this, or is this a good solution? Thank you!
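As a side note on the "better way" question: many e-commerce sites handle faceted navigation with rel=canonical instead of (or alongside) robots.txt, so that the filtered variants consolidate their signals to the base category page. Using the question's own example URL, each filtered page's <head> would carry:

<link rel="canonical" href="http://www.mysite.com/widgets.html" />

This is a sketch of the general technique, not platform-specific advice.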
Intermediate & Advanced SEO | AndrewY1