Medium-sized forum with thousands of thin-content gallery pages. Disallow or noindex?
-
I have a forum at http://www.onedirection.net/forums/ which contains a gallery with thousands of very thin-content pages. We've currently got these photo pages disallowed for the main Googlebot via robots.txt, but we do allow the Google Images crawler access.
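For reference, the rules are roughly along these lines (the /forums/gallery/ path is just illustrative, not our exact structure):

    # keep the main crawler out of the thin gallery pages
    User-agent: Googlebot
    Disallow: /forums/gallery/

    # but let the image crawler in so the photos can still be indexed
    User-agent: Googlebot-Image
    Allow: /forums/gallery/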
Now I've been reading that we shouldn't really use disallow, and instead should add a noindex tag on the page itself.
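If I've understood correctly, that would mean putting something like this in the head of each gallery page:

    <meta name="robots" content="noindex">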
It's a little awkward to edit the source of the gallery pages (and to keep any amends in place the next time the forum software gets updated).
What's the best way of handling this?
Chris.
-
Hey Chris,
I agree that your current implementation, while not ideal, is perfectly adequate for ensuring you don't have duplicate content or cannibalisation problems, while still allowing Google to index the UGC images.
You're also preventing Googlebot from seeing the user profile pages, which is a good idea, since many of them are very thin and mostly duplicate.
So, from a pure SEO perspective, I think you've done a good job.
However... I think you should also consider the ethical implications of potentially blocking the Google Images crawler as well. By preventing Google from indexing all those images of young girls fawning over the vacuous runners-up of a televised talent show, you would undoubtedly be doing the world a great service.
-
Hi Chris, I second Jarno's opinion in this regard. If it is going to be a huge overhead to add the page-level blocking, you can rely on your current robots.txt setup. There is a small catch here, though. Even if you block crawling with robots.txt, if Google finds a reference to the blocked content elsewhere on the Internet, it can still index those URLs (even though it cannot read their content). In situations like this, page-level blocking is the way forward. So to fully restrict Googlebot from indexing your content, you should ideally be using the page-level robots meta tag or the X-Robots-Tag HTTP header.
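For example, if your server happens to run Apache (just my assumption; the gallery path below is also only illustrative), you could send the header from the server configuration instead of editing the forum templates, so it survives software updates:

    # requires mod_headers; LocationMatch goes in the main server config, not .htaccess
    <LocationMatch "^/forums/gallery/">
        Header set X-Robots-Tag "noindex"
    </LocationMatch>

Do keep in mind that Google has to be able to crawl the pages to see the noindex (whether meta tag or header), so the robots.txt disallow for those URLs would need to be lifted for it to take effect.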
Here you go for more: https://support.google.com/webmasters/answer/156449?hl=en
Hope it helps.
Best,
Devanur Rafi.
-
Chris,
If the noindex meta update is too complicated for you to add due to software issues etc., then I feel that your current method is the right way to go. Normally you would be absolutely right, for the simple reason that the page-level tag overrules robots.txt. But if a software update overwrites the rules placed in your code, then you have to manually add them back after each and every update, and I'm not sure you want to do that.
regards
Jarno