Dealing with thin content
-
Hi again! I've got a site where around 30% of URLs have fewer than 250 words of copy. It's a big site, though, so that's roughly 5,000 pages. It's an ecommerce site, and it isn't feasible to bulk up every page. I'm wondering whether noindexing them is a good idea, and then measuring whether this has an effect on organic search?
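For context on the mechanics being asked about: a page is usually noindexed with a meta robots tag in its head, or, more practically at the scale of 5,000 URLs, with an X-Robots-Tag response header set at the server. A minimal Apache sketch, assuming mod_headers is enabled; the URL pattern is made up for illustration and would need to match your own URL structure:

```apache
# Send a noindex header for a set of thin product URLs
# ("/product/thin-" is a hypothetical pattern - adjust to your site)
<LocationMatch "^/product/thin-">
    Header set X-Robots-Tag "noindex, follow"
</LocationMatch>
```

The header approach avoids editing 5,000 templates individually, but as the answers in this thread note, whether to noindex at all is a separate question.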
-
Thanks, guys! We're starting to add more content to each page; it looks like the only way!
-
Does your competition have more content for these products?
If so, you need to ramp it up.
Either way, noindexing them is not going to do any good.
-
Hi Blink,
What would you be hoping to gain by deindexing these pages?
-
The size of your site is important, and so is the value these pages add in bulking it up. If you noindex them, you will significantly reduce the size of your site, which can affect your ability to rank on other pages as well. Noindexing them is not best practice and will cause more harm than good.
These pages aren't hurting your site by not ranking. They might also rank for terms you aren't tracking. The pages probably have some authority and links; getting rid of that would definitely be detrimental.
-
Hi! I agree that they won't rank, but most aren't ranking now anyway. I'm more concerned that they are pulling everything else down. By noindexing them, I can at least see if that is the problem.
-
If you noindex the pages, they will never rank; search engines will drop them from their index, so for search purposes they will essentially be useless.
Are these product pages? The best way to get content onto them is through user-generated reviews; that adds copy without you spending a ton of time writing it.
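One clarification on the mechanics discussed in this thread: robots.txt and noindex do different things, and a noindex directive only works if the page remains crawlable so bots can see it. A minimal sketch of each (the path is a made-up example):

```
# robots.txt - blocks crawling (the URL can still end up indexed via links)
User-agent: *
Disallow: /thin-product-pages/

<!-- meta robots tag - allows crawling, blocks indexing -->
<meta name="robots" content="noindex, follow">
```

Combining the two is counterproductive: if robots.txt blocks the URL, the noindex tag on it is never seen.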
Related Questions
-
What is the best comments system / plugin for websites?
Hi, what is the best comments system / plugin for websites that will not harm SEO? Thanks, Roy
Intermediate & Advanced SEO | kadut0
-
Product search URLs with parameters and pagination issues - how should I deal with them?
Hello Mozzers - I am looking at a site that deals with URLs that generate parameters (sadly unavoidable in the case of this website, with the resources they have available - none for redevelopment). They handle the URLs that include parameters with robots.txt - e.g. Disallow: /red-wines/?. Beyond that, they use rel=canonical on every paginated parameter page (such as https://wine****.com/red-wines/?region=rhone&minprice=10&pIndex=2) in search results. I have never used this method on paginated "product results" pages - surely this is an incorrect use of canonical, because these parameter pages are not simply duplicates of the main /red-wines/ page? Perhaps they are using it in case the robots.txt directive isn't followed (as sometimes it isn't), to guard against the indexing of some of the parameter pages? I note that Rand Fishkin has warned against "a rel=canonical directive on paginated results pointing back to the top page in an attempt to flow link juice to that URL", because "you'll either misdirect the engines into thinking you have only a single page of results or convince them that your directives aren't worth following (as they find clearly unique content on those pages)". Yet I see this time and again on ecommerce sites, on paginated results - any idea why? Now, the way I'd deal with this is: meta robots tags on the parameter pages I don't want indexed (noindex - this is not duplicate content, so perhaps follow rather than nofollow?), plus rel="next" and rel="prev" links on paginated pages - that should be enough. Look forward to feedback, and thanks in advance, Luke
Intermediate & Advanced SEO | McTaggart
-
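For reference, the setup described in the question above would typically use a self-referencing canonical on each paginated page plus rel="next"/rel="prev", rather than canonicalising every page back to page 1 (the practice Rand's quote warns against). A sketch for a hypothetical page 2, using the URL pattern from the question with a placeholder domain:

```html
<!-- In the <head> of /red-wines/?pIndex=2 (example.com is a placeholder) -->
<link rel="canonical" href="https://www.example.com/red-wines/?pIndex=2">
<link rel="prev" href="https://www.example.com/red-wines/">
<link rel="next" href="https://www.example.com/red-wines/?pIndex=3">
```

This keeps each results page eligible for indexing as unique content while signalling the sequence, instead of telling engines the later pages are duplicates of page 1.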
How to deal with very similar (thin) content by design?
Hello all, I run a website which lists direct contact details (tel. and email) of organisations. I have hundreds of similar pages which are very thin on content (by design). Each page has a couple of lines of somewhat unique content. People find the site useful since it simply tells them which number to dial in order to speak to a real person at any given organisation. They can't easily find the information elsewhere, and I believe it satisfies search intent. Am I at risk of being flagged for duplicate / low-quality content? Should I add more text simply to add 'unique' content to each page even though it adds no value to users? That doesn't seem right either! Looking forward to hearing where you guys stand on this. Many thanks,
Intermediate & Advanced SEO | nathandh80
-
301 or 404 Question for thin content Location Pages we want to remove
Hello all, I have a hire website with many categories and individual location pages for each of the 70 depots we operate. However, being dynamic pages, we have thousands of thin content pages. We have decided to concentrate only on our best-performing locations and get rid of the rest, as it's physically impossible to write unique content for all our location pages in every category. Therefore my question is: would it cause me problems to have too many 301s for the location pages I am going to redirect (I was only going to send these back to the parent category page), or should I just 404 all those location pages and, at some point in the future when we are in a position to concentrate on these locations, redo them with new content? In terms of URL numbers, it would affect a few thousand 301s or 404s, depending on people's thoughts. Also, does anyone know what percentage of thin content on a site is acceptable? I know none is best in an ideal world, but it would be easier if we could get away with a small percentage. We have been affected by Panda, so we are trying to tidy things up as best as possible. Any advice greatly appreciated. Thanks, Peter
Intermediate & Advanced SEO | PeteC120
-
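If the 301 route is chosen for the retired location pages, a pattern-based rewrite avoids listing thousands of URLs individually. A hypothetical Apache sketch, assuming mod_rewrite is enabled and URLs of the form /category/location/; the "tool-hire" path is invented for illustration:

```apache
RewriteEngine On
# Send any retired /tool-hire/{location}/ page back to its parent category
# (the category name and URL layout are made-up examples)
RewriteRule ^tool-hire/[^/]+/?$ /tool-hire/ [R=301,L]
```

One rule per category keeps the redirect map maintainable, and locations that are later rebuilt with new content can simply be excluded from the pattern.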
Dealing with Redirects and iFrames - getting "product login" pages to rank
One of our most popular products has a very authoritative product page, which is great for marketing purposes, but not so much for current users. When current users search for "product x login" or "product x sign in", instead of getting to the login page, they see the product page; it adds a couple of clicks to their experience, which is not what we want. One of the problems is that the actual login page has barely any content, and the content that it does carry is wrapped in iframes. Due to political and security reasons, the web team is reluctant to make any changes to the page, and one of their arguments is that the login page actually ranks #1 for a few other products (at our company, the majority of logins originate from the same domain). To add to the challenge: queries that do return the login page as the #1 result (for some of our other products) actually do not reference the sign-in domain, but our old domain, which is now a 301 redirect to the sign-in domain. To make that clear: Google is displaying the origin domain in SERPs instead of the destination domain. The question is: how do we get this popular product's login page to rank higher than the product page for "login" / "sign in" queries? I'm not even sure where we should point links at this point - the actual sign-in domain or the origin domain? I have the redirect chains and domain authority for all of the pages involved, including a few of our major competitors (who follow the same login format), and will be happy to share it privately with a Moz expert. I'd prefer not to make any more information publicly available, so please reach out via private message if you think you can help.
Intermediate & Advanced SEO | leosaraceni0
-
Dealing with Penguin: Changing URL instead of removing links
I have some links pointing to categories from article directories, web directories, and a few blogs. We are talking about 20-30 links in total. They are less than 5% of the links to my site (counting unique domains). I either haven't been able to make contact with webmasters, or they are asking for money to remove the links. If I simply rename the URL (for example, changing mysite.com/t-shirt.html to mysite.com/tshirts.html), will that resolve any Penguin issues? The link will forward to the homepage since that page no longer exists. I really want to avoid using the disavow tool if possible. I appreciate the feedback. If you have actually done this, please share your experience.
Intermediate & Advanced SEO | inhouseseo0
-
All Thin Content removed and duplicate content replaced. But still no success?
Good morning, Over the last three months I have gone about replacing and removing all the duplicate content (1,000+ pages) from our site, top4office.co.uk. It has now been just under two months since we made all the changes, and we are still not showing any improvements in the SERPs. Can anyone tell me why we aren't making any progress, or spot something we are not doing correctly? Another problem is that although we have removed 3,000+ pages using the removal tool, searching site:top4office.co.uk still shows 2,800 pages indexed (before, there were 3,500). Look forward to your responses!
Intermediate & Advanced SEO | apogeecorp0
-
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here. Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees the URL www.example.com/#!page-name-here, it basically renders www.example.com/# while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense. So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you are essentially taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which should be www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (i.e. title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And I've recently seen the homepage drop like a rock for a search of our brand name, which has ranked #1 for months. Now, admittedly, we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries. So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs, because the fragment never reaches the server (it doesn't acknowledge what comes after the #). I "think" our only option here is to try to add some 301-style redirects via JavaScript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way?
If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue. Best, -G
Intermediate & Advanced SEO | Celts180
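A JavaScript redirect like the one described in the question above is usually a small snippet that rewrites #! URLs to their clean equivalents on page load. A hedged sketch, assuming the one-to-one URL mapping described in the question (test against the real URLs before deploying):

```javascript
// Map an old hashbang URL to its clean equivalent, e.g.
// https://www.example.com/#!page-name-here -> https://www.example.com/page-name-here
function hashbangToClean(href) {
  var bang = href.indexOf('#!');
  if (bang === -1) return null;                              // not a hashbang URL
  var path = href.slice(bang + 2).replace(/^\/+/, '');       // part after "#!"
  var base = href.slice(0, href.indexOf('#')).replace(/\/+$/, '');
  return base + '/' + path;
}

// In the page itself (browser only):
// var clean = hashbangToClean(window.location.href);
// if (clean) window.location.replace(clean); // replace() avoids polluting history
```

Note this only redirects users and crawlers that execute JavaScript; it is not a true server-side 301, which (as the question says) can't see the fragment at all.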