Dealing with thin content
-
Hi again! I've got a site where around 30% of URLs have fewer than 250 words of copy. It's a big site though, so that is roughly 5,000 pages. It's an ecommerce site and not feasible to bulk up each one. I'm wondering whether noindexing them is a good idea, and then measuring whether this has any effect on organic search?
-
Thanks guys! We're starting to add more content to each page - looks like the only way!
-
Does your competition have more content for these products?
If so, you need to ramp it up.
Either way, no-indexing them is not going to do any good.
-
Hi Blink
What would you be hoping to gain by de-indexing these pages?
-
The size of your site matters, and so does the value these pages add by bulking it out. If you noindex them, you will significantly reduce the size of your site, which can affect your ability to rank other pages as well. Noindexing them is not best practice and will cause more harm than good.
These pages aren't hurting your site by not ranking, and they might rank for terms you aren't tracking. The pages probably have some authority and links; throwing that away will definitely be detrimental.
-
Hi! I agree that they won't rank, but most aren't ranking now anyway. I'm more concerned that they are pulling everything else down. By noindexing them, I can at least see if that is the problem.
-
If you noindex the pages, they will never rank. Bots can still crawl them, but they will be dropped from the index, so for search purposes they become essentially useless.
Are these product pages? The best way to get content onto them is through user-generated reviews - that lets you add copy without spending a ton of time writing it yourself.
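For anyone landing on this thread: the directive being discussed is the meta robots tag. A minimal sketch of the "noindex, follow" variant (assuming you want a page dropped from the index while still letting crawlers follow its links):

```html
<!-- Place inside the <head> of each thin page. -->
<!-- "noindex": ask search engines to drop the page from their index. -->
<!-- "follow": links on the page may still be crawled and pass equity. -->
<meta name="robots" content="noindex, follow">
```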
Related Questions
-
Product search URLs with parameters and pagination issues - how should I deal with them?
Hello Mozzers - I am looking at a site that generates URLs with parameters (sadly unavoidable for this website, given the resources they have available - none for redevelopment). They handle the parameter URLs with robots.txt, e.g. Disallow: /red-wines/? Beyond that, they use rel=canonical on every paginated parameter page (such as https://wine****.com/red-wines/?region=rhone&minprice=10&pIndex=2) in search results. I have never used this method on paginated "product results" pages - surely this is an incorrect use of canonical, because these parameter pages are not simply duplicates of the main /red-wines/ page? Perhaps they are using it in case the robots.txt directive isn't followed (as sometimes it isn't), to guard against the indexing of some of the parameter pages? I note that Rand Fishkin has warned against "a rel=canonical directive on paginated results pointing back to the top page in an attempt to flow link juice to that URL", because "you'll either misdirect the engines into thinking you have only a single page of results or convince them that your directives aren't worth following (as they find clearly unique content on those pages)." Yet I see this time and again on ecommerce sites, on paginated results - any idea why? Now, the way I'd deal with this is: meta robots tags on the parameter pages I don't want indexed (noindex - this is not duplicate content, so perhaps I should follow rather than nofollow?)
Intermediate & Advanced SEO | McTaggart
Use rel="next" and rel="prev" links on paginated pages - that should be enough. Looking forward to feedback, and thanks in advance, Luke
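For concreteness, a hedged sketch of what the second page of the /red-wines/ series might carry under the rel="next"/rel="prev" approach (example.com stands in for the redacted domain; the self-referencing canonical is one common variant, not the only option):

```html
<!-- Hypothetical <head> of /red-wines/?pIndex=2 -->
<!-- Self-referencing canonical: each paginated page canonicalises to itself,
     rather than to page 1, since its content is unique within the series. -->
<link rel="canonical" href="https://example.com/red-wines/?pIndex=2">
<!-- Sequence hints so crawlers understand the paginated series -->
<link rel="prev" href="https://example.com/red-wines/?pIndex=1">
<link rel="next" href="https://example.com/red-wines/?pIndex=3">
```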
Site not showing up in search - was hacked - huge comment spam - cannot connect Webmaster tools
Hi Moz Community. A new client approached me yesterday for help with their site, which used to rank well for its designated keywords but is now not doing well. Actually, they are not on Google at all - it's as if they were removed by Google. There is no reference to them when searching with "site:url". I investigated further and discovered the likely problem... 26,000 spam comments! All these comments have now been removed, and I cleaned up this WordPress site pretty well. However, I now want to connect it to Google Webmaster Tools. I have admin access to the WP site, but not FTP, so I tried using Yoast to connect. Google failed to verify the site. So then I used a file-uploading console to upload the Google HTML code instead. I checked that the code is there, and Google still fails to verify the site. It is as if Google is so angry with this domain that they have wiped it completely from search and refuse to have any dealings with it at all. That said, I did run their "malware" / "dangerous content" check, which did not bring back any problems. I'm leaning towards the idea that this is a "cursed" domain in Google and that my client's best course of action is to build her business around another domain instead, and then point the old domain to the new domain, hopefully without attracting any bad karma in that process (advice on that step would be appreciated). Anyone have an idea as to what is going on here?
Intermediate & Advanced SEO | AlistairC
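One avenue worth trying before writing the domain off: Google also accepts a meta-tag verification method that needs only template access, not FTP. A hedged sketch (the content token below is a placeholder - Webmaster Tools issues the real one per property):

```html
<!-- In the <head> of the homepage; the token value is hypothetical. -->
<meta name="google-site-verification" content="YOUR-TOKEN-FROM-WEBMASTER-TOOLS">
```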
How to deal with everscrolling pages?
A website keeps showing more articles when the visitor presses a "load more" button. This loads additional category pages with a page parameter (e.g. ?page=1, ?page=2, etc.), as suggested by Google to get all pages indexed. The problem is that this creates thousands of additional, near-duplicate pages with duplicate titles and headers and very unfocused content. They also show up as duplicate content in Moz. The pages are indexed by Google, but none of them is ranking. What do you guys think: add a nofollow to the load-more button, so search engines will never see them? Thanks for your input!
Intermediate & Advanced SEO | corusent
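Rather than nofollowing the button, one pattern Google has suggested for infinite scroll is to make "load more" degrade to an ordinary link to the paginated component page, so each batch has exactly one crawlable URL. A sketch, assuming a hypothetical /category/articles path with the ?page= parameter from the question:

```html
<!-- Without JavaScript (or for a crawler), the href is a normal,
     indexable component page; with JavaScript, a click handler can
     intercept it and append the next batch in place. -->
<a href="/category/articles?page=2" class="load-more">Load more</a>

<!-- In the <head> of each component page, sequence hints for crawlers: -->
<link rel="prev" href="/category/articles?page=1">
<link rel="next" href="/category/articles?page=3">
```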
Dealing with past events
Hi. We have a website which lists both upcoming and past events. Currently everything is indexed by Google, with no real issues (it usually finds the most up-to-date events), and we have deprioritised the past events in the sitemap. Do I need to go one step further and noindex past events, or just leave things as they are? They don't really hold much value, but sometimes they have a number of incoming links and social-media shares pointing to them. We want to keep the pages active for visitors; I'm just wondering about Google (there's no real link between past and future events either, so it's difficult to 'point' to a newer version of an event). We have approx 1M 'past' events and growing, so it's a big change. Also, would you keep them in the sitemap with lower priority, or just remove them? EDIT: Just seen a Matt Cutts post from 2014 which indicates that an 'unavailable_after' meta tag might be best?
Intermediate & Advanced SEO | benseb
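On the unavailable_after idea mentioned in the edit: it is a googlebot-specific meta tag that schedules de-indexing for a fixed date, which suits events that expire. A sketch for a single event page (the date below is purely an example, in the RFC 850 format the tag expects):

```html
<!-- Googlebot drops the page from search results after this timestamp;
     the page stays live for visitors and keeps its inbound links. -->
<meta name="googlebot" content="unavailable_after: 25-Jun-2015 15:00:00 GMT">
```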
"No Index, No Follow" or No Index, Follow" for URLs with Thin Content?
Greetings MOZ community: If I have a site with about 200 thin-content pages that I want Google to remove from its index, should I set them to "No Index, No Follow" or to "No Index, Follow"? My SEO firm has advised me to set them to "No Index, Follow", but on a recent MOZ help-forum post someone suggested "No Index, No Follow". The MOZ poster said that telling Google the content should not be indexed but the links should be followed was inconsistent and could get me into trouble. This makes a lot of sense. What is proper form? As background, I think I have recently been hit with a Panda 4.0 penalty for thin content. I have several hundred URLs with fewer than 50 words and want them de-indexed. My site is a commercial real-estate site, and the listings apparently have too little content. Thanks, Alan
Intermediate & Advanced SEO | Kingalan1
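For concreteness, the two directives being debated, as they would appear in each thin page's head (use only one per page):

```html
<!-- SEO firm's advice: de-index, but keep following the page's links -->
<meta name="robots" content="noindex, follow">

<!-- Forum suggestion: de-index and stop following the page's links -->
<meta name="robots" content="noindex, nofollow">

<!-- The same directives can also be sent as an HTTP response header
     instead of markup, e.g.  X-Robots-Tag: noindex, follow  -->
```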
What's the deal with significantLinks?
http://schema.org/significantLink Schema.org has a definition for "non-navigation links that are clicked on the most." Presumably this means something like the big green buttons on Moz's homepage. But does anyone know how they affect anything? In http://moz.com/blog/schemaorg-a-new-approach-to-structured-data-for-seo#comment-142936, Jeremy Nelson says "It's quite possible that significant links will pass anchor text as well if a previous link to the page was set in navigation, effectively making obsolete the first-link-counts rule, and I am interested in putting that to test." This is a pretty obscure comment, but it's one of the only results I could find on the subject. Is this BS? I can't even make out what all of it is saying. So what's the deal with significantLinks, and how can we use them for SEO?
Intermediate & Advanced SEO | NerdsOnCall
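For reference, significantLink is defined on schema.org's WebPage type and takes a URL. A minimal microdata sketch (the /pricing link and its anchor text are hypothetical):

```html
<body itemscope itemtype="http://schema.org/WebPage">
  <!-- Flags this as a significant, non-navigation link on the page -->
  <a itemprop="significantLink" href="/pricing">See plans and pricing</a>
</body>
```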
UGC Comments - Ditch them?
Site: bussongs.com I have a fairly successful site (400k visits/month, 1.2m PV/month) which features a directory of lyrics-style content. The bulk of the content is not unique and exists in many places across the web. I figured adding commenting would help add more bulk to each page and give some long-tail on-page KWs. The comments are non-AJAX and Google is crawling the information. There are 10k comments across the site. I have some automated filters to remove profanity and junk, but otherwise comments are published in real time. Because of the niche I target, the quality of spelling/grammar is very low. I don't have any pagination at the moment, so some pages are now quite long and the quantity of UGC is considerably larger than the main content of the page. Traffic went down post-Panda and has picked up slowly over time - almost back to normal levels. But I feel I could manage comments better and that they do have an impact on SEO. Do these comments have value? Should I use AJAX to stop crawling? Implement pagination to limit page length? Use plus/minus ranking to give prominence to the better comments? Looking forward to hearing your thoughts. Thanks!
Intermediate & Advanced SEO | kmander
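If you decide crawlers should see the main lyric but not the low-quality UGC, one hedged sketch is to load comments client-side after the page renders, so they never appear in the raw HTML most crawlers index (the /api/comments endpoint and its response shape below are assumptions, not a real bussongs.com API):

```html
<div id="comments"></div>
<script>
  // Fetch comments after page load; the initial crawled HTML contains
  // only the main content. "/api/comments" is a hypothetical endpoint
  // returning JSON like [{"text": "..."}].
  fetch('/api/comments?song=' + encodeURIComponent(location.pathname))
    .then(function (res) { return res.json(); })
    .then(function (comments) {
      var el = document.getElementById('comments');
      comments.forEach(function (c) {
        var p = document.createElement('p');
        p.textContent = c.text; // textContent avoids injecting UGC as HTML
        el.appendChild(p);
      });
    });
</script>
```

Note the trade-off: hiding comments this way also gives up whatever long-tail value the crawlable UGC currently provides.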
How to deal with 1 product in 1 country and 3 languages?
After reading multiple posts on dealing with multilanguage sites (also checked http://www.google.com/support/forum/p/Webmasters/thread?tid=12a5507889c20461&hl=en), I still haven't got an answer to a very specific question I have. Please allow me to give some background:
Intermediate & Advanced SEO | | TruvoDirectories
I'm working for the official Belgian Yellow Pages (part of Truvo), and as you might know, in Belgium we have to deal with 3 official languages (BE-nl, BE-fr, BE-de - the latter is out of scope for this question), and on top of that we also have a large international audience (BE-en). Furthermore, Belgium is very small, meaning that someone living in the French part of Belgium (e.g. Liège) might easily look for information in the Dutch part of Belgium (e.g. Antwerpen) without having to switch websites/languages. Since 1968 (http://info.truvo.be/en/our-company/) we have established 3 different brands; each brand is adapted to a language and has a clear language-specific connotation:
for the BE-nl market: we have the brand "gouden gids"
for the BE-fr market: we have the brand "pages d'or"
for the BE-en market: we have the brand "golden pages"
Logically, this results in 3 websites (www.goudengids.be, www.pagesdor.be, www.goldenpages.be), each serving a specific language and containing language-specific messaging and functionality, but, of course, also sharing content that is similar across all three websites regardless of language.
So we have the following links, e.g.:
http://www.goudengids.be/united-consultants-nv-antwerpen-2000/
http://www.pagesdor.be/united-consultants-nv-antwerpen-2000/
http://www.goldenpages.be/united-consultants-nv-antwerpen-2000/ When I want to stick with the separate brands for the same content, how do I make sure that Google shows the desired URL when searching in, respectively, google.be (Dutch), google.be (French), and google.be (English)? Kind Regards
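One documented mechanism for exactly this situation is rel="alternate" hreflang annotations: each of the three brand URLs declares the others as language alternates, so Google can surface the right brand for the query language. A sketch using the example listing above (the full set, self-reference included, would need to appear in the head of all three pages):

```html
<link rel="alternate" hreflang="nl-be" href="http://www.goudengids.be/united-consultants-nv-antwerpen-2000/" />
<link rel="alternate" hreflang="fr-be" href="http://www.pagesdor.be/united-consultants-nv-antwerpen-2000/" />
<link rel="alternate" hreflang="en-be" href="http://www.goldenpages.be/united-consultants-nv-antwerpen-2000/" />
```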