How do you know when to use singular vs. plural in anchor text and on-page copy?
-
I'm building out a specific section of our site and I want to make sure I target it correctly.
Is there a rule of thumb for when to use "car" vs. "cars" (as an example)?
Is there a specific way to research the right approach?
Thank you!
-
Results for singular/plural variants and synonyms are often so similar as to be nearly identical for the first page or two, which is what really matters, and which is what Gregory Baka is referring to. You'll often notice that when you search for something, synonyms and variants are bolded in the titles and descriptions in the SERPs. That's your signal that one term is being treated as synonymous with (though not "identical to") the other.
In terms of singular vs. plural, I tend to include both variations naturally within descriptions and on-page copy. External links tend to contain both versions too, unless you're buying the anchor text. Based only on common sense and experience, not any quantifiable study, I would think Google looks for natural variation. If you have two different landing pages, one targeting the singular and the other targeting the plural, that would not only waste effort, money, and link equity, but would also seem very unnatural. If I were writing an algorithm, I'd probably find a way to push such pages lower in the results unless other signals pointed to really high quality at the page and/or domain level.
ALL of this "common sense" stuff flies out the window, though, when any ambiguity of intent or results is involved. For example, with "cars" you could be talking about the animated movie, which is why you see IMDb, Disney, and Wikipedia in the results. This disambiguation factor is why Google is pushing for semantic markup of the web, and is probably why topic modeling has become increasingly important (e.g., if you want to rank better for "cars" when the user intent is to find the animated movie, use words like "Pixar" and "Lightning McQueen" in the copy).
As a rule of thumb, I go with whatever sounds better and makes more sense to the user. For example, on a category page I might write "blue widgets" in the title, but I'd use "blue widget" on a single product page. From there I go with what the data says: looking at Analytics a few months later, I pay attention to traffic and keywords as a follow-up. If the "blue widgets" category page gets 80% of its traffic from a #3 ranking for "blue widget" while it ranks #1 for "blue widgets", that tells me I should probably change the title to the singular version.
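To make that follow-up concrete, here's a minimal sketch of the check described above, assuming a query-level CSV export from your analytics or Search Console data. The file name and the "page", "query", and "clicks" column names are hypothetical; adapt them to whatever your export actually contains.

```python
import csv
from collections import defaultdict

def query_traffic_share(csv_path, page_url):
    """Sum clicks per query for one landing page and report each
    query's share of that page's total clicks.

    Assumes a CSV with (hypothetical) columns: page, query, clicks.
    """
    clicks = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["page"] == page_url:
                clicks[row["query"]] += int(row["clicks"])
    total = sum(clicks.values()) or 1  # avoid division by zero
    return {q: c / total for q, c in
            sorted(clicks.items(), key=lambda kv: kv[1], reverse=True)}

shares = query_traffic_share("queries.csv", "https://example.com/blue-widgets/")
for query, share in shares.items():
    print(f"{query}: {share:.0%}")
# If "blue widget" (singular) drives ~80% of clicks despite the lower
# ranking, retitling toward the singular is worth testing.
```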
In the end, I usually find I get the best results when I don't think too hard about it and just go with my gut when writing. I know that's not scientific or anything, but if it works, it works.
-
No research. Just memory of doing searches with and without an S for my own keywords and noticing that the results were fairly similar.
I just checked "garden" and "gardens" - many of the page 1 results are the same.
Then I checked "tool" and "tools" - very different results, because of the band "Tool".
Checking "garden tool" and "garden tools" takes it back to many similar page 1 results.
The original poster just asked for a rule of thumb, so perhaps the answer is: "It depends on the keyword. Google it and see what happens."
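If you want to put a number on "see what happens," one quick sketch: paste the top result URLs for each variant into two lists and measure the overlap. Everything below is illustrative; the URL lists are hypothetical stand-ins for whatever you actually see on page 1.

```python
def serp_overlap(results_a, results_b, top_n=10):
    """Share of top-N results two queries have in common (Jaccard index)."""
    a, b = set(results_a[:top_n]), set(results_b[:top_n])
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical top results copied from two manual searches.
garden = ["wikipedia.org/wiki/Garden", "bhg.com/gardening", "gardeners.com"]
gardens = ["wikipedia.org/wiki/Garden", "gardeners.com", "gardendesign.com"]

# A high overlap suggests Google treats the two forms as near-synonyms.
print(f"Overlap: {serp_overlap(garden, gardens):.0%}")
```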
-
I did a search for "car" vs. "cars" and saw a drastically different number of results: 3.3B vs. 1.5B, respectively.
Do you have any research to support your response? Just curious where you're getting your information from.
-
When the plural is formed by just adding an S, Google doesn't seem to differentiate between the singular and the plural. You can verify this by opening two windows, searching for the term both with and without the S, and seeing whether the results are ranked differently.
But if the plural is a whole different word, like goose and geese or mouse and mice, then you will definitely have to make a decision about which to use.
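If you're auditing a large keyword list, a rough heuristic can flag the irregular pairs that need a deliberate choice. This is a simplified sketch; the suffix rules below are deliberately incomplete and English-only.

```python
def is_regular_plural(singular, plural):
    """True if the plural follows a simple suffix rule (-s, -es, y -> ies).

    Irregular pairs (goose/geese, mouse/mice) fail the check and
    need a deliberate decision about which form to target.
    """
    s, p = singular.lower(), plural.lower()
    return (p == s + "s"
            or p == s + "es"
            or (s.endswith("y") and p == s[:-1] + "ies"))

for pair in [("car", "cars"), ("box", "boxes"),
             ("goose", "geese"), ("mouse", "mice")]:
    verdict = "regular" if is_regular_plural(*pair) else "irregular - pick one form"
    print(pair, verdict)
```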