Google Semantic Search: Now I'm really confused
-
I'm struggling to understand why I rank for some terms and not for other closely related ones. For example:
property in Toytown but NOT properties in Toytown
property for sale in Toytown but NOT property for sale Toytown NOR properties for sale Toytown.
My gut instinct is that I don't have enough of the second phrasing as inbound link anchor text -- but didn't Penguin/Panda make all that obsolete?
-
Go do a search for "property" and "properties" on Google. You will see that the results returned are different. This is because Google treats singulars and plurals as different keywords. Someone searching for "property" has a different search intent than someone searching for "properties".
Think about the search intent of someone searching for "property" vs "properties". "Properties" to me suggests someone looking for a list of many different properties on one page, whereas "property" seems more specific.
If you think that your page is relevant to both queries, then include both keywords on the page and possibly in the title to increase your relevance for those terms. Get backlinks to that page using both keywords as anchor text.
If you don't think your page can adequately serve people searching for both keywords, then it makes sense to create two separate pages targeted at each keyword. Yes, this can be done in a spammy way, but it can also be done in a way that adds value to the user and provides a better fit for what your user is searching for.
-
"Singulars and plurals are different keywords in the eyes of Google."
See, I'm not sure about that: if I Google, say, property in Spain, some of the results include sites with "properties" (but not "property") in the title tag.
"It may also make sense to create a separate page focused on the second keyword."
But surely this is the definition of webspam? If my first page is about "property", then a second page focusing on "properties" brings absolutely nothing new to the table -- apart from attempting to game the search engines.
Just my two pennies'/cents' worth.
-
Singulars and plurals are different keywords in the eyes of Google. For example, property and properties are different (but related) keywords, and can return completely different results.
If you are ranking for one, and not the other, then you may need to improve your on-page optimization or get more backlinks. It may also make sense to create a separate page focused on the second keyword.
Penguin/Panda absolutely DID NOT make anchor text obsolete. Anchor text still matters; Penguin simply penalized sites that abused it on a large scale with spammy links. Panda had nothing to do with links, but instead focused on penalizing low-quality/thin-content sites.
-
Related Questions
-
Should I noindex 'scripted' files in our portfolio?
Hello Moz community, As a means of a portfolio, we upload PowerPoint exports which are converted into HTML5 to maintain interactivity and animations. Works pretty nicely! We link to these exported files from our product pages. (We are a presentation design company, so they're pretty relevant.) For example: https://www.bentopresentaties.nl/wp-content/portfolio/ecar/index.html However, they keep coming up in the crawl warnings, as the exported HTML file doesn't contain text (just code), so we get errors for:
- thin content
- no H1
- missing meta description
- missing canonical tag
I could manually add the last two, but the first warnings are just unsolvable. Therefore I figured we'd probably better noindex all these files. They don't appear to contain any searchable content, and even then, the content of our clients' work is not relevant for our search terms etc. They're mere examples, just in the form of HTML files. Am I missing something, or should I noindex these files? (And if so: is there a way to noindex a whole directory automatically, so I don't have to manually 'fix' all the HTML exports with a noindex tag in the future? I read that using Disallow in robots.txt wouldn't work, as we would still link to these files as portfolio examples.)
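One hedged sketch of the directory-wide approach asked about above, assuming the site sits behind nginx (the path is taken from the example URL; on Apache the equivalent would be a Header set X-Robots-Tag directive). An X-Robots-Tag response header applies noindex to every file under the directory without editing the exports themselves:

    # Send a noindex header for everything under the portfolio export
    # directory, so the converted HTML files never need per-file meta tags.
    location ^~ /wp-content/portfolio/ {
        add_header X-Robots-Tag "noindex";
    }

Unlike a robots.txt Disallow, this still lets Google crawl the linked files and actually see the noindex directive.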
-
Pages excluded from Google's index due to "different canonicalization than user"
Hi Moz community, A few weeks ago we noticed a complete collapse in traffic on some of our pages (7 out of around 150 blog posts in question). We were able to confirm that those pages disappeared for good from Google's index at the end of January '18; they were still findable via all other major search engines. Using Google's Search Console (previously Webmaster Tools) we found the unindexed URLs in the list of pages being excluded because "Google chose different canonical than user". Content-wise, the page that Google falsely determines as canonical instead has little to no similarity to the pages it thereby excludes from the index. (Attached screenshot: "False canonicalization".) About our setup: we are a SPA, delivering our pages pre-rendered, each with an (empty) rel=canonical tag in the HTTP header that's then dynamically filled with a self-referential link to the page's own URL via JavaScript. This seemed and seems to work fine for 99% of our pages but happens to fail for one of our top performing ones (which is why the hassle 😉). What we tried so far:
- going through every step of this handy guide: https://moz.com/blog/panic-stations-how-to-handle-an-important-page-disappearing-from-google-case-study --> inconclusive (healthy pages, no penalties etc.)
- manually requesting re-indexation via Search Console --> immediately brought back some pages; others shortly re-appeared in the index, then got kicked again for the aforementioned reasons
- checking other search engines --> pages are only gone from Google and can still be found via Bing, DuckDuckGo and other search engines
Questions to you:
- How does Googlebot operate with JavaScript, and does anybody know if their setup changed in that respect around the end of January?
- Could you think of any other reason for the behavior described above?
Eternally thankful for any help!
-
Why isn't Google caching our pages?
Hi everyone, We have a new content marketing site that allows anyone to publish checklists. Each checklist is being indexed by Google, but Google is not storing a cached version of any of our checklists. Here's an example:
https://www.checkli.com/checklists/ggc/a-girls-guide-to-a-weekend-in-south-beach
Missing cache:
https://webcache.googleusercontent.com/search?q=cache:DfFNPP6WBhsJ:https://www.checkli.com/checklists/ggc/a-girls-guide-to-a-weekend-in-south-beach+&cd=1&hl=en&ct=clnk&gl=us
Why is this happening? How do we fix it? Is this hurting the SEO of our website?
-
Nginx rule for redirecting a trailing '/'
We have successfully implemented run-of-the-mill 301s from old URLs to new (there were about 3,000 products). As normal, like we do on every other site etc. However, recently Search Console has started to report a number of 404s for page names with a trailing forward slash after the .html suffix. So, /old-url.html is redirecting (301) to /new-url.html. However, now for some reason /old-url.html/ has 'popped up' in the Search Console crawl report as a 404. Is there a 'global' rule you can write in nginx to redirect *.html/ to *.html (without the trailing slash), rather than manually doing them all?
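A minimal sketch of such a global rule, assuming the redirects live in an nginx server block; this is untested against the site's actual config, so treat it as a starting point rather than a drop-in fix:

    # Match any request path ending in ".html/" and 301 it to the same
    # path without the trailing slash, preserving any query string; the
    # existing old-to-new redirects then apply on the follow-up request.
    location ~ ^(.+\.html)/$ {
        return 301 $1$is_args$args;
    }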
-
Is there any SEO advantage to sharing links on Twitter using Google's URL shortener goo.gl?
Hi, is there any advantage to using goo.gl to shorten a URL for Twitter instead of other shorteners? I had a thought that goo.gl might allow Google to track click-throughs and hence judge popularity.
-
What's the best way to revive a directory that was 301'd, now that I want to remove the redirect?
Last year I 301'd one of the directories on my site, pointing everything to a different directory. Long story short, I am going to sell this product line again and would like to just remove the 301 to that original directory, but I am reading that 301s are also cached in most browsers for a long time. Has anyone successfully done this, and if you did, what was it that you had to do? Thanks, Mike
-
Is Google's reinclusion request process flawed?
We have been having a bit of a nightmare with a Google penalty (please see http://www.browsermedia.co.uk/2012/04/25/negative-seo-or-google-just-getting-it-painfully-wrong/ or http://econsultancy.com/uk/blog/10093-why-google-needs-to-be-less-kafkaesque for background information; any thoughts on why we have been penalised would be very, very welcome!) which has highlighted a slightly alarming aspect of Google's reinclusion process. As far as I can see (using Google Analytics), supporting material prepared as part of a reinclusion request is basically ignored. I have just written an open letter to the search quality team at http://www.browsermedia.co.uk/2012/06/19/dear-matt-cutts/ which gives more detail, but the short story is that the supporting evidence we prepared as part of a request was NOT viewed by anyone at Google. Has anyone monitored this before and experienced the same thing? Does anyone have any suggestions on how to navigate the treacherous waters of resolving a penalty? This no doubt sounds like a sob story for us, but I do think this is a potentially big issue and one that I would love to explore more. If anyone from the search quality team could contribute, we would love to hear your thoughts! Cheers, Joe
-
How do Google Site Search pages rank?
We have started using Google Site Search (via an XML feed from Google) to power our site search. So we have a whole load of pages we could link to in the format /search?q=keyword, and we are considering doing away with our more traditional category listing pages (e.g. /biology, not powered by GSS), which account for much of our current natural search landing pages. My question is: would Googlebot treat these search pages any differently? My fear is that it would somehow see them as duplicate search results and downgrade their links. However, since we are coding the XML from GSS into our own HTML format, it may not even be able to tell.