Can a page be 100% topically relevant to a search query?
-
Today's YouMoz post, Accidental SEO Tests: When On-Page Optimization Ceases to Matter, explores the theory that there is an on-page optimization saturation point, "beyond which further on-page optimization no longer improves your ability to rank" for the keywords/keyword topics you are targeting. In other words, you can optimize your page for search to the point that it is 100% topically relevant to the query and its intent.
Do you believe there exists such a thing as a page that is 100% topically relevant? What are your thoughts on an on-page optimization saturation point, beyond which further on-page optimization no longer improves your ability to rank? Let's discuss!
-
I consider a 100% match to be possible only in theory. In my humble opinion, the visitor determines the relevancy of the landing page, and it is Google's noble job to serve the visitor a page that fits their needs. But then no page can fully satisfy everybody, because the same keyword can carry different search intents.
When you achieve a high conversion rate on your page, you've probably written a very relevant page. So let the visitor truly find what they are looking for, and Google will notice...
-
Well said, Russ, especially for a "mathy" answer. I am curious, though, would this "ideal document" you describe have a specific word count?
-
Warning, mathy answer follows. This is a generic description of what is going on, not exact, but hopefully understandable.
Yes, there is some theoretical page that is 100% topically relevant: a copy of the "ideal document" produced by the topical relevancy model. This would not look like a real page, though. It would look like a jumble of words in ideal relation and distance to one another. However, most topic models are built using sampling and, more importantly, the comparative documents used to determine the confidence level that your document's relevancy is non-random are also sampled. This means that there is some MoE (Margin of Error).
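To picture what "matching the ideal document" might mean, here is a rough sketch. The cosine measure and the toy term weights are my own illustrative assumptions, not a description of any real ranking system:

```python
import math

# Hypothetical illustration: topical relevancy as similarity between a
# page's term-weight vector and the model's "ideal document" vector.

def cosine_similarity(a, b):
    """Cosine similarity between two term-weight vectors (dicts)."""
    terms = set(a) | set(b)
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in terms)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# The "ideal document": words in ideal proportion to one another.
ideal = {"coffee": 0.6, "roast": 0.3, "arabica": 0.1}

# A real page only approximates those proportions.
page = {"coffee": 0.5, "roast": 0.3, "arabica": 0.1, "menu": 0.1}

print(cosine_similarity(page, ideal))   # ~0.98: close to, but below, 1.0
print(cosine_similarity(ideal, ideal))  # 1.0: the theoretical 100% match
```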
As you and your competitors approach 100% topical relevancy, that Margin of Error likely covers the difference. If you are 99.98% relevant and they are 99.45% relevant, but the MoE is 1%, then a topical relevancy system can't conclude with certainty that you are more relevant than they are.
At this point, the search model would need to rely on metrics other than relevance, like authority, to differentiate the two pages.
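To make the margin-of-error point concrete, here is a minimal sketch, assuming invented relevancy scores, an invented 1% MoE, and an invented authority metric:

```python
# Hypothetical illustration: when two pages' relevancy scores sit inside
# the margin of error, relevancy alone can't separate them.

def rank_pages(page_a, page_b, margin_of_error=0.01):
    """Order two pages by relevancy, falling back to authority when the
    relevancy gap is smaller than the margin of error."""
    gap = abs(page_a["relevancy"] - page_b["relevancy"])
    # Inside the MoE the two scores are statistically indistinguishable,
    # so another signal (here: authority) has to break the tie.
    key = "relevancy" if gap > margin_of_error else "authority"
    return sorted([page_a, page_b], key=lambda p: p[key], reverse=True)

yours = {"name": "your-page", "relevancy": 0.9998, "authority": 35}
rival = {"name": "competitor", "relevancy": 0.9945, "authority": 52}

# The 0.0053 gap is inside the 0.01 MoE, so authority decides the order.
print([p["name"] for p in rank_pages(yours, rival)])
# -> ['competitor', 'your-page']
```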
-
With the pace at which things are changing, and with machine learning thrown into the ranking mix, I would say it's close to impossible to hold 100% topical relevancy for any good period of time.
-
100% saturation is impossible to achieve while maintaining any semblance of value. Not only because any proper page inherently has navigation, internal linkage, and myriad other elements, but because to write content about a subject in that amount of detail, one would invariably need to write about sub-topics and related topics. It's just not feasible. But, and here's the kicker, you wouldn't want 100% saturation anyway.
Rich, dynamic content incorporates that which is related to it. Strong pages link out to others and keep visitors within their media cycle, if not funneling them further down. Good content holds information that is both detailed and general to a topic. I would say the highest saturation point that still qualifies as strong SEO and content optimization is about 85-90% when taking into account all page content - and even that's pushing it, really.
-
I would agree to a point. At its heart, Google probably uses some form of numerical score for a page as it relates to a query. If a page is a perfect match, it scores 100%. I would also suggest that attaining a perfect score is a virtual impossibility.
The scoring system, however, is dynamic. The page may be perfect for a particular query only at a particular point in time.
- Google's algorithm changes daily. What's perfect today may not be perfect tomorrow.
- Semantic search must be dynamic. If Google discovers a new Proof Term or Relevant Term related to the query, and the page in question doesn't contain that term, the page is no longer perfect.
These are only a couple of examples.
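As a toy illustration of the second bullet, here is a hedged sketch; the coverage-based scoring and the example terms are invented, since the real semantic model is a black box:

```python
# Hypothetical illustration: a page that covers today's term set perfectly
# becomes imperfect the moment the engine learns a new Relevant Term.

def relevancy(page_terms, query_terms):
    """Score a page as the fraction of the query's relevant terms it covers."""
    return len(page_terms & query_terms) / len(query_terms)

page = {"coffee", "roast", "arabica", "grind"}
terms_today = {"coffee", "roast", "arabica"}
print(relevancy(page, terms_today))        # 1.0 -- "perfect" today

# Tomorrow the engine discovers a new Relevant Term the page lacks.
terms_tomorrow = terms_today | {"single-origin"}
print(relevancy(page, terms_tomorrow))     # 0.75 -- no longer perfect
```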
For practical purposes, the testing, research, and so on required to achieve a perfect score delivers diminishing returns at some point. The effort needed to push a page from 95% to 100% simply isn't worth it, especially since Google's algorithm is a secret.
Sometimes good is good enough.