Google not using meta description
-
I have seen several posts about why Google may not be using the meta description. The most common reason given is that it found other text on the page that is more relevant. While I have seen this be the case sometimes, most of the time it is not. Is there any way to alert Googlebot that it is pulling the wrong info, info that would not be good for the user?
-
Try to use the header tags properly:
Only one h1
A few h2s
etc.
-
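If you want a quick way to sanity-check heading use on a page, a rough Python sketch using the stdlib `html.parser` could look like this (the HTML below is just a made-up illustration, not anyone's real page):

```python
from html.parser import HTMLParser

class HeadingCounter(HTMLParser):
    """Counts heading tags so you can spot pages with zero or multiple h1s."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.counts[tag] = self.counts.get(tag, 0) + 1

# Hypothetical page markup: one h1, a few h2s
html = """
<h1>Kolea at Waikoloa Beach Resort</h1>
<h2>Two Bedroom Rentals</h2>
<h2>Three Bedroom Rentals</h2>
"""

parser = HeadingCounter()
parser.feed(html)
print(parser.counts)  # {'h1': 1, 'h2': 2}
```

Anything other than exactly one h1 is worth a second look.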
@zoooky Thanks for the info, I had the same problem.
-
There is no way to tell Google directly that it is using the wrong meta description. However, it is possible to influence it. Basically you have two options:
- Technical SEO: take a good look at the page layout and the quality of the HTML on the page. You may find a simple technical error, such as JavaScript hiding text or incorrect use of headings.
- User intent: review the user intent behind the context of the page. If Google sees your description as not relevant to that intent, you will be fighting an uphill battle.
-
Google rewrites meta descriptions for two main reasons.
The first is poor use of the meta description to summarize the page.
The second is to more accurately match the search query or intent.
Unfortunately, there is no option that tells Googlebot to always show a predefined meta description.
-
@pau4ner Thanks. Yeah, that is what I was figuring. The problem is it is way off. For example my description could be:
Kolea at Waikoloa Beach Resort offers two and three bedroom vacation rentals on the beach.
Google is making it:
1BR, 2BR, Houses, Condos, and on and on. It is pulling info that makes zero sense.
-
Hi, there is nothing that can be done to "force" Google to use the meta description. Just as with titles, Google can choose whether or not to honor the meta description that you wrote.
However, I have found that using the exact same keyword you want to rank that post for increases the odds of Google using your meta description. If the keyword consists of several words, make sure to use them in the same exact order in your meta description.
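To make that concrete, here is a small sketch of a check for whether the keyword's words appear in your description in the same order (the keyword and description are hypothetical examples, and "in order" here means in sequence, not necessarily adjacent):

```python
def keyword_in_description(keyword: str, description: str) -> bool:
    """True if every word of the keyword appears in the description,
    in the same order (not necessarily adjacent)."""
    words = iter(description.lower().split())
    # Each keyword word must be found after the previous one was matched
    return all(any(kw == w for w in words) for kw in keyword.lower().split())

description = ("Kolea at Waikoloa Beach Resort offers two and three "
               "bedroom vacation rentals on the beach.")
print(keyword_in_description("waikoloa vacation rentals", description))  # True
print(keyword_in_description("rentals vacation waikoloa", description))  # False
```

If the check fails, try rewording the description so the keyword phrase appears verbatim.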
I hope that helps!
-
It is always advisable to keep the meta description to 150-160 characters; anything beyond that will be truncated by Google.
Try restructuring your description so that it fits within 150-160 characters.
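A quick length check along those lines (the 150-160 window is the commonly cited rule of thumb, not an official Google limit) could be sketched as:

```python
def check_description_length(description: str, low: int = 150, high: int = 160) -> str:
    """Flag meta descriptions outside the commonly cited 150-160 character window."""
    n = len(description)
    if n > high:
        return f"too long ({n} chars): likely truncated in the SERP"
    if n < low:
        return f"short ({n} chars): room to add detail"
    return f"ok ({n} chars)"

# Hypothetical description from earlier in the thread
print(check_description_length(
    "Kolea at Waikoloa Beach Resort offers two and three bedroom "
    "vacation rentals on the beach."))  # short (90 chars): room to add detail
```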
All the best!