Google cached pages and search terms
-
Here's something I noticed. We have an A-graded page ranking #10 in Google's search results.
When I hover my mouse over our search result, Google shows a preview and highlights in red where the search keyword appears on the page.
Reviewing our page, even though the keyword is in the H1 header and the intro paragraph, Google is highlighting it halfway down the page. Any ideas why?
When I review results 1-5, Google highlights the keyword in the intro paragraph and H1 header.
Have you guys experienced anything like this? It makes me think Google could be crawling my site and concluding I don't have the keyword in the H1 or intro paragraph.
Thoughts?
-
Thanks AntkitMaheshwari and Danrawk,
I've just downloaded a Chrome extension to view text versions of the cached page. Our built-in template is spewing out multiple H1s, but that's something we need to look at separately. We also have a lot of menu links, which might be pushing the crawler to reach the main text at a later stage. But still, it skipped the H1 and first paragraph.
I'll look into this a bit more.
Danrawk, On-Page Grader gives an A and there is no weird formatting. Maybe it's a conflict issue: Google is counting the first H1 and skipping the second.
Hmm...
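As a quick sanity check on what the template is actually emitting, the H1s on a page can be counted straight from the cached (or live) HTML. A minimal sketch using only Python's standard library; the sample markup below is hypothetical, not the real page:

```python
from html.parser import HTMLParser

class H1Collector(HTMLParser):
    """Collects the text content of every <h1> on a page."""
    def __init__(self):
        super().__init__()
        self._in_h1 = False
        self.h1s = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_h1 = True
            self.h1s.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self.h1s[-1] += data

# Hypothetical template output with a duplicate H1:
html = "<body><h1>Site name</h1><nav>menu</nav><h1>Target keyword</h1></body>"
parser = H1Collector()
parser.feed(html)
print(len(parser.h1s), parser.h1s)  # more than one H1 signals a template problem
```

If the count comes back greater than one, that supports the conflict theory: a crawler picking the first H1 would never see the keyword in the second.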
-
Are you doing any other kind of unusual formatting (schema.org markup, perhaps?) on that H1 header tag? Google's robots might be jumping over it.
Run that page through the On-Page Grader in the SEOmoz tools.
-
Check the cached text version of your page in Google and compare it with those of the first 5 results. It may be that although you have the keyword in an H1 tag, during crawling Google is finding it lower in the content, which is what it highlights for your page. Compare your page with the top-ranking sites; you might simply have to fix the page's code.
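One way to approximate that comparison is to flatten the page to the text a text-only cache would show and measure how far into it the keyword first appears. A rough sketch, assuming the keyword and markup below are placeholders for the real page:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Flattens a page to visible text, roughly like a text-only cache view."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def keyword_depth(html, keyword):
    """Return the keyword's first position as a fraction of the text length."""
    extractor = TextExtractor()
    extractor.feed(html)
    text = " ".join(p.strip() for p in extractor.parts if p.strip()).lower()
    pos = text.find(keyword.lower())
    return None if pos == -1 else pos / max(len(text), 1)

# Hypothetical page: keyword in the H1, then a long menu, then body copy.
page = "<h1>blue widgets</h1>" + "<li>menu item</li>" * 50 + "<p>buy blue widgets here</p>"
print(keyword_depth(page, "blue widgets"))  # a value near 0.0 means it appears early
```

Running this on your page and on the top 5 results would make the "it appears halfway down" observation measurable rather than eyeballed.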
Related Questions
-
How do I beat my spammy competitors? They are ranking on page 1 of Google!
My competitor is ranking on page 1 of Google. He has 3.4 million backlinks and 1,400 referring domains. He has acquired these backlinks from various websites, but he does not have links from his niche. How do I beat my competition with fewer backlinks? If I follow his technique, it would take a lot of time and people to build backlinks. One of my strategies is to get .edu links; a second strategy is to have 6,000-word content and rank for really low-competition keywords related to my website (my competitor's website has 1,500-word content!). Any other strategy you can suggest?
Intermediate & Advanced SEO | calvinkj
-
Client has an inexplicable jump in crawled pages reported in Google Search Console
Recently a client of mine noticed an inexplicable jump in crawled pages being reported in Google Search Console. We researched the following culprits and found nothing: rel=canonicals are in place; there is no SSL/non-SSL duplication; we used a tool to extrapolate search query page data from Google Search Insights and found nothing unusual; no dynamic pages are being generated on the website; and all necessary landing pages are in the XML sitemap. Could this be a glitch in GSC? We are wondering what the heck is going on.
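When crawl counts jump like this, one more check worth running is to diff the URLs Googlebot actually requested (from server logs) against the XML sitemap to surface unexpected URL variants. A hedged sketch; the sitemap snippet and crawled URLs here are made up for illustration:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract the <loc> entries from an XML sitemap."""
    root = ET.fromstring(xml_text)
    return {loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")}

# Hypothetical sitemap and a set of URLs pulled from access logs:
sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""
crawled = {"https://example.com/", "https://example.com/about",
           "https://example.com/about?session=123"}  # parameterized variant

unexpected = crawled - sitemap_urls(sitemap)
print(unexpected)  # URLs Googlebot hit that the sitemap never listed
```

A non-empty set here (session IDs, faceted parameters, calendar pages) would explain a crawl spike that the sitemap and canonical checks alone won't catch.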
Intermediate & Advanced SEO | BigChad2
-
Google cache shows a 3rd party's site for the HTTP version and the correct page for HTTPS
If I search Google for my cache I get the following: cache:http://www.saucydates.com returns the cache of netball.org (an HTTPS page with a Plesk default page), while cache:https://www.saucydates.com displays the correct page. Prior to this, my HTTP cache was the Central Bank of Afghanistan. For most searches at present my index page is not returned, and when it is, it's the netball Plesk page. This is, of course, hurting my search traffic considerably. I have tried many things; here is the current list:
1. If I fetch as Google in Webmaster Tools, the HTTPS fetch and render is correct.
2. If I fetch the HTTP version, I get a redirect (which is correct, as I have a 301 HTTP-to-HTTPS redirect).
3. If I turn off HTTPS on my server and remove the redirect, the fetch and render for the HTTP version is correct.
4. The 301 redirect is controlled with the 301 Safe Redirect option in Plesk 12.x.
5. The SSL cert is valid and with COMODO.
6. I have ensured the IP address (which is shared with a few other domains that form my site's network/functions) has a default site.
7. I have placed a site on my PTR record and ensured the HTTPS version goes back to HTTP, as it doesn't need SSL.
8. I have checked my site in the Wayback Machine for 1 year, and there are no hacked redirects.
9. I have checked the netball site in the Wayback Machine for 1 year; mid last year there is an odd firewall alert page.
If you check the cache for the HTTPS version of the netball site, you get another site's default Plesk page. This happened at the same time I implemented SSL. Points 6 and 7 have been done to stop the server showing a Plesk default page, as I think this could be the issue (duplicate content). Ideas:
- Is this a 302 redirect hijack?
- Is this a Google bug?
- Is this an issue with duplicate content, since both servers can show a default Plesk page (like millions of others)?
- Could a network of 3 sites with Plesk getting mixed up be a clue?
Over to the experts at Moz, can you help? Thanks, David
Intermediate & Advanced SEO | dmcubed
-
Google Analytics: how to filter out pages with low bounce rate?
Hello here, I am trying to find out how I can filter out pages in Google Analytics according to their bounce rate. The way I am doing it now is the following:
1. I work inside the Content > Site Content > Landing Pages report.
2. Once there, I click the "advanced" link to the right of the filter field.
3. There, I define the filter to "include" "Bounce Rate" "Greater than" "0.50", which should show me which pages have a bounce rate higher than 0.50.
Instead I get the following warning on the graph: "Search constraints on metrics can not be applied to this graph". I am afraid I am using the wrong approach... any ideas are very welcome! Thank you in advance.
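If the GA interface keeps refusing the metric constraint on the graph, one workaround is to export the Landing Pages report and filter it outside GA. A minimal sketch over a made-up CSV export (the column names and threshold are illustrative, not GA's exact export format):

```python
import csv
import io

# Hypothetical CSV export from the Landing Pages report:
export = """page,sessions,bounce_rate
/home,1200,0.35
/pricing,300,0.62
/blog/post-1,150,0.81
"""

reader = csv.DictReader(io.StringIO(export))
# Keep only pages whose bounce rate exceeds the 0.50 threshold.
high_bounce = [row["page"] for row in reader if float(row["bounce_rate"]) > 0.50]
print(high_bounce)
```

The table below the graph usually still filters; this is just a way to get the same answer reproducibly when the graph warning gets in the way.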
Intermediate & Advanced SEO | fablau
-
How to make Google include our recipe pages in its main index?
We have developed a recipe search engine, www.edamam.com, and serve the content of over 500 food bloggers and major recipe websites. Our legal obligations do not allow us to show the actual recipe preparation info (i.e. the most valuable part of the content); we can only show a few images, the ingredients, and nutrition information. Most of the unique content goes to the source/blog. By submitting XML sitemaps in GWT we now have around 500K pages indexed, however only a few hundred appear in Google's main index, and we are looking for a solution to include all of them. Also good to know: it appears that all our top competitors are in exactly the same situation, so it is a challenging question. Any ideas will be highly appreciated! Thanks, Lily
Intermediate & Advanced SEO | edamam
-
I have search result pages that are completely different showing up as duplicate content.
I have numerous instances of this same issue in our Crawl Report. We have pages showing up on the report as duplicate content: product search result pages for completely different cruise products. Here's an example of 2 pages that appear as duplicates: http://www.shopforcruises.com/carnival+cruise+lines/carnival+glory/2013-09-01/2013-09-30 and http://www.shopforcruises.com/royal+caribbean+international/liberty+of+the+seas. We've used HTML5 semantic markup to properly identify our navigation as a nav element and our search widget as an aside (it has a large amount of page code associated with it). We're using different meta descriptions and different title tags, and microformatting is even done on these pages so our rich data shows up in Google search (rich snippet example: http://www.google.com/#hl=en&output=search&sclient=psy-ab&q=http:%2F%2Fwww.shopforcruises.com%2Froyal%2Bcaribbean%2Binternational%2Fliberty%2Bof%2Bthe%2Bseas&oq=http:%2F%2Fwww.shopforcruises.com%2Froyal%2Bcaribbean%2Binternational%2Fliberty%2Bof%2Bthe%2Bseas&gs_l=hp.3...1102.1102.0.1601.1.1.0.0.0.0.142.142.0j1.1.0...0.0...1c.1.7.psy-ab.gvI6vhnx8fk&pbx=1&bav=on.2,or.r_qf.&bvm=bv.44442042,d.eWU&fp=a03ba540ff93b9f5&biw=1680&bih=925). How is this distinctly different content showing up as duplicate? Is SEOmoz's site crawl flawed (or just limited), not understanding that my pages are not dupes? Copyscape does not identify these pages as dupes. Should we take these crawl results more seriously than Copyscape? What action do you suggest we take?
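A crawler that sees mostly shared template text on two pages can flag them as duplicates even when the product data differs, because the unique portion is a small fraction of the total. A rough way to check that locally is to compare the extracted text of the two pages; this sketch uses stand-in strings rather than the real pages:

```python
from difflib import SequenceMatcher

# Stand-ins for the extracted text of two search result pages: a large
# shared template (nav + search widget) plus a small unique portion each.
template = "home cruises deals contact search by line ship date " * 20
page_a = template + "carnival glory september 2013 sailings"
page_b = template + "liberty of the seas itineraries"

# autojunk=False so frequent characters aren't discarded on long strings.
ratio = SequenceMatcher(None, page_a, page_b, autojunk=False).ratio()
print(round(ratio, 2))  # a high ratio means the shared boilerplate dominates
```

If the real pages score similarly high, the crawl tool isn't exactly wrong, it's just weighing the heavy shared widget code; trimming or deferring that shared markup would lower the ratio.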
Intermediate & Advanced SEO | JMFieldMarketing
-
Help! Optimizing dynamic internal search results pages...
Hi guys, Now I have always been against this, and opted to noindex internal search results pages to stop the waste of link juice, dupe content, and crawl loops... However, I'm in a discussion with somebody who feels there may be a solution, and that the pages could actually be optimized to rank (for different keywords to the landing pages, of course). Has anybody come across such a thing before? My only solution would still be to noindex and then build static pages for the most popular searches, but that won't suffice in this case. Any recommendations would be much appreciated 🙂 Thanks, Steve 🙂
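The hybrid described above (noindex by default, hand-built static pages for popular searches) can be sketched as a simple routing decision on the search endpoint. Everything here is hypothetical, a shape rather than a recommendation; the curated queries and paths are made up:

```python
# Curated searches that have hand-built, optimized static landing pages.
CURATED = {
    "red widgets": "/shop/red-widgets",
    "blue widgets": "/shop/blue-widgets",
}

def search_page_policy(query):
    """Decide how to serve an internal search results page.

    Curated queries redirect to their static landing page; everything
    else is served with a noindex,follow robots tag to avoid index
    bloat and crawl loops while still passing link equity.
    """
    slug = query.strip().lower()
    if slug in CURATED:
        return ("redirect", CURATED[slug])
    return ("serve", '<meta name="robots" content="noindex,follow">')

print(search_page_policy("Red Widgets"))
print(search_page_policy("obscure long-tail query"))
```

The design choice is that the indexable pages are a finite, curated set rather than the open-ended query space, which keeps the dupe-content and crawl-loop concerns out of the picture.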
Intermediate & Advanced SEO | SteveOllington
-
My page has fallen off the face of the earth on Google. What happened?
I have checked all of the usual things. My page has not lost any links or authority. It is not blacklisted, and there is no other obvious sign. What's going on? This has just happened within the past 3 days.
Intermediate & Advanced SEO | Tormz