Google cached pages and search terms
-
Here's something I noticed. We have a rank A page and it's sitting at position 10 in Google's search results.
When I hover my mouse over our search result, Google gives us a preview, and it also highlights in red where the search keyword is present on the page.
Even though we have the keyword in the h1 header and the intro paragraph, Google is highlighting it halfway down the page. Any ideas why?
When I review results 1-5, Google highlights the keyword in the intro paragraph and h1 header.
Have you experienced anything like this? It makes me think Google could be crawling my site and concluding I haven't got the keyword in the h1 or intro paragraph.
Thoughts?
-
Thanks AntkitMaheshwari and Danrawk,
I've just downloaded a Chrome extension to view text versions of the cached page. Our built-in template is spewing out multiple h1s, but that's something we need to look at separately. We also have a lot of menu links, which might be pushing the crawler to review the main text at a later stage... But still, it skipped the h1 and first paragraph.
I'll look into this a bit more.
Danrawk, the on-page grader gives an A and there is no weird formatting. Maybe it's a conflict issue: it's counting the first h1 and skipping the second.
Hmm...
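A quick way to double-check what the template is emitting: a minimal sketch (assuming Python with the third-party requests and beautifulsoup4 packages; the URL is a placeholder) that lists every h1 in the raw HTML the crawler receives:

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/our-ranking-page"  # placeholder URL

html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# List every h1 in the raw HTML; more than one suggests the
# template is emitting duplicates.
for i, h1 in enumerate(soup.find_all("h1"), start=1):
    print(f"h1 #{i}: {h1.get_text(strip=True)}")
```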
-
Are you doing any other kind of weird formatting (schema.org markup, perhaps?) to that h1 header tag? Google's robots might be jumping over it.
Run that page through the on-page grade evaluation in the SEOmoz tools.
-
Check the cached text version of your page in Google and compare it with those of the first 5 results. It might be that although you have the keyword in the H1 tag, during the crawl process Google is finding it below the content it is highlighting for your page. Compare it with the top-ranking sites; you might simply have to fix the code of the page.
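To make that comparison concrete, here is a rough sketch (Python with requests and beautifulsoup4; the keyword and URLs are placeholders) that reports how far into each page's extracted text, in source order, the keyword first appears:

```python
import requests
from bs4 import BeautifulSoup

keyword = "example keyword"  # placeholder
urls = [
    "https://www.example.com/our-page",     # our result at position 10
    "https://competitor-one.example/page",  # placeholder top-5 results
    "https://competitor-two.example/page",
]

for url in urls:
    html = requests.get(url, timeout=10).text
    # Text in source order, roughly what a text-only cached view shows.
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    pos = text.lower().find(keyword.lower())
    print(f"{url}: first occurrence at character {pos}")
```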
Related Questions
-
Removing indexed internal search pages from Google when it's driving lots of traffic?
Hi, I'm working on an e-commerce site and the internal search results page is our 3rd most popular landing page. I've also seen in Search Console that Google has often used this page as the "Google-selected canonical" for a few pages, and it has thousands of these search pages indexed. Hoping you can help with the below: To remove these results, is it as simple as adding "noindex,follow" to search pages? Should I do it incrementally? There are parameters (brand, colour, size, etc.) in the indexed results, and maybe I should block each one of them over time. Will there be an initial negative impact on results I should warn others about? Thanks!
Intermediate & Advanced SEO | Frankie-BTDublin
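Before rolling noindex out across thousands of search pages, it can help to spot-check that the tag (or an equivalent X-Robots-Tag header) is actually being served on the parameterised variants. A minimal sketch, assuming Python with requests and beautifulsoup4 and placeholder URLs:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder internal-search URLs, including parameterised variants.
urls = [
    "https://www.example-shop.com/search?q=red+dress",
    "https://www.example-shop.com/search?q=red+dress&size=10",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "")
    meta = BeautifulSoup(resp.text, "html.parser").find(
        "meta", attrs={"name": "robots"})
    content = meta["content"] if meta else ""
    print(f"{url}\n  X-Robots-Tag: {header!r}\n  meta robots: {content!r}")
```
-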
Scary bug in Search Console: All our pages reported as being blocked by robots.txt after https migration
We just migrated to https and two days ago created a new property in Search Console for the https domain. The Webmaster Tools account for the https domain now shows, for every page in our sitemap, the warning: "Sitemap contains urls which are blocked by robots.txt." The Search Console dashboard also shows a red triangle warning that our root domain is blocked by robots.txt.
1) When I test the URLs in the Search Console robots.txt testing tool, all looks fine.
2) When I fetch as Google and render the page, it renders and indexes without problem (it would not if it were really blocked by robots.txt).
3) We temporarily emptied the robots.txt completely, submitted it in Search Console, and uploaded the sitemap again; same warnings, even though no robots.txt was online.
4) We ran a Screaming Frog crawl on the whole website and it indicates that no page is blocked by robots.txt.
5) We carefully reviewed the whole robots.txt and it does not contain any line that blocks relevant content on our site or our root domain (the same robots.txt was online for the last decade on the http version without problems).
6) In Bing Webmaster Tools I could upload the sitemap and so far no error is reported.
7) We resubmitted the sitemaps; same issue.
8) I already see our root domain with https in the Google SERPs.
The site is https://www.languagecourse.net. Since the site has significant traffic, if Google really interprets our site as blocked by robots.txt for any reason, we will be in serious trouble. This is really scary, so even if it is just a bug in Search Console and does not affect crawling of the site, it would be great if someone from Google could look into the reason for it, since for a site owner this really can raise cortisol to unhealthy levels. Has anybody ever experienced the same problem? Does anybody have an idea where we could report this issue?
Intermediate & Advanced SEO | lcourse
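A sanity check independent of both Search Console and Screaming Frog is to evaluate the live robots.txt with a third parser. A short sketch using Python's standard-library urllib.robotparser (the sample URLs stand in for the real sitemap list):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.languagecourse.net/robots.txt")
rp.read()  # fetch and parse the live file

# A few representative sitemap URLs (placeholders for the real list).
urls = [
    "https://www.languagecourse.net/",
    "https://www.languagecourse.net/some-course-page",
]

for url in urls:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'ALLOWED' if allowed else 'BLOCKED'}: {url}")
```
-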
Thousands of 503 errors in GSC for pages not important to organic search - Is this a problem?
Hi, folks. A client of mine now has roughly 30,000 503 errors (found in the crawl error section of GSC). These are mostly pages with limited offers and deals. The 503 error seems to occur when the offers expire and the page is of no use anymore. These pages are not important for organic search, but they get traffic from direct visits and newsletters, mostly. My question: does having a high number of 503 pages reported in GSC constitute a problem for the organic ranking of the domain and of the category and product pages (the pages that I want to rank for organically)? If it does, what is the best course of action to mitigate the problem? Looking excitedly forward to your answers 🙂 Sigurd
Intermediate & Advanced SEO | Inevo
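One detail worth knowing: 503 means "temporarily unavailable", so Google tends to keep retrying those URLs, whereas a permanently expired offer is usually better served with a 410 (or 404). A rough sketch (Python with the requests package; the URLs are placeholders) for sampling what the expired pages actually return:

```python
import requests
from collections import Counter

# Placeholder list; in practice, export the URLs from GSC's crawl errors.
expired_offers = [
    "https://www.example.com/deals/summer-sale-2015",
    "https://www.example.com/deals/weekend-offer-123",
]

codes = Counter()
for url in expired_offers:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    codes[resp.status_code] += 1
    print(url, "->", resp.status_code)

# 503 = "temporarily unavailable" (Google keeps retrying);
# 410/404 = gone, which fits permanently expired offers better.
print(codes)
```
-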
Mobile Search Results Include Pages Meant Only for Desktops/Laptops
When I put in site:www.qjamba.com on a mobile device, it comes back with some of my mobile-friendly pages for that site (same URL for mobile and desktop, just different formatting), and that's great. HOWEVER, it also shows a whole bunch of pages (not identified by Google as mobile-friendly) that are fine for desktop users but are not supposed to exist for mobile users, because they are too slow. Until a few days ago those pages were being redirected to the home page for mobile users. I have since changed that to 404 Not Founds. Do we know whether Google keeps a mobile index separate from the desktop index? If so, I would think the 404 should work. How can I test whether the 404 Not Founds will remove a URL so it DOESN'T appear on a mobile device (when I put in site:www.qjamba.com or a user searches) but DOES appear on a desktop for the same command?
Intermediate & Advanced SEO | friendoffood
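One way to test this without waiting for a recrawl is to request one of the desktop-only URLs with a smartphone User-Agent and confirm the 404 comes back; this assumes the mobile handling is user-agent based, as the question implies. A minimal sketch (Python with requests; the path and UA string are illustrative):

```python
import requests

# Illustrative smartphone User-Agent string.
MOBILE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) "
             "AppleWebKit/600.1.4 (KHTML, like Gecko) Mobile/12A365")

url = "http://www.qjamba.com/some-desktop-only-page"  # placeholder path

desktop = requests.get(url, timeout=10)
mobile = requests.get(url, timeout=10, headers={"User-Agent": MOBILE_UA})

# Expect 200 for the desktop request and 404 for the mobile UA
# if the user-agent-based handling is working as intended.
print("desktop:", desktop.status_code, "| mobile UA:", mobile.status_code)
```
-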
Why Is Google Indexing These Product Pages On Shopify?
How can we communicate to Google the exact product pages we'd like indexed on our site? We're an apparel company that uses Shopify as our ecommerce platform. The website is sportiqe.com. Currently, Google is indexing all types of different pages on our site. Example of a product page we want indexed: sportiqe.com/products/PRODUCT-TITLE. Examples of product pages being indexed: sportiqe.myshopify.com/products/PRODUCT-TITLE and sportiqe.com/collections/COLLECTION-NAME/products/PRODUCT-TITLE. See attached for an example of how two different "Boston Celtics Grateful Dead" shirts are being indexed. Any suggestions? We've used both Shopify and Google Webmaster Tools to set our preferred domain (sportiqe.com). We also added this canonical snippet to our site three months ago, thinking that would do the trick: {% if template == 'product' %}{% if collection %}<link rel="canonical" href="{{ shop.url }}{{ product.url }}" />{% endif %}{% endif %}
Intermediate & Advanced SEO | farmiloe
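To verify the canonical tag is actually being rendered on every duplicate path, a quick sketch (Python with requests and beautifulsoup4; the product slug is a placeholder) that extracts rel=canonical from each URL variant named in the question:

```python
import requests
from bs4 import BeautifulSoup

# The three URL patterns from the question; the slug is a placeholder.
variants = [
    "https://sportiqe.myshopify.com/products/example-shirt",
    "https://www.sportiqe.com/collections/example/products/example-shirt",
    "https://www.sportiqe.com/products/example-shirt",
]

for url in variants:
    html = requests.get(url, timeout=10).text
    link = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
    print(url, "->", link["href"] if link else "NO CANONICAL TAG")
```
-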
Does Google make continued attempts to crawl an old page once it has followed a 301 to the new page?
I am curious about this for a couple of reasons. We have all dealt with a site that switched platforms without planning properly and now has thousands of crawl errors. Many of the developers I have talked to have stated very clearly that the .htaccess file should not be used for thousands of single redirects. I figured that if I only needed them in there temporarily it wouldn't be an issue. I am curious: once Google follows a 301 from an old page to a new page, will it stop crawling the old page?
Intermediate & Advanced SEO | RossFruin
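Whichever mechanism ends up serving the redirects, it's worth confirming that each old URL returns a single 301 hop to the intended target. A small sketch (Python with requests; the mapping is illustrative, in practice loaded from a CSV):

```python
import requests

# Illustrative old-URL -> new-URL mapping (in practice, load from a CSV).
redirect_map = {
    "https://www.example.com/old-page-1": "https://www.example.com/new-page-1",
    "https://www.example.com/old-page-2": "https://www.example.com/new-page-2",
}

for old, expected in redirect_map.items():
    resp = requests.get(old, timeout=10, allow_redirects=False)
    location = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and location == expected
    print(f"{'OK ' if ok else 'BAD'} {old} -> {resp.status_code} {location}")
```
-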
Google Crawl Rate and Cached version - not updated yet :(
Hi, I've noticed that Google is not recognizing/crawling the latest changes on pages in my site - the last update shown when viewing the cached version in Google's results is over 2 months old. So, do I Fetch as Googlebot to force an update? Or do I remove the page's cached version via GWT's Remove URLs? Thanks, B
Intermediate & Advanced SEO | bjs2010
-
Meta Refresh tag on cached pages - GRRR!
Hi guys, all of our product pages originate at a URL with a unique number, which redirects to an SEO-friendly URL for the user. These product pages have blocks on the page, and these blocks are automatically populated from our database of content. Here's an example of the redirect in place: www.example.com/45643/xxxx.html redirects to www.example.com/seo-friendly-url.html. The development team did this for two reasons: 1) our internal search needs the unique numbered URLs, and 2) it allows quick redirects because pages are cached. The problem I face is this: the redirects from the cached pages are done with a 'meta refresh' tag - yup, they are 302s. The development team said they could stop caching and respond dynamically with a 301, but this would introduce a delay. Speed-wise, the cached pages load within 22ms and the dynamic ones in 530ms, so yeah, half a second more. Currently cached pages just do a meta-refresh-tagged redirect, and I want to move away from this. What would you guys recommend in such a situation? I feel that unless I serve a 301, I'll be losing out on rank juice.
Intermediate & Advanced SEO | Bio-RadAbs
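A quick way to audit which variant a given URL is serving, a real 301 versus a cached meta-refresh page, is to fetch it without following redirects and inspect both the status code and the body. A sketch (Python with requests and beautifulsoup4; the URL follows the pattern from the question):

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/45643/xxxx.html"  # numbered URL pattern

resp = requests.get(url, timeout=10, allow_redirects=False)
print("status:", resp.status_code)           # 301 is what we want to see
print("location:", resp.headers.get("Location"))

# A cached page typically comes back 200 with a meta refresh instead.
meta = BeautifulSoup(resp.text, "html.parser").find(
    "meta", attrs={"http-equiv": "refresh"})
if meta:
    print("meta refresh found:", meta.get("content"))
```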