H1 tag found on page, but the grader says it doesn't match the keyword
-
We've run an on-page grader test on our home page, www.whichledlight.com, with the keyword 'led bulbs'.
It comes back saying there is an H1 tag, but that the tag apparently doesn't contain 'led bulbs'... which seems a bit odd, because the content of the tag is
'UK’s #1 Price Comparison Site for LED Bulbs'
I've used other SEO checkers, and some say we don't even have an H1 tag (or H2, H3, and so on) on any page.
Screaming Frog seems to think we have an H1 tag, though, and can also detect the content of the tag.
Any ideas?
** Update **
The website is a single-page app (EmberJS), so we use prerendering to create snapshots of the pages.
We were under the impression that Moz could crawl these prerendered pages fine, so we were a bit baffled as to why it would say we have an H1 tag but think the contents of the tag still don't match our keyword.
-
I checked the source with my default user agent (in this case Firefox) and did NOT see an H1 tag.
I checked with my user agent set to GoogleBot and DID see an H1 tag, which did have that keyword phrase in it.
I checked again with a default user agent, but this time with JavaScript disabled, and could not see anything at all on the viewable page (blank white page), though the source code was there without the H1 tag.
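If you want to reproduce that check without switching user agents in your browser, something along these lines works. This is just a rough sketch, not a recommendation of any particular tool: it assumes Node 18+ for the built-in fetch, and the user-agent strings are only examples.

```typescript
// Rough sketch: fetch the homepage with two different user agents and
// check whether the raw HTML response (no JavaScript executed) contains an <h1>.
// URL is the page from this thread; UA strings are examples only.

const url = "https://www.whichledlight.com/";

const userAgents: Record<string, string> = {
  browser:
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0",
  googlebot:
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
};

async function checkH1(label: string, userAgent: string): Promise<void> {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  const html = await res.text();
  const hasH1 = /<h1[\s>]/i.test(html);
  console.log(`${label}: H1 present in raw HTML? ${hasH1}`);
}

async function main(): Promise<void> {
  for (const [label, ua] of Object.entries(userAgents)) {
    await checkH1(label, ua);
  }
}

main().catch(console.error);
```

If the two responses differ, that's exactly the symptom described above: the server is deciding what to send based on who is asking.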
So it seems to me like you're pre-rendering the page for Googlebot, and including the H1 (and other heading tags) as part of a fully rendered page for search engines. However, because that header tag does not exist if you turn JavaScript off - or if you're not Google - there may be a risk of Google seeing this page as "cloaking".
Pre-rendering is good. It's not a "bad" type of cloaking if you serve the EXACT same page to search engines that you serve to everyone else. Unfortunately, this does not seem to be the case with the way this page is set up. Google sees one thing, other visitors (with or without JavaScript enabled) see something else.
I know developers are head-over-heels for single-page apps and JavaScript frameworks, but this stuff is starting to drive me nuts. It's like trying to optimize Flash sites all over again. On the one hand you have Google bragging about how great they are at crawling JavaScript, even going so far as to say pre-rendering is not necessary... and on the other hand there are clear, sustained organic search traffic drops whenever developers start turning flat HTML/CSS pages into these single-page JavaScript framework applications.
My advice: if you're going to pre-render a page for Google, (a) make sure the page a user with JavaScript enabled sees is exactly the same as what Google sees, and (b) see if you can pre-render pages for visitors without JavaScript enabled as well.
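For what it's worth, I obviously don't know your exact stack, but user-agent-based prerender routing is usually wired up roughly like the sketch below. This is an illustrative Express-style example, not your actual code; the bot pattern and the prerender service address are made up. It shows why only crawlers ever receive the rendered H1 while everyone else gets the empty app shell:

```typescript
// Illustrative sketch of user-agent-based prerender routing (not the site's
// actual code). Requests from known crawlers are proxied to a prerender
// service that returns fully rendered HTML; everyone else gets the bare
// single-page-app shell and relies on client-side JavaScript.
import express, { Request, Response, NextFunction } from "express";

const app = express();

const BOT_PATTERN =
  /googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit/i;
const PRERENDER_SERVICE = "http://localhost:3001/render?url="; // example address

app.use(async (req: Request, res: Response, next: NextFunction) => {
  const ua = req.headers["user-agent"] ?? "";
  if (BOT_PATTERN.test(ua)) {
    // Crawlers get the prerendered snapshot, including the real <h1>.
    const snapshot = await fetch(
      PRERENDER_SERVICE +
        encodeURIComponent(`https://www.whichledlight.com${req.originalUrl}`)
    );
    res.send(await snapshot.text());
  } else {
    // Everyone else falls through to the empty Ember app shell.
    next();
  }
});

// Fallback: serve the SPA shell (index.html and assets) for all other requests.
app.use(express.static("dist"));

app.listen(3000, () => console.log("listening on :3000"));
```

If you can serve that same rendered HTML (or a cached copy of it) to every visitor rather than only to bots, the mismatch - and the cloaking concern - largely disappears.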
-
Yes, I see what you mean.
We get the same thing if we view the source; Inspect Element shows it correctly.
I take it you mean the SEO checkers are checking the source code... before JS modifies it?
Do you think this is hurting our SEO?
-
I did a 'View Source' and an 'Inspect' on your homepage.
In View Source there was no H1 tag; however, in Inspect there is clearly an H1 tag (H2 and H3 exist too).
"View Source" typically shows what was received from the server before javascript modifies it. I suspect your developer wrote it this way to optimize for speed (with jQuery).
That being said, when you use the SEO checkers that claims you do not have a H1 tag, they are only reading the document and not the source code.
In short, yes, your website has H1, H2 and H3 tags.
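If it helps, the difference the checkers trip over is easy to demonstrate: compare the raw HTML the server returns with the DOM after a headless browser has executed the JavaScript. Here's a rough sketch; Puppeteer is just my choice of headless browser, the URL is your homepage, and Node 18+ is assumed for the built-in fetch:

```typescript
// Rough sketch: compare the server's raw HTML with the DOM after JavaScript
// has run in a headless browser. The raw response is what "View Source"
// (and many SEO checkers) see; the rendered DOM is what "Inspect" shows.
import puppeteer from "puppeteer";

const url = "https://www.whichledlight.com/";

async function main(): Promise<void> {
  // 1. Raw HTML as delivered by the server (no JavaScript executed).
  const raw = await (await fetch(url)).text();
  console.log("H1 in raw HTML:", /<h1[\s>]/i.test(raw));

  // 2. DOM after the single-page app has booted and rendered.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const h1Text = await page
    .$eval("h1", (el) => el.textContent?.trim() ?? "")
    .catch(() => null); // null if no <h1> exists in the rendered DOM
  console.log("H1 in rendered DOM:", h1Text);
  await browser.close();
}

main().catch(console.error);
```

A tool that only performs the first step will report no heading tags at all; a tool that performs the second will find them.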
Just curious, what result (content of the H1) did the on-page grader come out with?