H1 and Schema Codes Set Up Correctly?
-
Greetings:
It was pointed out to me that the h1 tags on my website (www.nyc-officespace-leader.com) all have exactly the same text, and that this duplication may be contributing to the very low page authority of most URLs.
The duplicate h1 appears around line 54 (see below) of the home page, www.nyc-officespace-leader.com:
<h1 itemscope itemtype="http://schema.org/LocalBusiness" style="position:absolute;top:-9999em;">
  <span itemprop="name">Metro Manhattan Office Space</span>
</h1>
<img< p="">But the above refers to schema" so is this really duplicate H1 or is there an exception if the H1 is within a schema?
Also, I was told that the company street address, city, and state were set up incorrectly as part of an alt tag. However, these items also appear as schema in lines 49-68, shown below.
It would be dangerous for me to perform surgery on the code without being certain about these key items!! I could ask my developer, however they may be uncomfortable with that, considering that they set this up in the first place. So the views of neutral professionals would be highly welcome!
itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
<span<br>itemprop="streetAddress">347 5th Ave #1008
<span<br>itemprop="addressLocality">New York
<span<br>itemprop="addressRegion">NY
<span<br>itemprop="postalCode">10016<div<br>itemprop="brand" itemscope itemtype="http://schema.org/Organization">
---------------------------------------------------------------------------</div<br></span<br></span<br></span<br></span<br></img<> -
For suggestion 1, I should clarify that you already are using Microdata. Your Microdata is repeating what is already on the page, rather than "tagging" your existing content inline. Microdata is a good tool to use if you are able to tag pieces of content as you are communicating them to a human reader; it should follow the natural flow of what you are writing to be read by humans. This guide walks you through how Microdata can be implemented inline with your content, and it's worth reading through to see what's available and how to move forward confidently with a manual implementation of Schema.org.
Will these solutions remove the duplicate H1 tag?
Whatever CMS or system you are using to produce the hidden microdata markup needs to be changed so that it stops emitting that hidden block entirely. The markup of the content itself is good, but it needs to be merged into your existing content or implemented with JSON-LD so that it does not duplicate the HTML you are showing the user.
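To make that concrete, here is a minimal sketch of the merged approach. The surrounding header structure is hypothetical (I don't know what actually wraps your rendered company name), and the address markup only belongs inline like this if the address is genuinely visible on the page:

<!-- Hypothetical visible header: merge the LocalBusiness itemscope into
     whatever element actually wraps your rendered company name. -->
<header itemscope itemtype="http://schema.org/LocalBusiness">
  <h1 itemprop="name">Metro Manhattan Office Space</h1>
  <p itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">347 5th Ave #1008</span>,
    <span itemprop="addressLocality">New York</span>,
    <span itemprop="addressRegion">NY</span>
    <span itemprop="postalCode">10016</span>
  </p>
</header>

Here the one visible h1 does double duty as both the headline users see and the itemprop="name" value, so there is no second, hidden h1 left to duplicate it.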
Are these options relatively simple for an experienced developer? Is one option superior to the other?
Both should be, but it depends on your strategy. Are you hand-rolling your schema.org markup? Is somebody going into your content and wrapping the appropriate content with the correct microdata? This can be a pain in the butt and time-consuming, especially if they're not tightly embedded with your content production team.
I downloaded the HTML and reviewed the Microdata implementation. I don't mean to sound unkind, but it looks like computer-generated HTML, and it's pretty difficult to read and manipulate because the tags aren't matched properly.
Is one option superior to the other?
Google can read either without issue; they recommend JSON-LD (source).
In your case, I'd also recommend JSON-LD because:
- Your investment in Microdata is not very heavy and appears easy enough to unwind
- The content you want to show users isn't exactly inline with the content you want read by crawlers anyway (for example, your address isn't on the page and visible to readers)
- It's simple enough to write by hand, and there exist myriad options for embedding programmatically-generated schema.org content in JSON-LD format
Please review this snippet comparing a Microdata solution and a JSON-LD solution side by side.
PLEASE DO NOT COPY AND PASTE THIS INTO YOUR SITE. It is meant for educational and demonstrative purposes only.
There are comments inline that should explain what's going on: https://gist.github.com/TheDahv/dc38b0c310db7f27571c73110340e4ef
-
Hi Again:
Will option #1 (keeping the existing microdata) remove the duplicate h1 tag? Your suggestion is quoted below:
"So, wherever the
tag with the company name lives that is rendered and shown to the user, ad the "LocalBusiness" itemscope to the parent tag that surrounds it and its content. Basically you'd merge your Schema.org code with the user-facing content"
-
Hi David:
Schema was added to the site discreetly to provide location data to Google.
You suggested 2 potential solutions:
1. Use Microdata...
2. Use JSON-LD...
Will these solutions remove the duplicate H1 tag?
We are concerned that the low authority of our URLs (80% have a page authority of 1) is caused by the duplicate H1s on each page.
Are these options relatively simple for an experienced developer? Is one option superior to the other?
Thanks for your patience in explaining these options; my programming understanding is limited.
Alan -
I see that you're using CSS to get that markup into the page, but definitely not visible to the user. Am I interpreting that right? If so, it seems like your goal is to get some Schema.org tags into the page to mark up your content as a LocalBusiness.
I have 2 ideas for you:
1. Use microdata (the markup format you're using now) to mark up your tags inline with your existing content. So, wherever the <h1> tag with the company name lives that is rendered and shown to the user, add the "LocalBusiness" itemscope to the parent tag that surrounds it and its content. Basically you'd merge your Schema.org code with the user-facing content.
2. Use JSON-LD markup instead. You can get the same information "repeated", but the JSON-LD markup isn't rendered for users. jsonld.com has a great page with a template you can copy and adjust to suit your business. If you go this route, remove the microdata-laden HTML hidden off the page with the inline CSS and replace it with the JSON-LD wrapped in a <script type="application/ld+json"> tag. Google also has some great documentation around the LocalBusiness type.
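As a rough sketch of what that script block could look like (the name and address values below are taken from your existing markup; anything beyond that, such as phone number or opening hours, you would add yourself):

<!-- A minimal, illustrative JSON-LD block; validate it with Google's
     structured data testing tools before deploying. -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "LocalBusiness",
  "name": "Metro Manhattan Office Space",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "347 5th Ave #1008",
    "addressLocality": "New York",
    "addressRegion": "NY",
    "postalCode": "10016"
  }
}
</script>

Because this lives inside a script tag, none of it renders as visible content, so it cannot create a duplicate h1.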
Hope that helps!