Schema address question
-
I have a website with a contact us page, of course, and on that page I have schema markup for the address and a few other data points.
I also have the business address in the footer of every page.
Would it be wiser to attach the schema address markup to the footer instead of the contact page? And are there any best practices for how many times you can mark up the same data, and on which pages?
For example, if I put schema address markup on both the contact us page's main address and that page's footer, the same data would be marked up twice on one page, which could seem spammy.
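(For illustration, inline microdata on a footer address might look something like the sketch below; the business name and address are placeholders, not my actual markup.)

<!-- Hypothetical footer sketch: all business details are placeholders -->
<footer itemscope itemtype="https://schema.org/LocalBusiness">
  <span itemprop="name">Example Plumbing Co.</span>
  <address itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Main St.</span>,
    <span itemprop="addressLocality">Springfield</span>,
    <span itemprop="addressRegion">IL</span>
    <span itemprop="postalCode">62701</span>
  </address>
</footer>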
I haven't been able to find much best-practices information on schema out there.
Thanks, Cy
-
Do you have any results yet, Cyril? Curious how this is going.
-
Excellent. Keep me posted? We do much the same on our site as far as the address goes, and haven't had any issues (yet).
-
Thanks for the reply, Ian.
I'm about to schema the shit out of this site... use it as a guinea pig and see what happens.
-
My gut says put it on every page. From a local SEO standpoint, that should help, since it means more pages with location data on them.
I don't know of any best practices, though; I doubt there's a specific number of mentions that's better or worse.
On the contact us page itself, do what makes sense. To me, that means attaching the schema markup to both the footer address and the 'main' address.
Since that makes sense in context, and you're not shoehorning content in, you should be fine.
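One way to sidestep the duplication worry entirely is a single JSON-LD block per page, which declares the address once no matter how many times it appears visually. A minimal sketch, with placeholder business details:

<!-- Hypothetical example: all business details are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing Co.",
  "url": "https://www.example.com/",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St.",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701",
    "addressCountry": "US"
  }
}
</script>

With that approach, repeating the visible address in the footer and the body doesn't duplicate any markup, since the structured data lives in one place on the page.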