JSON-LD metadata: Do you have any rules/recommendations for using BlogPosting vs Article?
-
Dear Moz Community.
I'm looking at moving from inline Microdata in the HTML to JSON-LD on the web pages that I manage. It seems a far simpler solution to have all the metadata in one place - especially for troubleshooting!
With this in mind, I've started to change the page templates on my personal site before I tackle the ones for my eCommerce site. I've made a start - I'm still working on having the templates produce some default values (for when a page doesn't have an associated image, for example) - but I've been wondering: do any of you have rules or recommendations for using BlogPosting vs Article?
I'd call this type of page an Article:
https://cycling-jersey-collection.com/browse-collection/selle-italia-chinol-seb-bennotto-1982-team-jersey
Whereas this page is from the /blog, so that should probably be a BlogPosting:
https://cycling-jersey-collection.com/blog/2017-worldtour-team-jerseys
I've used the following resources, but it would be great to get a discussion going on here:
https://yoast.com/structured-data-schema-ultimate-guide/
https://developers.google.com/search/docs/data-types/data-type-selector
https://search.google.com/structured-data/testing-tool/u/0/
I'm keen to get this 100% right, as once it's done I'm going to drive through some further changes to make progress on things like this:
https://moz.com/blog/ranking-zero-seo-for-answers
https://moz.com/blog/what-we-learned-analyzing-featured-snippets
Kind Regards
andy
-
Yes, getting the metadata to be perfect is my goal.
Interestingly, only about half of the competitors' sites have implemented it anywhere near well.
Perhaps correct implementation of Schema.org is given more priority in some sectors than in others?
Andy
-
It's a tough discussion. We're a news site, and we mark things up as NewsArticle - again, slightly different from both BlogPosting and Article. In the end it probably won't have a huge impact, as the schemas all sit in the same category. That's why it's probably best to simply make sure your Schema.org markup is top notch so you can benefit from it. Just having Schema.org markup on your page won't have a huge influence, as most of your competitors will have that as well.
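For what it's worth, the markup itself barely changes between the types. Below is a minimal JSON-LD sketch for the blog post you linked - the image, date, and publisher details are placeholder values, not taken from the actual page - and switching between BlogPosting, Article, and NewsArticle is a one-word change to @type:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "2017 WorldTour Team Jerseys",
  "url": "https://cycling-jersey-collection.com/blog/2017-worldtour-team-jerseys",
  "image": "https://cycling-jersey-collection.com/images/example.jpg",
  "datePublished": "2017-01-01",
  "author": {
    "@type": "Person",
    "name": "Andy"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Cycling Jersey Collection",
    "logo": {
      "@type": "ImageObject",
      "url": "https://cycling-jersey-collection.com/images/logo.png"
    }
  }
}
</script>
```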
Related Questions
-
Duplicated titles and meta descriptions
Hi, Dealing with both my duplicated titles and meta descriptions, I'm wondering if there's a "quick" win I could potentially implement ASAP. A bit of background:
Say I've got 4 pages structured this way:
domain.com/us/productA.html for the US
domain.com/gb/productA.html for the UK
domain.com/fr/productA.html for France
domain.com/de/productA.html for Germany
At the moment, both my page titles and meta descriptions are duplicated all over the place for product A. The title reads "Product A - company name". The MD is a bit better, being translated into all 3 languages (EN, FR, DE), and therefore the same for the US and the UK. Ideally, I would have unique page titles and MDs everywhere. However, due to time and resource constraints, I can't make that happen overnight. So my questions are pretty simple:
1. Can I create a rule for page titles to be "Product A - country - company name" or similar? Would that be enough to make the page titles unique? Is there any value in doing so?
2. Can I "localize" duplicate MDs by simply naming the country? I assume that is not enough in this case, as all the rest would be copied and pasted. Ideally, both my page titles and MDs would be completely unique, but I can't afford to do so in the short term. Thanks!
Technical SEO | GhillC
-
Hreflang tags vs. language tags
I have a site that is based in the US, but each page has several different versions for different regions. These versions live in folders (/en-us for the US English version, /en-gb for the UK English version, /fr-fr for the French version, etc.). Obviously, the French pages are in French. However, there are two versions of the site that are in English with little variation in the content. The pages all have a tag to indicate the language the page is in, but there are no hreflang tags to indicate that the pages are the same page in two different languages. My question is: do I need to go through and add hreflang tags to each page to reference each other, identifying to Google that these are not duplicate content issues but different language versions of the same content? Or will Google figure that out from the language tag?
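For reference, this is the kind of markup I mean - a sketch, with example.com standing in for our real domain - placed in the <head> of every version of the page (each version lists all the alternates, including itself):

```html
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/page.html" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/page.html" />
<link rel="alternate" hreflang="fr-fr" href="https://example.com/fr-fr/page.html" />
```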
Technical SEO | InterCall
-
Structured Data Issue
Hi, I found a few errors in Google Webmaster Tools under structured data. The error shows "Missing: name", but when I click 'Test Live Data' it shows 'All good'. We are currently using the Drupal CMS; please find the attached error screenshot. Please advise on this issue. Thanks!
Technical SEO | TrulyTravel
-
Timely use of robots.txt and meta noindex
Hi, I have been checking every possible resource on content removal, but I am still unsure how to remove already-indexed content.
When I use robots.txt alone, the URLs remain in the index; no crawl budget is wasted on them, but having 100,000+ completely identical login pages sitting in the omitted results can't mean anything good.
When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages.
When I use robots.txt and meta noindex together on existing content, I'm telling Google to ignore the content, but at the same time I'm restricting it from crawling the pages and seeing the noindex tag.
Robots.txt plus URL removal is still not a good solution either, as I have failed to remove directories this way; it seems that only exact URLs can be removed like that.
I need a clear solution that solves both issues (index and crawling). What I'm trying now is the following: I remove these directories (one at a time, to test the theory) from the robots.txt file, and at the same time I add the meta noindex tag to all the pages within the directory. The number of indexed pages should start decreasing (while useless page crawling increases), and once the number of indexed pages is low or zero, I would put the directory back into robots.txt and keep the noindex on all of the pages within it. Can this work the way I imagine, or do you have a better way of doing it? Thank you in advance for all your help.
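To illustrate the two mechanisms side by side (the /login/ path is just an example standing in for my identical login pages):

```
# robots.txt - blocks crawling, but URLs already in the index can linger there
User-agent: *
Disallow: /login/
```

```html
<!-- meta noindex - removes the page from the index, but only if
     Googlebot is allowed to crawl the page and actually see this tag -->
<meta name="robots" content="noindex">
```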
Technical SEO | Dilbak
-
SEO plugin by Yoast messing up my title/meta description
Hey guys, I'm having some issues with my WordPress blog, and I believe the SEO plugin by Yoast could be causing them. I have set a title and a tagline for my WordPress blog; these were set in Dashboard > Settings > General. Under "Titles and Metas" > Home in the plugin, the title is %%sitename%% %%page%% %%sep%% %%sitedesc%%, and the meta description is blank. The report on SEOmoz says my title is title + meta description, making it too long (too many characters). What could be the issue here? Thanks in advance!
Technical SEO | danielpett
-
To 301 redirect or not to 301 redirect? Duplicate content problem: www.domain.com and www.domain.com/en/
Hello, if your website is getting flagged for duplicate content between your main domain www.domain.com and your multilingual English path www.domain.com/en/, is it wise to 301 redirect the English multilingual pages to the main site? Please advise.
We recently installed the Joomfish component on one of our Joomla websites in an effort to streamline a Spanish translation of the site. The translation was a success and the new Spanish web pages were indexed, but unfortunately one of the web developers enabled the English part of the component, and some English web pages were also indexed under the multilingual English path www.domain.com/en/, which flagged us for duplicate content. I added a 301 redirect to send all visitors from the www.domain.com/en/ pages to the main www.domain.com/ pages. But is that the proper way of handling this problem? Please advise.
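The redirect I put in place is essentially the following - an Apache .htaccess sketch, assuming mod_rewrite is available, with domain.com as a placeholder as above:

```apache
# 301 every /en/ URL to its equivalent at the site root
RewriteEngine On
RewriteRule ^en/(.*)$ /$1 [R=301,L]
```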
Technical SEO | Chris-CA
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
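To be concrete, the rule I'm contemplating is simply this:

```
User-agent: *
Disallow: /js/
```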
Technical SEO | AndreVanKets
-
Use of Meta Tag - MSSmartTagsPreventParsing
We've inherited some sites from another developer that had the following tag: <meta name="MSSmartTagsPreventParsing" content="true" />. All references I can find to it are from 2004. What is its purpose, and is it worth including in the pages/sites we build?
Technical SEO | wcksmith