On-Page Markup: Still a Worthwhile Practice?
-
So I have a question for the community that hopefully someone can help me with.
Previously, whenever I created or edited content for a website, I would bold the keywords, italicize certain items, add internal/external links, and generally mark up the content. More recently, however, I've noticed that both my client and many of their leading competitors have abandoned this practice.
Now it appears that all the text is plain, there are rarely bold or italicized items, and there does not seem to be as much emphasis on inserting internal/external links. While I understand the latter (linking) to still be an effective, holistic approach to SEO, I'm wondering why the former (the bold and italicized text variation) has gone by the wayside.
So with that, is adding bold/italicized text still a worthwhile SEO technique and is it something I should continue applying to sites I work on? Please advise.
-
If you think about the original use of bold/italicized text, it was to emphasize certain words/phrases/concepts to readers. Search engines would then use the markup as an indicator of which concepts were most important on the page, and which search queries the page made the most sense to rank for. Then came SEOs who overused the once-valid editorial tactic to manipulate search engine algos & rankings. The same thing happened with linking - the original purpose was to help users navigate from page to relevant page, but SEOs went overboard, focusing too heavily on rankings & links (spam/black hat SEO) and not enough on user experience. All 3 of these tactics are now scrutinized via specific content & link quality algorithms.
I do not recommend bolding, italicizing, or even linking 'just for SEO.' However, if bolding or italicizing certain parts of the text will benefit the reader (helping them better understand your point), or if linking to other pages on your site would provide the visitor with more relevant, useful information that helps with their purchasing decision, then by all means, add the markup. In general, I try not to do anything "just for SEO" ;) Always ask whether a tactic would benefit the user - more often than not, user-friendly efforts will also be search-friendly.
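To make the "markup for readers, not for rankings" distinction concrete, here is a minimal HTML sketch; the copy, anchor text, and URL are invented for illustration:

```html
<!-- Emphasis and an internal link used because they help the reader -->
<p>
  Most refunds are processed within <strong>3 to 5 business days</strong>.
  Gift card refunds take <em>slightly longer</em>; see the
  <a href="/returns/gift-card-refunds">gift card refund policy</a> for details.
</p>

<!-- The pattern to avoid: bolding a keyword purely for search engines -->
<p><strong>cheap red widgets</strong> are the best place to buy <strong>cheap red widgets</strong> online.</p>
```

The first block emphasizes details a visitor actually needs and links to genuinely related information; the second only repeats a target phrase.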
Related Questions
-
Anyone suspect that a site's total page count affects SEO?
I've been trying to find out the underlying reason why so many websites are ranked higher than mine despite seemingly having far worse links. I've spent a lot of time researching and have read through all the general advice about what could possibly be hurting my site's SEO, from page speed to h1 tags to broken links, and all the various on-page SEO optimization stuff, so the issue here isn't very obvious. From viewing all of my competitors, they seem to have a much higher number of web pages on their sites than mine does. My site currently has 20 pages or so and most of my competitors are well into the hundreds, so I'm wondering if this could potentially be part of the issue. I know Google has never officially said that page count matters, but does anyone suspect that it might, and that competing sites with more total pages than you might have an advantage SEO-wise?
Algorithm Updates | ButtaC
Meta robots on every page rather than robots.txt for blocking crawlers? How will pages get indexed if we block crawlers?
Hi all, The suggestion to use the meta robots tag rather than the robots.txt file is to make sure pages do not get indexed if links to them are available anywhere on the internet. I don't understand how the pages would be indexed if the entire site is blocked. Even though links to the pages are available, will Google really index them? One of our sites was blocked via the robots.txt file, but internal links to it have been available on the internet for years and have not been indexed. So technically the robots.txt file is quite enough, right? Please clarify and guide me if I'm wrong. Thanks
Algorithm Updates | vtmoz
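For context, here is a minimal sketch of the two mechanisms being compared (the directives are standard, but treat the paths and values as illustrative). A robots.txt block stops crawling, yet a blocked URL can still be indexed without its content if other sites link to it; a meta robots noindex tag requires the page to stay crawlable so Google can see the tag and then drop the page from the index.

```
# robots.txt at the site root - blocks crawling of the entire site,
# but URLs that are linked from elsewhere can still appear in the index
User-agent: *
Disallow: /
```

```html
<!-- Meta robots in the <head> of an individual page - the page must remain
     crawlable so the tag can be seen, after which it is kept out of the index -->
<meta name="robots" content="noindex, follow">
```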
Rel canonical on every page of a WordPress CMS website
Can we have rel=canonical across all pages of a WordPress CMS website? I don't understand why each page has been set as its own canonical, yet canonical tags are not used for the duplicate pages.
Algorithm Updates | vtmoz
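For reference, a self-referencing canonical is what many WordPress SEO plugins output on every page by default; a minimal sketch with a placeholder URL:

```html
<!-- In the <head> of https://example.com/sample-post/ -->
<!-- The page points at its own preferred URL, which also consolidates
     parameter or www/non-www variants that would otherwise look like duplicates -->
<link rel="canonical" href="https://example.com/sample-post/">
```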
Not sure whether to update existing page or create a new one
Hello guys, So far I have found this Q&A to be very helpful. I have a couple of product pages on my website which rank very low in the search results. This was because in the past they were not at all optimized in terms of title, description, etc. I would like to now optimize these pages, get some inbound links to them, etc. I'm not sure whether to do this with the existing product pages or create new ones. I'm afraid that if I optimize the current pages, the low ranking from before will carry over. Would it be better to start fresh? Thank you, Pravin
Algorithm Updates | goforgreen
Best Practices for Page Titles | RSS Feeds
Good Morning MOZers, Quick question for the community: when creating an RSS feed for one of your websites, how do you title your RSS feed? Currently, the sites I'm managing use 'rss.xml' for the file name, but I was curious to know whether or not it would, in any way, benefit my SERP rankings if I were to add my domain to precede the 'rss.xml', i.e. 'my-sites-rss.xml' or something of that nature. Beyond that, are there any 'best practices' for creating RSS feed page titles, or is there a preferred method of implementation? Anybody have any solutions?
Algorithm Updates | NiallSmith
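As a point of reference, a feed is usually surfaced through an autodiscovery link and its own title rather than its file name; a minimal sketch, with the site name and URL as placeholders:

```html
<!-- In the site's <head>: the title attribute is what feed readers
     and crawlers typically display, whatever the file is called -->
<link rel="alternate" type="application/rss+xml"
      title="Example Site Blog RSS Feed"
      href="https://www.example.com/rss.xml">
```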
Trying to figure out why one of my popular pages was de-indexed from Google.
I wanted to share this with everyone for two reasons: 1. To try to figure out why this happened, and 2. To let everyone be aware of this so you can check some of your own pages if needed. Someone on Facebook asked me a question that I knew I had answered in this post. I couldn't remember what the URL was, so I googled some of the terms I knew were in the page, and the page didn't show up. I did some more searches and found out that the entire page was missing from Google. This page has a good number of shares, comments, Facebook likes, etc. (i.e. social signals), and there are certainly no black/gray hat techniques being used on my site. This page received a decent amount of organic traffic as well. I'm not sure when the page was de-indexed, and I wouldn't have even known if I hadn't tried to search for it via Google, which makes me concerned that perhaps other pages are being de-indexed. It also concerns me that I may have done something wrong (without knowing) and perhaps other pages on my site are going to be penalized as well. Does anyone have any idea why this page would be de-indexed? It sure seems like all the signals are there to show Google this page is unique and valuable. Interested to hear some of your thoughts on this. Thanks
Algorithm Updates | NoahsDad
Stop Google indexing CDN pages
Just when I thought I'd seen it all, Google hits me with another nasty surprise! I have a CDN to deliver images, JS, and CSS to visitors around the world. I have no links to static HTML pages on the site, as far as I can tell, but someone else may have - perhaps a scraper site? Google has decided the static pages it was able to access through the CDN have more value than my real pages, and it seems to be slowly replacing my pages in the index with the static pages. Anyone got an idea on how to stop that? Obviously, I have no access to the static area, because it is in the CDN, so there is no way I know of to have a robots file there. It could be that I have to trash the CDN and change it to only allow the image directory, and maybe set up a separate CDN subdomain for content that only contains the JS and CSS? Have you seen this problem and beaten it? (Of course the next thing is Roger might look at Google results and start crawling them too, LOL) P.S. The reason I am not asking this question in the Google forums is that others have asked it many times and nobody at Google has bothered to answer over the past 5 months, and nobody who did try gave an answer that was remotely useful. So I'm not really hopeful of anyone here having a solution either, but I expect this is my best bet because you guys are always willing to try.
Algorithm Updates | loopyal
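One commonly suggested pattern here, assuming the CDN can serve its own robots.txt at the CDN hostname (that capability is provider-specific and, as noted above, may not be available), is to block crawling of the mirrored HTML while leaving static assets fetchable; the hostname and directory names are placeholders, and Google does honor the Allow directive shown:

```
# robots.txt served at https://cdn.example.com/robots.txt (hypothetical)
# Block everything by default, then re-allow the asset directories
User-agent: *
Disallow: /
Allow: /images/
Allow: /css/
Allow: /js/
```

This is a sketch rather than a guaranteed fix; copies that are already indexed can take time to drop out once they can no longer be crawled.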
Should I block non-informative pages from Google's index?
Our site has about 1000 pages indexed, and the vast majority of them are not useful and/or contain little content. Some of these are:
- Galleries
- Pages of images with no text except for navigation
- Popup windows that contain further information about something but contain no navigation, and sometimes only a couple of sentences
My question is whether or not I should put a noindex in the meta tags. I think it would be good because the ratio of quality to low-quality pages right now is not good at all. I am apprehensive because if I'm blocking more than half my site from Google, won't Google see that as a suspicious or bad practice?
Algorithm Updates | UnderRugSwept
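For reference, the tag being considered is a one-line addition to the head of each thin page; the 'follow' value keeps the page's links crawlable even though the page itself is excluded (a minimal sketch):

```html
<!-- In the <head> of each gallery, image-only, or popup page -->
<meta name="robots" content="noindex, follow">
```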