Numbers vs #'s For Blog Titles
-
For your blog post titles, is it "better" to use numerals or write numbers out? For example, "3 Things I Love About People Answering My Constant Questions" or "Three Things I Love About People Answering My Constant Questions"?
I could see this being like the attorney/lawyer or ecommerce/e-commerce distinction, and therefore not a big deal. But I also thought you were supposed to avoid using numbers in your URLs.
Any thoughts?
Ruben
-
Thanks everyone.
Happy Friday,
Ruben
-
When it comes to people, "3" is easier to process than "three". You may even get a minor speed boost when the CMS queries the database, though I haven't tested that, and it may only apply to permalink structures like example.com/2014/08/28/sample-post and example.com/?p=123. Basically, the numeric version is easier for machines to process as well.
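To make the "easier for machines" half of that concrete, here's a trivial Python sketch. Nothing here is CMS-specific, and the lookup table is something I made up for illustration:

```python
# The numeral converts with one built-in call:
count = int("3")  # -> 3

# The spelled-out word only converts if you've enumerated it yourself:
WORD_TO_NUM = {"one": 1, "two": 2, "three": 3}
count = WORD_TO_NUM["three"]  # -> 3, and a KeyError for anything not listed
```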
However, Michael Gray at Gray Wolf SEO has one good reason to avoid the numeral in the URL itself: if the post ever grows from three things to five, a URL with "3" baked into it is permanently out of date. He definitely has a point. So keep the number in the title and leave it out of the URL, unless the information in the post, or the post itself, will never change.
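If you wanted to enforce that policy automatically, a slug helper could strip numerals before building the URL. This is a minimal illustrative sketch in Python, not WordPress's actual slug logic; `slugify` is a name I made up:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical helper: keep the numeral in the headline, but
    drop it from the slug so the URL stays accurate if the list
    later grows from 3 items to 5."""
    slug = title.lower()
    slug = re.sub(r"\b\d+\b", "", slug)      # drop standalone numbers
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse everything else to hyphens
    return slug.strip("-")

print(slugify("3 Things I Love About People Answering My Constant Questions"))
# -> things-i-love-about-people-answering-my-constant-questions
```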
There are plenty of other effective ways to increase page load speed, and that wasn't your concern anyway.
-
I would try to use both: maybe use one in the title and the other (or both) in the body of the post, so you can try to rank for both. I've done that a few times.
-
I would use the number: '3'
When searching, most people will use the number instead of writing it out.