Why the crawl error "title missing or empty" when title and meta description are already in place?
-
I've been getting 73 "title missing or empty" warnings from the SEOmOZ crawl diagnostics.
This is weird, as I've installed the Yoast WordPress SEO plugin and all posts do have a title and meta description. So why these results? Can anyone explain what's happening? Thanks!!
Here are some of the links that are listed with "title missing or empty". Almost all of our blog posts were listed there.
http://www.gan4hire.com/blog/2011/are-you-here-for-good/
-
I see. Thanks so much for the effort to explain in detail.
So, is it because of the Yoast WordPress SEO plugin I used? Are you using that on your site? Have you had this problem? I only installed it just prior to the crawl; I was using All in One SEO earlier and that crawl didn't come back with this error.
Google and Bing seem to have no problem getting my titles, though. Should I fix it or just ignore the problem?
Thanks so much again!
-
Jason,
Go in and turn off your Twitter and G+1 plugins, then re-run the app. My guess is you will then see title tags through any Moz tool. If so, you can choose a different widget or move its placement. (When you deactivate the plugins, make sure you clear the cache before running the crawl.)
Hope it helps
-
Thanks Alan,
I like a little mystery hunt
-
Well picked up, Sha.
Impressed with your level of detail.
-
Hi Jason,
There is obviously something going on with this that is affecting what some crawlers are seeing on your pages.
I ran the Screaming Frog tool and it shows that the majority of your pages have empty titles, even though I can see that titles are loading in the browser.
On checking your code, I see that you are using the pragma directive meta element, but it actually appears below the title element in the code.
Example from your code:
<head>
<title>Are You Socially Awkward? | Branding Blog | The Bullet</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
So I ran the page through the W3C Markup Validation Service and it also indicates that it sees no character encoding declaration:
"No character encoding declared at document level. No character encoding information was found within the document, either in an HTML meta element or an XML declaration."
So, I believe the issue here may be related to the fact that the pragma directive should appear as close as possible to the top of the head element, i.e. before the title element.
The following is from the W3.org documentation on declaring character encoding. You will see that there is specific reference to the fact that the pragma directive is required in the case of XHTML 1.x documents, as yours is:
For XHTML syntax, you should, of course, have " />" after the content attribute, rather than just ">".
The encoding of the document is specified just after charset=. In this case the specified encoding is the Unicode encoding, UTF-8.
The pragma directive should be used for pages written in HTML 4.01. It should also be used for XHTML 1.x documents served as HTML, since the HTML parser will not pick up encoding information from the XML declaration.
In HTML5 you can either use this approach for declaring the encoding, or the newly specified meta charset attribute, but not both in the same page. The encoding declaration should also fit within the first 1024 bytes of the document, so you should generally put it immediately after the opening tag of the head element.
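As a sketch of the fix (using the title from one of your pages; everything else in the head is omitted), the top of the document would look like this:
<head>
<!-- Encoding declaration first, so it falls within the first 1024 bytes -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<!-- Title and everything else come after the encoding declaration -->
<title>Are You Socially Awkward? | Branding Blog | The Bullet</title>
</head>
In a WordPress install this ordering usually lives in the theme's header.php, so that is the place to check.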
Hope that helps,
Sha
-
Cool, thanks for the reminder, Keri. I thought the help desk would reply to this thread.
Sure, I'll post more information back on this thread once I get the answer.
-
Thanks for taking the time to check the site. I hope the next crawl, which runs next week, will come back clean. Will update you guys.
-
That's an interesting one. I'd email that to the help desk at help@seomoz.org to let them know about it. If there's some kind of cause of it that would be helpful for others to know, it'd be great if you could post more information back on this thread.
-
I just did a crawl on your site using Bing's toolkit, and I did not find any errors concerning titles.
In fact, your site has the best score I have ever got from a WordPress site. Usually a WordPress site is a mess, especially with unnecessary 301s.
I found only 2 HTML errors, 1 unnecessary redirect, and multiple H1s.
Wait for the next crawl; it may come good.
Related Questions
-
[Organization schema] Which Facebook page should be put in "sameAs" if our organization has separate Facebook pages for different countries?
Technical SEO | Telsenome
We operate in several countries and have this kind of domain structure:
example.com/us
example.com/gb
example.com/au
For our schemas we've planned to add an Organization schema on our top domain and let all pages point to it. This introduces a problem: we have a separate Facebook page for every country. Should we put one Facebook page in the "sameAs" array? Or all of our Facebook pages? Or should we skip it altogether?
Only one Facebook page:
{
"@type": "Organization",
"@id": "https://example.com/org/#organization",
"name": "Org name",
"url": "https://example.com/org/",
"sameAs": [
"https://www.linkedin.com/company/xxx",
"https://www.facebook.com/xxx_us"
]
}
All Facebook pages:
{
"@type": "Organization",
"@id": "https://example.com/org/#organization",
"name": "Org name",
"url": "https://example.com/org/",
"sameAs": [
"https://www.linkedin.com/company/xxx",
"https://www.facebook.com/xxx_us",
"https://www.facebook.com/xxx_gb",
"https://www.facebook.com/xxx_au"
]
}
Bonus question: this reasoning springs from the thought that we should only have one Organization schema. Or can we have multiple sub-organizations?
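For reference, a minimal sketch of the second option as it would be embedded in a page; note the "@context" property, which the fragments above omit but which JSON-LD requires (the domains and profile URLs are the placeholders from the question):
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/org/#organization",
  "name": "Org name",
  "url": "https://example.com/org/",
  "sameAs": [
    "https://www.linkedin.com/company/xxx",
    "https://www.facebook.com/xxx_us",
    "https://www.facebook.com/xxx_gb",
    "https://www.facebook.com/xxx_au"
  ]
}
</script>
-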
"Equity sculpting" with internal nofollow links
Technical SEO | Warren_Vick
I've been trying a couple of new site auditor services this week and they have both flagged the fact that I have some nofollow links to internal pages. I see this subject has popped up from time to time in this community. I also found a 2013 Matt Cutts video on the subject: https://searchenginewatch.com/sew/news/2298312/matt-cutts-you-dont-have-to-nofollow-internal-links
At a couple of SEO conferences I've attended this year, I was advised that nofollow on internal links can be useful so as not to squander link juice on secondary (but necessary) pages. I suspect many websites have a lot of internal links in their footers and are sharing the love with pages which don't really need to be boosted. These pages can still be indexed, just not given a helping hand to rank by strong pages. This "equity sculpting" (I made that up) seems to make sense to me, but am I missing something? Examples of these secondary pages include login pages, site maps (human-readable), policies, and arguably even the general contact page. Thoughts?
Regards,
Warren
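To illustrate the kind of markup being discussed, here is a hypothetical footer (the paths are made up) where the utility pages are nofollowed and the main pages are left followable:
<footer>
<!-- Main pages: followed as normal -->
<a href="/services/">Services</a>
<a href="/contact/">Contact</a>
<!-- Secondary (but necessary) pages: nofollowed -->
<a href="/login/" rel="nofollow">Log in</a>
<a href="/sitemap/" rel="nofollow">Site map</a>
<a href="/privacy-policy/" rel="nofollow">Privacy policy</a>
</footer>
-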
Duplicate Page Titles Issue in Campaign Crawl Error Report
Technical SEO | Rich-DC
Hello all! Looking at my campaign I noticed that I have a large number of 'duplicate page titles' showing up, but they are all just the paginated versions of the same archive, such as http://thelemonbowl.com/tag/chocolate/page/2 showing as a duplicate of http://thelemonbowl.com/tag/chocolate. Any suggestions on how to address this? Thanks!
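One widely used approach at the time of this thread was to mark up paginated archives with rel="prev"/"next" (a sketch using the URLs from the question; the page/3 URL is assumed). Making the paginated titles unique, for example by appending "Page 2", also addresses the duplicate-title warning:
<!-- In the head of /tag/chocolate/page/2 -->
<link rel="prev" href="http://thelemonbowl.com/tag/chocolate/" />
<link rel="next" href="http://thelemonbowl.com/tag/chocolate/page/3" />
-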
How do I get my pages to go from "Submitted" to "Indexed" in Google Webmaster Tools?
Technical SEO | Nate_D
Background: I recently launched a new site and it's performing much better than the old site in terms of bounce rate, page views, pages per session, session duration, and conversions. As suspected, sessions, users, and % new sessions are all down, which I'm okay with because the old site had a lot of low-quality traffic going to it. The traffic we have now is much more engaged and targeted. Lastly, the site was built using Squarespace and was launched in the middle of August.
Question: When reviewing the Sitemaps section of Google Webmaster Tools, I noticed it says 57 web pages submitted, but only 5 indexed! The sitemap that's submitted seems to be all there. I'm not sure if this is a Squarespace thing or what. Anyone have any ideas? Thanks!!
-
Webmaster Crawl errors caused by Joomla menu structure.
Technical SEO | dwallner
Webmaster Tools is reporting crawl errors for pages that do not exist due to how my Joomla menu system works. For example, I have a menu item named "Service Area" that holds 3 sub-items but has no actual page of its own. This results in URLs like domainDOTcom/service-area/service-page.html. Because the Service Area menu item is constructed in a way that shows the bot it is a link (the link is to "javascript:;"), I am getting a 404 error saying it can't find domainDOTcom/service-area/. Note, the error doesn't say domainDOTcom/service-area/javascript:;, it just says /service-area/.
What is the best way to handle this? Can I do something in robots.txt to tell the bot that /service-area/ should be ignored, but any page after /service-area/ is good to go? Should I just mark them as fixed, since it's not really a 404 a human will encounter, or is it best to somehow explain this to the bot? I was advised on the Google forums to try this, but I'm nervous about it:
Disallow: /service-area/*
Allow: /service-area/summerlin-pool-service
Allow: /service-area/north-las-vegas
Allow: /service-area/centennial-hills-pool-service
I tried a 301 redirect of /service-area to the home page, but then it pulls that out of the URL and my landing pages become 404's. http://www.lvpoolcleaners.com/
Thanks for any advice!
Derrick
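For what it's worth, a complete robots.txt block of the kind suggested would also need a User-agent line (a sketch of that suggestion, not a recommendation):
# Block the phantom /service-area/ URL but allow the real pages beneath it
User-agent: *
Disallow: /service-area/*
Allow: /service-area/summerlin-pool-service
Allow: /service-area/north-las-vegas
Allow: /service-area/centennial-hills-pool-service
-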
Impact of "restricted by robots" crawler error in WT
Technical SEO | phogan
I have been wondering about this for a while now with regard to several of my sites. I am getting a list of pages that I have blocked in the robots.txt file. If I restrict Google from crawling them, then how can it consider their existence an error? In one case, I have even removed the URLs from the index. Do you have any idea of the negative impact associated with these errors, and how do you suggest I remedy the situation? Thanks for the help.
-
How is my competition causing bad crawl errors and links on my site?
Technical SEO | ClaireH-184886
We have a competitor with whom we are in a legal dispute at the moment, and they are using underhand tactics to cause us to have bad links and crawl errors; I do not know how they are doing it or how to stop it. The crawl errors we are getting show the site having two URLs joined together, for example www.testsite.com/www.testsite.com, and other errors are pages that we do not even have, or pages that are spelt wrong or have a dot after the page name. We have been told by a number of people in our field that this has also happened to them, and I would like to know how they are doing it so we can have this stopped. Since they have been doing this our traffic has gone down by half.
-
"Too Many On-Page Links" Issue
Technical SEO | BethelMedia
I'm being docked for too many on-page links on every page of the site, and I believe it is because the drop-down nav has about 130 links in it. That's because we have a few levels of drop-downs, so you can get to any page from the main page. The site is here: http://www.ibethel.org/
Is what I'm doing just bad practice, and the drop-downs shouldn't give as much information? Or is there something different I should do with the links? Maybe a nofollow on the last tier of the drop-down?