Why crawl error "title missing or empty" when there is already "title and meta desciption" in place?
-
I've been getting 73 "title missing or empty" warnings from the SEOmoz crawl diagnostics.
This is weird, as I've installed the Yoast WordPress SEO plugin and all posts do have a title and meta description. So why these results? Can anyone explain what's happening? Thanks!!
Here are some of the links that are listed with "title missing, empty". Almost all of our blog posts were listed there.
http://www.gan4hire.com/blog/2011/are-you-here-for-good/
-
I see. Thanks so much for the effort to explain in detail.
So, is it because of the Yoast WordPress SEO plugin I used? Are you using that for your site? Do you have this problem? I just installed it prior to the crawl; I was using All in One SEO earlier and the crawl didn't come back with this error.
Google and Bing seem to have no problem getting my title, though. Should I fix it or just ignore the problem?
Thanks so much again!
-
Jason,
Go in and turn off your Twitter and G+1 plugins, and then re-run the app. My guess is you will then see title tags through any Moz tool. If so, you can choose a different widget or move its placement. (When you deactivate the plugins, make sure you clear the cache before running the crawl.)
Hope it helps
-
Thanks Alan,
I like a little mystery hunt
-
Well picked up, Sha.
Impressed with your level of detail.
-
Hi Jason,
There is obviously something going on with this that is affecting what some crawlers are seeing on your pages.
I ran the Screaming Frog Tool and it shows that the majority of your pages have empty Titles even though I can see that there are Titles loading in the browser.
On checking your code, I see that you are using the pragma directive meta element, but it actually appears below the Title element in the code.
Example from your code:
<head>
<title>Are You Socially Awkward? | Branding Blog | The Bullet</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
So I ran the page through the W3C Markup Validation Service and it also indicates that it sees no character encoding declaration:
No Character encoding declared at document level
No character encoding information was found within the document, either in an HTML meta element or an XML declaration.
So, I believe the issue here may be related to the fact that the pragma directive should appear as close as possible to the top of the head element, i.e. before the Title element.
The following is from the W3.org documentation on declaring character encoding. You will see that there is specific reference to the fact that the use of the pragma directive is required in the case of XHTML 1.x documents as yours is:
For XHTML syntax, you should, of course, have " />" after the content attribute, rather than just ">".
The encoding of the document is specified just after charset=. In this case the specified encoding is the Unicode encoding, UTF-8.
The pragma directive should be used for pages written in HTML 4.01. It should also be used for XHTML 1.x documents served as HTML, since the HTML parser will not pick up encoding information from the XML declaration.
In HTML5 you can either use this approach for declaring the encoding, or the newly specified meta charset attribute, but not both in the same page. The encoding declaration should also fit within the first 1024 bytes of the document, so you should generally put it immediately after the opening tag of the head element.
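Putting that together, a corrected head for the example page above would look something like this sketch (only the Title and the pragma directive are taken from your actual source; the comments and trailing elements are placeholders):

```html
<head>
<!-- Encoding declaration moved to the top of head, within the first 1024 bytes -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Are You Socially Awkward? | Branding Blog | The Bullet</title>
<!-- ...the rest of your meta elements, stylesheets and scripts follow... -->
</head>
```

With the pragma directive first, validators and crawlers can determine the character encoding before they reach the Title element.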
Hope that helps,
Sha
-
Cool. Thanks for the reminder, Keri. I thought the help desk would reply to this thread.
Sure, I'll post more information back on this thread once I get the answer.
-
Thanks for checking out the site. I hope the next crawl, which will be next week, will be good. Will update you guys.
-
That's an interesting one. I'd email that to the help desk at help@seomoz.org to let them know about it. If there's some kind of cause of it that would be helpful for others to know, it'd be great if you could post more information back on this thread.
-
I just did a crawl on your site using Bing's SEO Toolkit, and I did not find any errors concerning titles.
In fact, your site has the best score I have ever got from a WordPress site. Usually a WordPress site is a mess, especially with unnecessary 301s.
I found only 2 HTML errors, 1 unnecessary redirect, and multiple H1s.
Wait for the next crawl; it may come good.
Related Questions
-
WMT "Index Status" vs Google search site:mydomain.com
Hi - I'm working for a client with a manual penalty. In their WMT account they have 2 pages indexed. If I search for "site:myclientsdomain.com" I get 175 results, which is about right. I'm not sure what to make of the 2 indexed pages - any thoughts would be very appreciated.
Technical SEO | JohnBolyard
-
Meta tags
Hello, Does anyone know how long it takes for meta descriptions to show up in Google? I ask because I just updated the meta descriptions for the whole website, but while the Moz toolbar is showing them correctly, Google is still showing the old ones, even after I used the Fetch as Googlebot tool from Webmaster Tools. Thanks for a reply
Technical SEO | socialengaged
Eugenio | Social Engagement
-
404 error
Both SEOmoz and Google Webmaster Tools are returning over 4000 404 errors. The majority of the returned error URLs are for images, and all the URLs end with %20target= as shown below:
images/products/detail/AD9058RoundGlassTableChairs.jpg%20target=
images/products/detail/BM921ModernRoundDiningTable.jpg%20target=
images/products/detail/CR701506CappuccinoCoffeeTableSet.jpg%20target=
Any suggestions?
Regards, Tony
Technical SEO | OCFurniture
-
Will I still get Duplicate Meta Data Errors with the correct use of the rel="next" and rel="prev" tags?
Hi Guys, One of our sites has an extensive number of category page listings, so we implemented the rel="next" and rel="prev" tags for these pages (as suggested by Google below). However, we still see duplicate meta data errors in SEOmoz crawl reports and also in Google Webmaster Tools. Does the SEOmoz crawl tool test for the correct use of rel="next" and rel="prev" tags and not list meta data errors if the tags are correctly implemented? Or is it still necessary to use unique meta titles and meta descriptions on every page, even though we are using the rel="next" and rel="prev" tags, as recommended by Google? Thanks, George
Technical SEO | gkgrant
Implementing rel="next" and rel="prev"
If you prefer option 3 (above) for your site, let's get started! Let's say you have content paginated into the URLs:
http://www.example.com/article?story=abc&page=1
http://www.example.com/article?story=abc&page=2
http://www.example.com/article?story=abc&page=3
http://www.example.com/article?story=abc&page=4
On the first page, http://www.example.com/article?story=abc&page=1, you'd include in the <head> section:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />
On the second page, http://www.example.com/article?story=abc&page=2:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
On the third page, http://www.example.com/article?story=abc&page=3:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />
And on the last page, http://www.example.com/article?story=abc&page=4:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />
A few points to mention:
The first page only contains rel="next" and no rel="prev" markup.
Pages two to the second-to-last page should be doubly-linked with both rel="next" and rel="prev" markup.
The last page only contains markup for rel="prev", not rel="next".
rel="next" and rel="prev" values can be either relative or absolute URLs (as allowed by the <link> tag). And, if you include a <base> link in your document, relative paths will resolve according to the base URL.
rel="next" and rel="prev" only need to be declared within the <head> section, not within the document <body>.
We allow rel="previous" as a syntactic variant of rel="prev" links.
rel="next" and rel="previous" on the one hand and rel="canonical" on the other constitute independent concepts. Both declarations can be included in the same page. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
<link rel="canonical" href="http://www.example.com/article?story=abc&page=2" />
rel="prev" and rel="next" act as hints to Google, not absolute directives. When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s), and rely on our own heuristics to understand your content.
-
How unique does a page need to be to avoid "duplicate content" issues?
We sell products that can be very similar to one another. Product example: Power Drill A and Power Drill A1. With these two hypothetical products, the only real difference between the two pages would be a slight change in the URL and a slight modification in the H1/Title tag. Are these two slight modifications significant enough to avoid a "duplicate content" flagging? Please advise, and thanks in advance!
Technical SEO | WhiteCap
-
Site Crawl
I was wondering if there is a way to use SEOmoz's tools to quickly and easily find all the URLs on your site, and not just the ones with errors. The site that I am working on does not have a site map. What I am trying to do is find all the URLs along with their titles and description tags. Thank you very much for your help.
Technical SEO | pakevin
-
Impact of "restricted by robots" crawler error in WT
I have been wondering about this for a while now with regard to several of my sites. I am getting a list of pages that I have blocked in the robots.txt file. If I restrict Google from crawling them, then how can they consider their existence an error? In one case, I have even removed the URLs from the index. Do you have any idea of the negative impact associated with these errors? And how do you suggest I remedy the situation? Thanks for the help.
Technical SEO | phogan
-
Am I missing something?
Hey guys, First of all, a big thanks to SEOmoz and the community. I've been an avid reader for about a year now and have seen some great improvements. I'm always focusing my efforts on strategies that work well for my niche. However, I've come across one of my competitors who doesn't seem to have much going for him, although he ranks very well. His root domain is ****E and the URL where (seemingly) spammed links point to is ***. If you do a site: command he has 1000+ pages, although most consist of "events calendar" (empty). Also, I ran some of his content through Copyscape and there seem to be multiple versions of it throughout the web. After all this, he ranks very well for money keywords in our niche, although his on-page is horrible, so there are many opportunities I've capitalized on. Is there something I'm missing? I'm trying to find the value in his website, but it's not very clear to me, since his backlink profile is (seemingly) junk and his on-page goes against all I was told to implement.
Technical SEO | reegs