Google stripping down Page Titles
-
When viewing pages indexed by Google, I've noticed the page titles have been stripped down as follows:
Actual Page Title: CITY Keyword - STATE keyword
Google Indexed Page Title: 1 - Domain.com
None of the keywords in the actual page title are present; all words have been replaced with a random digit - Domain.com. We launched a new version of the site several months back. Any idea what could be causing this?
-
There are many reasons why Google does this, but it may be that they think you are keyword stuffing.
-
From what I've seen, Google does what it wants. You can try to influence this with the noodp and noydir meta tags, but Google may not follow those either. Also, try making your title
CITY Keyword | STATE Keyword and see if that helps. A dash basically makes the title read as "one sentence", while the pipe has (for us) seemed to split it in two - so instead of one phrase containing the same keyword twice, you get two separate phrases that each use it once.
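If it helps, here is a sketch of the tags mentioned above (note that noodp and noydir refer to the old ODP/Yahoo! directory listings, so how much weight they carry is up to Google):

    <!-- Ask engines not to substitute directory titles/descriptions -->
    <meta name="robots" content="noodp, noydir">
    <!-- The pipe-separated title format suggested above -->
    <title>CITY Keyword | STATE Keyword</title>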
-
Hi,
Google's position is basically: we know better; you can try to write a title we like, but we reserve the right to do whatever we think will make people click on your result.
More on this:
https://yoast.com/google-page-title/
http://www.wordtracker.com/academy/google-changing-title-tags
Hope this helps!
Related Questions
-
How long will old pages stay in Google's cache? We have a new site that is two months old, but we are still seeing old pages even though we used 301 redirects.
Two months ago we launched a new website (same domain) and implemented 301 redirects for all of the pages. Two months later we are still seeing old pages in Google's cache. So how long should I tell the client it will take for them all to be removed from search?
Intermediate & Advanced SEO | | Liamis0 -
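For context, a 301 redirect of the kind described above might look like this in an Apache .htaccess file (a minimal sketch only; the question doesn't say which server is in use, and the domain and paths are placeholders):

    # Permanent redirects from old URLs to their new equivalents
    Redirect 301 /old-page.html https://www.example.com/new-page/
    # Pattern-based redirect for a whole renamed section
    RedirectMatch 301 ^/old-category/(.*)$ https://www.example.com/new-category/$1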
Why isn't Google caching our pages?
Hi everyone, We have a new content marketing site that allows anyone to publish checklists. Each checklist is being indexed by Google, but Google is not storing a cached version of any of our checklists. Here's an example:
https://www.checkli.com/checklists/ggc/a-girls-guide-to-a-weekend-in-south-beach
Missing cache:
https://webcache.googleusercontent.com/search?q=cache:DfFNPP6WBhsJ:https://www.checkli.com/checklists/ggc/a-girls-guide-to-a-weekend-in-south-beach+&cd=1&hl=en&ct=clnk&gl=us
Why is this happening? How do we fix it? Is this hurting the SEO of our website?
Intermediate & Advanced SEO | | Checkli0 -
Should I use noindex or robots.txt to remove pages from the Google index?
I have a Magento site and just realized we have about 800 review pages indexed. The /review directory is disallowed in robots.txt, but the pages are still indexed. From my understanding, robots.txt means Google will not crawl the pages, BUT the pages can still be indexed if they are linked from somewhere else. I can add the noindex tag to the review pages, but Google won't crawl them to see it while they are blocked. https://www.seroundtable.com/google-do-not-use-noindex-in-robots-txt-20873.html Should I remove the robots.txt disallow and add the noindex? Or just add the noindex to what I already have?
Intermediate & Advanced SEO | | Tylerj0 -
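For reference, the noindex route discussed above would mean removing the Disallow: /review/ line from robots.txt so the pages can be crawled, and then serving a robots meta tag on each review page (a sketch of the standard directive; where exactly to place it in a Magento template is not covered here):

    <!-- On each /review page: crawlable, but excluded from the index -->
    <meta name="robots" content="noindex, follow">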
Does a >70 character title tag affect a page's ranking in search?
We are a publication that puts out hundreds of articles a month. We have 5,000+ medium priority errors showing that our title element tags are too long. The title tag is structured like this: [Headline] | [Publication Name that is 23 characters]. However, since we are a publication, it's not practical for us to limit the length of our title tags to 70 characters or less, because doing so would make the titles of our content seem very unnatural. We also don't want to remove the branding, because we want it to go with the article when it's shared (and to appear when some titles are short enough to allow room in SERPs). I understand the reasons for limiting titles to 70 characters or less with regard to SERP friendliness: we try to keep key phrases at the front, people are more likely to click on a page if they know what it's about, etc. My question is, do the longer titles affect the ability of the page to rank in search? To put it a different way, if we altered all 5,000+ of the title tags to fit within 70 characters, would the page authorities and our site's domain authority increase? I'd like to avoid needing to clean up 5,000 pages if the medium priority errors aren't really hurting us. Any input is appreciated. Thanks!
Intermediate & Advanced SEO | | CatBrain1 -
How to avoid content cannibalization? How do I control which page is the landing page?
Hi All, To clarify my question I will give an example. Let's assume that I have a laptop e-commerce site and that one of my main categories is Samsung Laptops. The category page shows lots of laptops and a small section of text. On the other hand, in my article section I have a HUGE article about Samsung laptops. If we consider the two-word phrase each page is targeting, then the answer is the same - Samsung Laptops. In the article I point to the category page using anchor text such as "buy samsung laptops" or "samsung laptops", and on the category page (my wishful landing page) I point to the article with "learn about samsung laptops" or "samsung laptops pros and cons". Thanks
Intermediate & Advanced SEO | | BeytzNet0 -
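For context, the cross-linking scheme described above amounts to something like this (a sketch; the URLs are hypothetical placeholders, not from the question):

    <!-- In the article, pointing at the category page (the intended landing page) -->
    <a href="/samsung-laptops/">buy samsung laptops</a>
    <!-- On the category page, pointing back at the article -->
    <a href="/articles/samsung-laptops-guide/">samsung laptops pros and cons</a>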
Does rel=canonical fix duplicate page titles?
I implemented rel=canonical on our pages, which helped a lot, but my latest Moz crawl is still showing lots of duplicate page titles (2,000+). There are other ways to get to these pages (depending on what feature you clicked, the page will have a different URL), but they all have the same page title. Does having rel=canonical in place fix the duplicate page title problem, or do I need to change something else? I was under the impression that the canonical tag would address this by telling the crawler which URL was the canonical one, and that the crawler would only use that one for the page title.
Intermediate & Advanced SEO | | askotzko0 -
Duplicate page titles Wordpress SEO/Yoast
Hi, I have a WordPress site using the WordPress SEO plugin by Yoast. Everything appears to be fine, except that on 1 Feb the SEOmoz crawl suddenly picked up a bunch of errors. The errors are duplicate page titles, and these exist only for the mysite.com/page/X pages. I can't find any setting in Yoast that looks wrong or tells me how to fix this. The pages are also dynamically canonicalizing to themselves - not sure if this makes any difference, although I don't know how this is happening. Does anyone know how to fix this duplicate title error? Alex
Intermediate & Advanced SEO | | alextanner0 -
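For what it's worth, one common approach to distinct titles on /page/X archives (an assumption about Yoast's template variables; worth verifying against the plugin's documentation for your version) is to include the page-number variable in the title template:

    %%title%% %%page%% %%sep%% %%sitename%%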
Googlebot vs. Google mobile bot
Hi everyone 🙂 I seriously hope you can come up with an idea for a solution to the problem below, because I am kinda stuck 😕 Situation: A client of mine has a webshop located on a hosted server. The shop is made in a closed CMS, meaning that I have very limited options for changing the code: limited access to the page head, and within the CMS I can only use JavaScript and HTML. The only place I have access to a server-side language is in the root, where a Default.asp file redirects the visitor to the specific folder where the webshop is located. The webshop has two "languages"/store views: one for normal browsers and Googlebot, and one for mobile browsers and Google's mobile bot. In the Default.asp (Classic ASP) I do a test for user agent and redirect the user to either the main domain or the mobile subdomain. All good, right? Unfortunately not. Now we arrive at the core of the problem. Since the mobile shop was added at a later date, Google already had most of the pages from the shop in its index, and apparently uses them as entrance pages to crawl the site with the mobile bot. Hence it never sees the Default.asp (or outright ignores it), and this causes, as you might have guessed, a huge pile of duplicate content. Normally you would just place some user-agent detection in the page head and either throw Google a 301 or a rel=canonical. But since I only have access to JavaScript and HTML in the page head, this cannot be done. I'm kinda running out of options quickly, so if anyone has an idea as to how the BEEP! I get Google to index the right domains for the right devices, please feel free to comment. 🙂 Any and all ideas are more than welcome.
Intermediate & Advanced SEO | | ReneReinholdt0
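For context, the root-level user-agent split described above (the part that does work for direct visitors) would look roughly like this in Classic ASP (a sketch only; the domains are placeholders and the substring checks are deliberately simplistic):

    <%
    ' Default.asp: crude device split at the site root (sketch)
    Dim ua
    ua = LCase(Request.ServerVariables("HTTP_USER_AGENT"))

    ' Googlebot-Mobile's UA contains "googlebot-mobile";
    ' most mobile browsers advertise "mobile" in some form
    If InStr(ua, "googlebot-mobile") > 0 Or InStr(ua, "mobile") > 0 Then
        Response.Redirect "http://m.example-shop.com/shop/"  ' 302 by default
    Else
        Response.Redirect "http://www.example-shop.com/shop/"
    End If
    %>

As the question notes, this only helps traffic that actually enters through the root; it does nothing for bots landing directly on indexed inner pages, which is exactly the gap being asked about.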