Long Meta Descriptions
-
I want to create a template for Meta titles, descriptions and keywords on my website for old news and minor pages in order to get some long tail traffic from them.
The only template I can think to use for the descriptions takes the first sentence of the news article (which is often above 160 characters). Since these are minor pages, how big of a problem is that?
Thanks!
-
The key is to make the meta descriptions unique. Since the pages are not important, a rule that grabs the first sentence would work fine (as you suggest).
Another refinement (so it reads better) is to limit it to, say, 135 characters, add some logic to ensure you don't cut the sentence mid-word, and then append "... Read more here" or something else relevant to your site.
So the template could look like:
"<first 135 characters of the first sentence, not ending mid-word>... Read more here."
(Even if it does end mid-word, it would still be better than having duplicated descriptions, since the pages aren't important.)
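A rule like that is simple to script. Here is a minimal sketch in Python (the function name is my own, and I've assumed the suffix is skipped when the sentence already fits; the 135-character limit and "... Read more here" suffix come from the suggestion above):

```python
def make_meta_description(first_sentence, limit=135, suffix="... Read more here."):
    """Build a meta description: truncate at a word boundary, then append a call to action."""
    text = first_sentence.strip()
    if len(text) <= limit:
        return text  # already fits; no truncation or suffix needed
    truncated = text[:limit]
    cut = truncated.rfind(" ")  # back up to the last complete word
    if cut > 0:
        truncated = truncated[:cut]
    return truncated + suffix
```

If the sentence has no spaces at all, this falls back to a hard cut at the limit, which (as noted above) is still better than a duplicated description.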
Hope that helps!
-
According to Google, you don't need to use the meta keywords tag; Google ignores it for ranking.
I would make sure my description tag is limited to about 125 characters, not more.
The length of the page should not impact the description tag. As long as it is descriptive of the page's content, you're good.