Is it necessary to have unique H1s for pages in a pagination series (e.g. a blog)?
-
We're running into duplicate-H1 warnings on pages within a pagination series (e.g. a blog). Does each page within the pagination need a unique H1 tag, or, since each page has unique content (different blog snippets on each page), is it safe to disregard this?
Any insight would be appreciated. Thanks!
-
Read what EGOL wrote. It depends on the nature of your blog pagination.
There are a few reasons you could have pagination within the blog area of your site:
1. Your articles have next buttons and different parts of the article are split across multiple URLs. The content across the paginated elements is distinct.
2. Your post feeds are paginated, purely so people can browse to pages of 'older posts' and see what you wrote way back in your archives.
3. Your blog posts exist on a single URL, but when users comment on your posts, individual posts gain paginated iterations so that users can browse multiple pages of UGC comments (as they apply to an individual post).
In cases #2 or #3 it's not necessary to have unique H1s or page titles on such paginated addresses, except under exceptional circumstances. In case #1 you should make the effort!
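For case #2, a minimal sketch of what the markup on an archive page might look like; the URL, title, and copy here are hypothetical. The H1 stays constant across the series, and the title can simply signal the page number:

```html
<!-- Hypothetical archive page: https://example.com/blog/page/2 -->
<head>
  <title>The Widget Blog | Page 2</title>
</head>
<body>
  <h1>The Widget Blog</h1>
  <!-- ...post snippets for the second page of the archive... -->
</body>
```

The same `<h1>` appears on page 1, page 2, and so on, which is fine for an archive feed since none of these pages is trying to rank for its own term.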
-
-
This is very true for multi-section articles (which span multiple addresses), and less true of articles which live at a single address but break down into multiple pages of UGC comment-based pagination.
-
I wouldn't worry about it as search bots "should" understand that these pages are part of a paginated series.
However, I would recommend you ensure that rel="next"/"prev" is properly implemented (despite Google announcing that they no longer support it). Once the pagination is properly implemented and understood, bots will see the pages as a continuation of a series, and therefore will not see duplicate H1s as a problem.
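For reference, a minimal sketch of rel="next"/"prev" markup on a middle page of a series (the URLs are hypothetical; as noted, Google has said it no longer uses these hints, but other crawlers and tools may):

```html
<!-- In the <head> of the hypothetical page https://example.com/blog/page/2 -->
<link rel="prev" href="https://example.com/blog/page/1">
<link rel="next" href="https://example.com/blog/page/3">
```

The first page of the series would omit `rel="prev"`, and the last page would omit `rel="next"`.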
-
In some instances, not using unique <h1> and unique <title> tags is a huge opportunity loss.
Let's say you have a fantastic article about Widgets and you break it up over several pages. The sections of your article are:
- wooden widgets
- metal widgets
- plastic widgets
- stone widgets
... if you make custom <h1> and <title> tags for these pages (and post them on unique URLs) you are going to get your article into a lot more SERPs and haul in a lot more traffic.
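A sketch of what two of those section pages could look like; the URLs and title wording are hypothetical, the point being that each section gets its own descriptive <title> and <h1>:

```html
<!-- Hypothetical URL: https://example.com/widgets/wooden-widgets -->
<head>
  <title>Wooden Widgets: Types, Finishes, and Care</title>
</head>
<body>
  <h1>Wooden Widgets</h1>
</body>

<!-- Hypothetical URL: https://example.com/widgets/metal-widgets -->
<head>
  <title>Metal Widgets: Alloys, Durability, and Uses</title>
</head>
<body>
  <h1>Metal Widgets</h1>
</body>
```

Each page can now appear in SERPs for its own section-specific queries, rather than all pages competing under one generic article title.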
-
Best practice is a unique H1 - only one H1 to describe a page.
-
Don't worry about it. You're not trying to rank your /blog/2 or /blog/17 for any specific terms. Those pages are pretty much for site visitors not the search engines.
As an example, Moz has the same h1 tag on all their blog pages.
All of its paginated blog URLs have "The Moz Blog" as the h1 tag.