Is it necessary to have unique H1's for pages in a pagination series (i.e. blog)?
-
One content issue we're experiencing is duplicate H1s across pages in a pagination series (e.g. a blog). Does each page within the pagination need a unique H1 tag, or, since each page has unique content (different blog snippets on each page), is it safe to disregard this?
Any insight would be appreciated. Thanks!
-
Read what EGOL wrote. It depends upon the nature of your blog pagination.
There are a few reasons you could have pagination within the blog area of your site:
1. Your articles have next buttons, and different parts of the article are split across multiple URLs. The content across the paginated elements is distinct.
2. Your post feeds are paginated purely so people can browse to pages of 'older posts' and dig back into your archives.
3. Your blog posts exist on a single URL, but when users comment on your posts, your individual posts gain paginated iterations so that users can browse multiple pages of UGC comments (as they apply to an individual post).
In the case of #2 or #3 it's not necessary to have unique H1s or page titles on such paginated addresses, except under exceptional circumstances. In the case of #1 you should make the effort!
-
-
This is very true for multi-section articles (which span multiple addresses), and less true of articles which live at a single address yet break down into multiple URLs through UGC comment-based pagination.
-
I wouldn't worry about it as search bots "should" understand that these pages are part of a paginated series.
However, I would recommend you ensure that rel="next"/rel="prev" is properly implemented (despite Google announcing that they no longer use it as an indexing signal). Once the pagination is properly implemented and understood, bots will see the pages as a continuation of a series, and therefore will not treat the duplicate H1s as a problem.
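To make the markup pattern concrete, here is a minimal sketch of the rel="prev"/rel="next" link tags a paginated series would carry in its head. The /blog/page/N URL structure is a hypothetical example, and Python is used only to show the logic:

```python
# Sketch: build rel="prev"/rel="next" head tags for page `n` of a
# paginated blog series. The /blog/page/N URL pattern is hypothetical.
def pagination_link_tags(n, total_pages, base="/blog/page/"):
    tags = []
    if n > 1:  # every page except the first points back
        tags.append(f'<link rel="prev" href="{base}{n - 1}">')
    if n < total_pages:  # every page except the last points forward
        tags.append(f'<link rel="next" href="{base}{n + 1}">')
    return tags

# Page 2 of 5 gets both tags; page 1 gets only rel="next".
print(pagination_link_tags(2, 5))
print(pagination_link_tags(1, 5))
```

The first and last pages of the series intentionally get only one tag each, which is what signals the boundaries of the series to a crawler.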
-
In some instances, not using unique H1 and unique title tags is a huge opportunity loss.
Let's say you have a fantastic article about Widgets and you break it up over several pages. The sections of your article are:
- wooden widgets
- metal widgets
- plastic widgets
- stone widgets
... if you make custom H1 and title tags for these pages (and post them on unique URLs) you are going to get your article into a lot more SERPs and haul in a lot more traffic.
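To illustrate the widgets example above, each section page could get its own URL slug, title, and H1 derived from the section name. This is only a sketch; the /widgets/ path and the title format are assumptions, not a recommendation from the answer:

```python
# Sketch: give each section of a multi-page article its own URL slug,
# <title>, and <h1>, so each page can rank for its own term.
sections = ["wooden widgets", "metal widgets", "plastic widgets", "stone widgets"]

def section_page(name):
    slug = name.replace(" ", "-")
    return {
        "url": f"/widgets/{slug}/",          # hypothetical URL pattern
        "title": f"{name.title()} | The Complete Guide to Widgets",
        "h1": name.title(),
    }

for page in map(section_page, sections):
    print(page["url"], "->", page["title"])
```

The point is simply that four distinct title/H1 pairs give the article four chances to appear in the SERPs, instead of one.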
-
Best practice is a unique H1 - only one H1 to describe a page.
-
Don't worry about it. You're not trying to rank your /blog/2 or /blog/17 for any specific terms. Those pages are pretty much for site visitors not the search engines.
As an example, Moz uses the same h1 tag, "The Moz Blog", across all of its paginated blog pages.