More pages or fewer pages for best SEO practice?
-
Hi all,
I would like to know the community's opinion on this. Will a website with more pages or fewer pages rank better? Websites with more pages have the advantage of more landing pages for targeted keywords. Fewer pages have the advantage of concentrating PageRank across a limited set of pages, which might result in better rankings for those pages. I know this is highly dependent on the situation; I mean to get answers for an ideal website.
Thanks,
-
I generally agree with George and Nicholas.
I also think that the strength of your site vs the strength of the competition is important - along with the difficulty of the keywords.
If you are going after long-tail keywords against weaker competition, then six shorter content pages targeting six different keywords would be best. However, if the competition is strong and the keywords are difficult, then one big kickass page will have the best chance.
Finally, presenting one comprehensive article with all of your text and photos on one page is better for link-earning than breaking it up into six short pages.
This is a complex question asked simply. We could also consider the ad impression opportunity of getting the visitor to click through six pages.
-
Agree with George, quality over quantity is what's important. With that being said, the more pages you have, the more keywords you can potentially target from your website. So while keeping the content quality high, you want to have a decent number of pages that are each optimized for the different keyword phrases you want to rank for.
-
In my humble opinion it is a matter of value. I believe that a 3,000-word page targeting 6 keywords has (and is perceived by Google as having) more value than 6 pages with 500 words each.
Related Questions
-
Google Search Console Not Indexing Pages
Hi there! I have a problem that I was hoping someone could help me with. In Google Search Console, my website does not seem to be indexed well. In fact, even after rectifying the problems that Moz's on-demand crawl pointed out, the pages still do not become "valid". Google has listed some of the excluded pages, and I have rectified some of the issues, but it doesn't seem to be helping. However, when I submitted the sitemap, it said that the URLs were discoverable, so I am not sure why they can be discovered but are not deemed "valid". I would sincerely appreciate any suggestions or insights as to how I can go about solving this issue. Thanks!
Algorithm Updates | Chowsey
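A quick way to narrow down problems like this is to verify that every URL in the submitted sitemap returns a 200, isn't redirected, and doesn't carry a noindex signal, since "discovered but not indexed" often traces back to one of those. Here is a rough Python sketch of such a check; the sitemap URL is a placeholder, not the site from the question, and the noindex test is a crude substring scan.

```python
# Rough diagnostic sketch (sitemap URL is a placeholder): for each sitemap URL,
# report the final status code, whether it redirects, and whether a noindex
# signal appears - common reasons Search Console leaves pages out of the index.
import xml.etree.ElementTree as ET

import requests  # third-party; pip install requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Pull every <loc> entry out of the sitemap.
sitemap = requests.get(SITEMAP_URL, timeout=10)
urls = [loc.text for loc in ET.fromstring(sitemap.content).findall(".//sm:loc", NS)]

for url in urls:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    redirected = resp.url.rstrip("/") != url.rstrip("/")
    # Crude noindex check: X-Robots-Tag header plus a substring scan of the HTML.
    noindex = (
        "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
        or "noindex" in resp.text.lower()
    )
    flags = []
    if redirected:
        flags.append(f"redirects to {resp.url}")
    if noindex:
        flags.append("possible noindex")
    print(f"{url} -> {resp.status_code}" + (f" ({', '.join(flags)})" if flags else ""))
```
-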
Seeing some really bad sites that ranked in my niche years ago reaching the 1st page
It started after the update: about 4 websites from the 1st page dropped to the 2nd, and 4 other sites just popped back onto the 1st page. The bad part is that the DA and inbound links of these sites are really poor. So my question is, must we just wait this out until Google realises how bad these sites are? Some of them haven't been updated in years, have broken links, and so on. All these sites have going for them is the age of their domains, but can that really be the main factor behind these results?
Algorithm Updates | johan8
-
Have you ever seen a page get indexed from a website that is blocked by robots.txt?
Hi all, We use the robots.txt file and meta robots tags to block bots from crawling a website or individual pages. Usually robots.txt is used site-wide, with the expectation that none of the pages will get indexed. But there is a catch: a page can still be indexed by Google even when the site is blocked by robots.txt, because the crawler may find a link to the page somewhere else on the internet, as stated in the last paragraph here. I wonder whether this is really how some web pages have ended up indexed. And if we use meta robots tags at the page level, do we still need to block in the robots.txt file? Can we use both techniques at the same time? Thanks
Algorithm Updates | vtmoz
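For anyone who wants to see the difference between the two mechanisms on a live URL, here is a rough Python sketch; the domain and page are hypothetical. The point it illustrates: robots.txt controls crawling, while a meta robots noindex or X-Robots-Tag header controls indexing, and Google can only see a noindex on a page it is allowed to crawl, so combining Disallow with noindex on the same URL tends to defeat the purpose.

```python
# Minimal sketch (hypothetical site and page, not from the thread).
from urllib.robotparser import RobotFileParser

import requests  # third-party; pip install requests

SITE = "https://www.example.com"           # hypothetical site
PAGE = f"{SITE}/private/report.html"       # hypothetical page

# 1) robots.txt controls CRAWLING only. A disallowed URL can still end up
#    indexed (URL-only, no snippet) if Google finds links to it elsewhere.
rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()
print("Googlebot allowed to crawl the page:", rp.can_fetch("Googlebot", PAGE))

# 2) Meta robots / X-Robots-Tag control INDEXING, but Google only sees them
#    on pages it is allowed to crawl. So Disallow + noindex on the same URL
#    is self-defeating: the noindex never gets read.
resp = requests.get(PAGE, timeout=10)
noindex_header = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
noindex_html = "noindex" in resp.text.lower()  # crude substring check
print("noindex via X-Robots-Tag header:", noindex_header)
print("noindex somewhere in the HTML (crude check):", noindex_html)
```
-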
Any suggestions why I would rank #1 on Google but be on the 3rd page for Bing/Yahoo?
Currently the site I'm working on ranks very well on Google, but when we cross-reference Yahoo and Bing we are basically in the graveyard of keywords (bottom of the 3rd page). Why would that be? Any suggestions or things I can do to fix or troubleshoot this? Here are some things I can think of that might affect it, but I'm not sure: 1. Our sitemap hasn't been updated in months and URL changes have been made. 2. On-site optimization for Yahoo and Bing is different from Google? 3. Bing is just terrible in general? 4. Inbound links? This one doesn't make sense, though, unless the search engines weight links in different ways. All jokes aside, I would really appreciate any help, as the few top-ranked keywords we have account for about 30% of our organic traffic, and ranking as we should across all platforms would have a huge effect on the company. Thanks!
Algorithm Updates | JemJemCertified
-
Question About: Redirecting Old Pages to New & More Relevant Ones
I'm looking over a friend's website, which used to have great natural rankings for some big keywords. Those rankings & CTRs have dropped a lot, so the next thing I checked was the top-selling brand & category pages. It seems like every year or so a new page was constructed for each brand... Many of these have high-quality, natural inbound links. However, the pages no longer have products and simply look outdated. I'm trying to figure out whether they should redirect all the old pages to a new URL that is more SEO-friendly. Example links: http://www.xyz.com/nike2004.html , http://www.xyz.com/nike-spring2006.html , http://www.xyz.com/2011-nike-shoes.html - (have quality inbound links, bad content) .... Basically, would it be advantageous to redirect all of these example pages to a new one that will be more permanent, e.g. http://www.xyz.com/nike-shoes.html? I'm also looking at about 15 brands and maybe 100+ old/outdated URLs, so I wasn't sure if I should do this & to what extent, considering many of the brand pages do rank, but not as well as they should... Any input would help, thanks
Algorithm Updates | Southbay_Carnivorous_Plants
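The usual way to consolidate pages like these is a permanent (301) redirect from each old URL to the new evergreen one, so the inbound link equity is passed along. Below is a rough sketch of the idea in Python using Flask; the paths are the hypothetical examples from the question, and in practice the rules would more likely live in the site's server config (.htaccess, nginx, etc.).

```python
# Rough sketch: 301-redirect outdated brand URLs to one evergreen page.
# Assumes a Flask-served site; the paths are the hypothetical ones above.
from flask import Flask, redirect

app = Flask(__name__)

# Each outdated brand page points at the permanent, evergreen URL.
OLD_TO_NEW = {
    "/nike2004.html": "/nike-shoes.html",
    "/nike-spring2006.html": "/nike-shoes.html",
    "/2011-nike-shoes.html": "/nike-shoes.html",
}

def make_redirect(target):
    def _view():
        # 301 = moved permanently, so search engines consolidate the old
        # pages' link signals onto the new URL over time.
        return redirect(target, code=301)
    return _view

# Register one 301 rule per outdated URL.
for old_path, new_path in OLD_TO_NEW.items():
    app.add_url_rule(old_path, endpoint=f"legacy{old_path}", view_func=make_redirect(new_path))

if __name__ == "__main__":
    app.run()
```
-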
New .TLD domains - SEO Value?
Hi all, I see that a new wave of domains is to be released soon. We are not talking 1 or 2 new extensions, but more like 700 new extensions at the TLD level. What are your views on their SEO value? Thanks!
Algorithm Updates | bjs2010
-
Google.co.uk vs pages from the UK - anyone noticed any changes?
We've started to notice some changes in the rankings of Google UK and Google pages from the UK. Pages from the UK have always typically ranked higher; however, it seems like these are slipping, and Google UK pages (pages from the web) are climbing. We've noticed a similar thing happening in the Bing/Yahoo algorithm as well. Just wondered if anyone else has noticed this? Thanks
Algorithm Updates | Digirank
-
Stop Google indexing CDN pages
Just when I thought I'd seen it all, Google hits me with another nasty surprise! I have a CDN to deliver images, JS and CSS to visitors around the world. I have no links to static HTML pages on the site, as far as I can tell, but someone else may have - perhaps a scraper site? Google has decided the static pages it was able to access through the CDN have more value than my real pages, and it seems to be slowly replacing my pages in the index with the static pages. Anyone got an idea on how to stop that? Obviously, I have no access to the static area, because it is in the CDN, so there is no way I know of to have a robots file there. It could be that I have to trash the CDN and change it to only allow the image directory, and maybe set up a separate CDN subdomain for content that only contains the JS and CSS? Have you seen this problem and beaten it? (Of course, the next thing is Roger might look at Google results and start crawling them too, LOL) P.S. The reason I am not asking this question in the Google forums is that others have asked it many times over the past 5 months and nobody at Google has bothered to answer, and nobody who did try gave an answer that was remotely useful. So I'm not really hopeful of anyone here having a solution either, but I expect this is my best bet because you guys are always willing to try.
Algorithm Updates | loopyal
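One approach that often gets suggested for this kind of situation (not something stated in the thread itself) is to have the origin send an X-Robots-Tag: noindex header on its HTML responses, since most CDNs pass origin headers through to the edge and you usually can't drop a robots.txt into the CDN zone directly. Below is a rough Python sketch, with hypothetical hostnames, for checking what the CDN hostname is currently serving: whether it exposes a robots.txt of its own and whether its pages carry any noindex signal.

```python
# Minimal diagnostic sketch (hostnames and paths are hypothetical, not from the thread):
# check what the CDN hostname serves and whether it sends any noindex signal.
import requests  # third-party; pip install requests

CDN_HOST = "https://cdn.example.com"                 # hypothetical CDN hostname
SAMPLE_PATHS = ["/robots.txt", "/", "/index.html"]   # pages seen indexed from the CDN

for path in SAMPLE_PATHS:
    url = f"{CDN_HOST}{path}"
    resp = requests.get(url, timeout=10)
    # If the origin sends "X-Robots-Tag: noindex" on its HTML responses and the
    # CDN passes headers through, Google should eventually drop the CDN copies.
    print(url)
    print("  status:", resp.status_code)
    print("  content-type:", resp.headers.get("Content-Type", "(none)"))
    print("  X-Robots-Tag:", resp.headers.get("X-Robots-Tag", "(none)"))
```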