Many pages with small unique content vs. one page with big content
-
Dear all,
I am redesigning some areas of our website, eurasmus.com, and it is not clear to us which is the best option to follow. On our site we have a city area, e.g. www.eurasmus.com/en/erasmus-sevilla, which we are going to redesign, and a guide area where we explain about the city, etc.: http://eurasmus.com/en/erasmus-sevilla/guide/, all with unique content.
The thing is that at this point, due to lack of resources, our guide is not really deep, and we believe that creating a page with 500 characters of text for every area (transport, etc.) does not add extra value for users. It is not really user friendly either.
On the other hand, these pages are getting some long-tail results, although not for our target keyword; they rank for things like "transport in sevilla", while our target keyword would be "erasmus sevilla".
When redesigning the city area, we have to choose between:
a) www.eurasmus.com/en/erasmus-sevilla -> with all the content on one page, about 2,500 characters of unique text.
b) www.eurasmus.com/en/erasmus-sevilla -> with a better amount of content and a nice redesign, but keeping the guide pages.
What would you choose?
Let me know what you think.
Thanks!
-
Wow, Jose, you got a whole audit from Luis.
1. Luis makes a good point about Seville vs Sevilla. When you're trying to target a region other than your own, make sure that you change the location in Google Keyword Planner. Seville is the English version of Sevilla (which I know sounds strange, but we also call your country Spain rather than España).
2. Both subdomains and subfolders can effectively designate different languages. If you've made the call to use subfolders, that's fine. It's probably what I would have done, too, since that means the Domain Authority will transfer easily.
4 & 5. Keyword repetition in URLs isn't necessarily bad in your case, because it's caused by a lot of subfolders.
It seems like there's been some debate here on more subfolders vs. fewer: there isn't a hard and fast rule about it. If you have more subfolders, the pages higher up in the structure tend to get more link equity out of the deal and rank better. That takes away from deeper pages, though, which are presumably targeting the most important words. If you use fewer subfolders, the link equity is distributed more evenly, which means higher-level pages will be weaker and deeper pages will be stronger. In your case, I don't know the answer, since I don't know how competitive the keywords are at different levels. If I were your SEO, I'd tell you to stick with your current URL structure, because moving pages to new URLs tends to cause a big knock in rankings for a while.
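If you ever do consolidate the guide pages or otherwise move URLs, the standard safety net is a one-to-one 301 map from each old URL to its replacement, so the link equity and rankings follow the content. A rough sketch in Apache .htaccess terms (mod_alias), using hypothetical paths based on the URLs mentioned in this thread rather than a drop-in config:
Redirect 301 /en/erasmus-sevilla/guide/transport /en/erasmus-sevilla
RedirectMatch 301 ^/en/erasmus-sevilla/guide/ /en/erasmus-sevilla
The first line shows the one-to-one style (one rule per retired page, pointing at the most relevant new URL); the RedirectMatch regex is just a catch-all so anything left under /guide/ doesn't end up as a 404.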
-
Hi Jose,
I like your current setup, with more pages at 500 words. 500 words doesn't make for thin content from a search engine's perspective, and it means that you're delivering a more targeted result for searchers; if you don't have separate pages and they search for "restaurants in Seville," they're not going to be thrilled when they land on your mega guide page and have to hunt around for what they're looking for.
That said, you may want to change the language on the main Seville page so you don't call these "detailed" city guides.
Good luck!
Kristina
-
Hello again,
1. Seville has 22,000 searches in the UK, but very few people search for Sevilla.
2. It depends; I prefer subdomain.domain.com instead of subfolders... I only found English content on your site. Even if you use /en/, you need a main language (that could be English), and the main language does not need a subfolder: www.domain.com (for English), then www.domain.com/es (for Spanish), and so on. But well, it is a personal decision (either way, see the hreflang sketch right after this list).
3. OK.
4. You didn't get my point. Please read my message and my example carefully (since I checked your site carefully). It is very, very important that you don't repeat similar or identical keywords in the same URL. In my earlier example it was "Seville + Sevilla" and "university + universities" in one single URL.
5. Again, the best is to have as few subfolders as possible! A URL like www.eurasmus.com/erasmus-seville-city-guide is much nicer for Google than www.eurasmus.com/erasmus-sevilla/city-guide.
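On point 2: whichever structure you end up with, subfolders or subdomains, hreflang annotations in the head of each page help Google show the right language version to the right users. A minimal sketch, where the /es/ and /fr/ URLs are assumed examples of the subfolder setup you already use, not your real pages:
<link rel="alternate" hreflang="en" href="http://eurasmus.com/en/erasmus-sevilla" />
<link rel="alternate" hreflang="es" href="http://eurasmus.com/es/erasmus-sevilla" />
<link rel="alternate" hreflang="fr" href="http://eurasmus.com/fr/erasmus-sevilla" />
<link rel="alternate" hreflang="x-default" href="http://eurasmus.com/en/erasmus-sevilla" />
Each language version should carry the full set (including a reference to itself), and x-default tells Google which page to show searchers who match none of the listed languages.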
You can keep only one page if that is your strategy, or keep both, since they have different content and context. It's up to you, and if you apply a good SEO strategy I don't see any problem in having two pages.
About the long tail, I already explained this before. You may be ranking now for non-competitive keywords on those pages (check the keyword difficulty for your pages/keywords). I recommend focusing on why you are not ranking well for the pages/keywords you do want, and optimizing your strategy from there.
Hope this helps!
Luis
-
Hi Luis!
Thank you for your message.
I will try to answer all your comments.
1. We have already done the keyword research for the cities and used the variant with more searches.
I will recheck it to make sure it is applied everywhere.
2. We are now publishing Spanish and 7 more languages; that is why we have the /en.
We decided to go for /fr, /it, etc. As far as I know there is not a relevant difference, we believe.
Am I right?
3. I agree. That is why we are redesigning (it is also not user friendly at all).
4. It is eurasmus.com, our brand name, which is not the same word as erasmus. They are different words. Whether it was a good brand choice for SEO is another chapter; that has been a long discussion in our company for a long time.
5. We will study how to make it better!
Concerning my direct question: would you recommend putting all the guide content on the erasmus-sevilla home page and deleting the guide area, or would you leave the guide and just add more content to the home page?
The main thing is that we get results for long-tail keywords, but those keywords do not really generate conversions. What do you think?
-
Hi Jose,
Some advice and questions:
- Have you done a keyword analysis before? How many searches do you get for your supposed "focus keywords"? After checking a little, I see the word "Seville" performs much better than "Sevilla"; foreign users call it that.
- Don't abuse URL sublevels: /en/erasmus-sevilla/guide/... (You don't need the /en/ since your site is only in English. If you plan to translate into new languages, you can use subdomains for this: es.eurasmus.com, fr.eurasmus.com, ...)
- Add much more content to your landing page (/erasmus-sevilla is quite poor in content).
- Don't repeat keywords in the URL: http://eurasmus.com/en/erasmus-sevilla/universities/university-of-seville (here you have two repeated keywords, man!)
- Make things simpler! Some ideas:
- www.eurasmus.com/erasmus-spain/seville-city-guide
- www.eurasmus.com/erasmus-spain/seville-city-transport
- www.eurasmus.com/erasmus-spain/seville-universities
- www.eurasmus.com/erasmus-spain/madrid-city-guide
- www.eurasmus.com/erasmus-belgium/brussels-city-guide
Long-tail results for different keywords are normal; that happens. Have you tested with the Moz Grade tool whether your pages need some improvements for the related keywords? That would be necessary too.
By the way, I'm Spanish, so don't hesitate to send me a PM if you need more help.
Luis