Location Pages on Website vs. Landing Pages
-
We have been having a terrible time in the local search results for 20+ locations. I have Google Places set up and all, but we decided to create location pages on our site for each location - a brief description and content optimized for our main service. The path would be something like .com/location/example.
One option that has come up is to create landing pages / "mini websites" that would probably live at location-example.url.com.
I believe the latter option, mini sites for each location, would be a bad idea, as those kinds of tactics have been considered spammy in the past.
What are your thoughts, and do you have any resources I can use to convince my team of the best practice?
-
Hi KJ,
Agree with the consensus here that building mini sites is not the right approach. Take whatever energy you would have put into developing these and channel it into making the landing pages for your locations the best in their industry/towns. I was just watching a great little video by Darren Shaw in which this is one of the things he covers. Might be worth sharing with your team:
http://www.whitespark.ca/blog/post/70-website-optimization-basics-for-local-seo
And earlier this year, Phil Rozek penned some pretty fine tips on making your pages strong:
I am curious about one element of your original post. You mention, "We have been having a terrible time in the local search results for 20+ locations." I wasn't sure whether you were saying that you've never done well in them, that you were doing well in them until something changed (such as the universal rollout of Local Stacks), or something else. If it's the Local Stacks scenario, I would guess that a huge number of businesses are now struggling to cope with the fact that there are only three spots to rank for any keyword, necessitating greater focus on lower-volume keywords/categories, organic results, and paid results. Everybody but the top three businesses is now in this boat. Very tough.
-
Hi KJ,
First things first: do you have a physical address for each location, and are these set up in Google My Business? I doubt you have premises in each location, so ranking for all the areas is going to be an uphill task.
Google is smart and knows whether you have physical premises in the targeted location; after all, it's all about delivering highly relevant results to its users. Let's say, for example, you're an electrician and a user searches for "electrician in Sheffield" - realistically, if you only have premises in Leeds, it's going to be difficult to rank above a company that is actually located in Sheffield.
I would first target 2-3 of your primary locations and focus on building 10x content. I would aim to write 1,000+ words of completely unique content for each page whilst focusing on your target keywords, but be natural and don't keyword stuff. Put reviews from customers in that specific area on the landing page and build citations from local directories.
Again, you can't build citations unless you have physical premises in the location. Trust me, I've done this for years for a roofing company and it's taken some time to see results. He's #1 for the city he is located in, but for other cities it's a very difficult task. Writing about the same service for each location is a daunting task too; you could consider outsourcing the content to a service like Great Content if you're stuck for ideas. It's a low-budget solution and will save you mountains of time.
I would also use folders and not subdomains. Build a 'service areas' page and keep each location in a folder beneath it - a rough sketch of the URL pattern is below.
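Every domain, town, and service name in this sketch is hypothetical; it just illustrates the folder pattern:

```python
# Rough sketch of a folder-style (not subdomain) structure for location pages.
# All values here are made-up placeholders, not the actual roofing site.
DOMAIN = "https://www.example-roofing.co.uk"

def slugify(name: str) -> str:
    """Lowercase a name and swap spaces for hyphens, e.g. 'West Yorkshire' -> 'west-yorkshire'."""
    return name.lower().replace(" ", "-")

def location_url(town: str, service: str) -> str:
    """Build a folder URL under /service-areas/ for one town and service."""
    return f"{DOMAIN}/service-areas/{slugify(town)}/{slugify(service)}/"

if __name__ == "__main__":
    for town in ["Leeds", "Sheffield", "West Yorkshire"]:
        print(location_url(town, "roof repairs"))
    # -> https://www.example-roofing.co.uk/service-areas/leeds/roof-repairs/
    #    https://www.example-roofing.co.uk/service-areas/sheffield/roof-repairs/
    #    https://www.example-roofing.co.uk/service-areas/west-yorkshire/roof-repairs/
```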
-
Hello KJ,
You absolutely don't want to begin creating subdomains for different locations. That will split your link flow across multiple subdomains (rather than consolidating it within a single domain).
It sounds like you are attempting a silo structure for your website (multiple locations targeting the same keyword), but this can be seen as keyword stuffing if performed incorrectly. Using multiple pages to rank for a single keyword is problematic, as it raises both Panda and Penguin red flags. What you want to do is begin ranking for different keywords, or at least ensure that the content for each of these location pages is unique and sufficiently long (500+ words) to avoid arousing suspicion.
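To make the "unique and sufficiently long" check concrete, here is a minimal sketch that flags location pages which look too short or too templated. The thresholds, URLs, and page text below are placeholders, not anything Google publishes:

```python
# Flag location pages whose copy is too short or too similar to another page.
from itertools import combinations

MIN_WORDS = 500      # rough floor suggested above
MAX_OVERLAP = 0.60   # arbitrary threshold for "looks templated"

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word chunks for comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity of the two pages' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / max(1, len(sa | sb))

pages = {
    "/locations/leeds/": "Leeds page copy goes here ...",
    "/locations/sheffield/": "Sheffield page copy goes here ...",
}

for url, text in pages.items():
    if len(text.split()) < MIN_WORDS:
        print(f"{url}: only {len(text.split())} words")

for (u1, t1), (u2, t2) in combinations(pages.items(), 2):
    if overlap(t1, t2) > MAX_OVERLAP:
        print(f"{u1} and {u2} look near-duplicate ({overlap(t1, t2):.0%} overlap)")
```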
Your site structure sounds like it is okay. For example, a silo we put in place for one of our clients followed this pattern:
domain.com/country/region/city/service
We hit about 15 cities using this tactic, and they have been sitting on the first page for the last year or so. We also built sufficient links to the home page and relevant pages and ensured that our technical SEO was spotless, so perhaps these are the areas where you might engage your team to move forward.
If you want to know more about our process, feel free to touch base and I will provide what advice I can.
Hope this helps and best of luck moving forward!
Rob
-
Right. You will not beat the other folks with the subdomain approach. You are getting beaten because your competitors are taking the time to make better content in a niche. Find a way to get better content on those pages and mark them up with schema to make the info more readable to the search engines and possibly earn an enhanced listing in the SERPs.
We went through a site relaunch and the review schema on our location pages got messed up. It did not impact our rankings, but it did impact click-through from the search engines - none of the stars were showing up in the SERPs due to the schema goof-up. We got the schema fixed and traffic was back up.
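For anyone who wants to see what that markup looks like in practice, here is a minimal sketch that generates LocalBusiness JSON-LD with an aggregate rating for one location page. Every name, address, and number in it is an invented placeholder, so swap in real values and run the output through a structured data testing tool:

```python
# Rough sketch: LocalBusiness JSON-LD for a single location page.
# All values below are made-up placeholders.
import json

location = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Co - Springfield",
    "url": "https://www.example.com/locations/springfield/",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
        "addressCountry": "US",
    },
    "aggregateRating": {  # this is the piece that drives the review stars
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "38",
    },
}

# Drop the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(location, indent=2))
```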
This link will point you toward the relevant Moz resources:
https://moz.com/community/q/moz-s-official-stance-on-subdomain-vs-subfolder-does-it-need-updating
If you are happy with my response, please feel free to mark it as a "Good Answer" - thanks!
-
I agree with you. Some marketing people believe that the reason we cannot beat out smaller companies is that we are too diverse in our services. We do great with niche keywords and markets, but we are being beaten by companies that focus on only one of our key services. That is why they thought subdomains would do better. I remember Rand posting something on subdomains vs. subfolders, but I cannot find the original source.
Thanks for your answer...
-
This is similar to the question of whether a blog should be on a subdomain (blog.website.com) vs. in a folder (website.com/blog).
Most people agree that the folder is the better option: every blog post that earns links builds your domain authority, and, generally speaking, a rising tide lifts all ships.
You would run into the same issue with the option of setting up subdomains for each location. You would also end up having to deal with separate webmaster accounts for each, etc. I don't think subdomains are the solution. I run a site with thousands of locations using a folder structure, and the business pages rank well for a given location when you search on the location's name, so I know it works and that it can be managed at scale.
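To make the consolidation point concrete: every location subdomain is a separate host that would eventually have to be mapped back into a folder with 301s anyway. A rough sketch of that mapping, with purely hypothetical hostnames and paths:

```python
# Rough sketch: map old location-subdomain URLs to folder URLs on one domain,
# the kind of list you'd need to set up 301 redirects. All URLs are made up.
from urllib.parse import urlparse

MAIN_DOMAIN = "www.example.com"

def folder_equivalent(subdomain_url: str) -> str:
    """e.g. https://springfield.example.com/roof-repairs/
       -> https://www.example.com/locations/springfield/roof-repairs/"""
    parts = urlparse(subdomain_url)
    location = parts.hostname.split(".")[0]  # "springfield"
    return f"https://{MAIN_DOMAIN}/locations/{location}{parts.path}"

old_urls = [
    "https://springfield.example.com/roof-repairs/",
    "https://shelbyville.example.com/",
]

for old in old_urls:
    print(f"301: {old} -> {folder_equivalent(old)}")
```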
I would get back to looking at any technical issues you have and your on-page options for these pages. Is there anything more you can do to make these pages 10x better than any other page on the net for those locations?
Good luck!
Related Questions
-
Category Page as Shopping Aggregator Page
Hi, I have been reviewing the info from Google on structured data for products (https://developers.google.com/search/docs/data-types/products) and started to ponder. Here is the scenario: you have a category page that lists 8 products, and each product shows an image, price, and review rating. As the individual product pages are already marked up, they display rich snippets in the SERPs. I wonder how we can get rich snippets for the category page. Google suggests a markup for shopping aggregator pages that list a single product, along with information about different sellers offering that product, but nothing for categories. My question is this: can we use the shopping aggregator markup for category pages to achieve the coveted rich results (from/to price, average reviews)? Keen to hear from anyone who has thoughts on the matter or has already tried this.
Intermediate & Advanced SEO | Alexcox6
-
Can noindexed pages accrue page authority?
My company's site has a large set of pages (tens of thousands) that have very thin or no content. They typically target a single low-competition keyword (and typically rank very well), but the pages have a very high bounce rate and are definitely hurting our domain's overall rankings via Panda (quality ranking). I'm planning on recommending that we noindex these pages temporarily and reindex each page as resources allow us to fill in the content. My question is whether an individual page will be able to accrue any page authority for its target term while noindexed. We DO want to rank for all those terms, just not until we have the content to back them up. However, we're in a pretty competitive space, up against domains that have been around a lot longer and have higher domain authorities. Like I said, these pages rank well right now, even with thin content. The worry is that if we noindex them while we slowly build out content, our competitors will get the edge on those terms (with their subpar but continually available content). Do you think Google will give us any credit for having had the page all along, just not always indexed?
Intermediate & Advanced SEO | THandorf
-
Replace dynamic parameter URLs with static landing page URLs - faceted navigation
Hi there, I've got a quick question regarding faceted navigation. If a specific filter (facet) seems to be quite popular with visitors, does it make sense to replace a dynamic URL, e.g. http://www.domain.com/pants.html?a_type=239, with a static, more SEO-friendly URL, e.g. http://www.domain.com/pants/levis-pants.html, by creating a proper landing page for it? I know that it is nearly impossible to replace all variations of these parameter URLs with static ones, but does it generally make sense to do this for the most popular facets chosen by visitors? Or does this cause any issues? Any help is much appreciated. Thanks a lot in advance.
Intermediate & Advanced SEO | ennovators
-
How to rank for a location/country without having a physical address in that location/country
How do I go about it if my physical address (office) is in Country A but I want to rank my website in Countries B, C, and D (without having an office or physical address in Countries B, C, and D)? I am aware of people setting up virtual offices in other countries/cities and adding them to Google Places/Maps with toll-free phone numbers, but I don't wish to do any of that. I know Google will catch up with this one day or another and punish me hard for trying to play games with it. Is there a way to rank a website in another country without actually having a physical location there? If yes, please guide me on how to go about it.
Intermediate & Advanced SEO | KS__
-
Should we 301 redirect old events pages on a website?
We have a client that has an events category section that is filled to the brim with past-events webpages. Another issue is that these old events webpages all contain duplicate meta description tags, so we are concerned that Google might be penalizing our client's website for this issue. Our client does not want to create specialized meta description tags for these old events pages. Would it be a good idea to 301 redirect these old events landing pages to the main events category page to pass along link equity and remove the duplicate meta description issue? This seems drastic (we even noticed that searchmarketingexpo.com is keeping their old events pages). However, it seems like these old events webpages offer little value to our website visitors. Any feedback would be much appreciated.
Intermediate & Advanced SEO | RosemaryB
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages:
- Does prevent vehicle details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).
Noindex disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no tag). The solution would have to involve an X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this Stack Overflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL advantages:
- By using # for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache stuff.
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt - the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (). Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
-
Targeting local areas without creating landing pages for each town
I have a large ecommerce website which is structured very much for SEO as it existed a few years ago, with a landing page for every product/town nationwide (it's a lot of pages). Then along came Panda... I began shrinking the site in Feb last year in an effort to tackle duplicate content. We had initially used a template, only changing the product/town name. My first change was to reduce the number of pages by half by merging the top two categories, as they are semantically similar enough not to need their own pages. This worked a treat: traffic didn't drop at all, and the remaining pages are bringing in the desired search terms for both of these products. Next, I rewrote the content for every product to ensure each is now as individual as possible. However, with 46 products, and each of those generating a product/area page, we still have a heap of duplicate content. Now I want to reduce the town pages. I have already started writing content for my most important areas, again to make these pages as individual as possible. The problem I have is that nobody can write enough unique content to target every town in the UK via an individual page (multiplied by 46 products), so I want to reduce these too. QUESTION: If I have a single page for "Croydon", will mentioning other surrounding areas on this page, such as Mitcham, be enough to rank this page for both towns? I have approx. 25 Google local place/map listings and growing, and am working from these areas outwards. I want to bring the site right down to about 150 main area pages to tackle all the duplicate content, but obviously don't want to lose my traffic for so many areas at once. Any examples of big sites that have reduced in size since Panda would be great. I have a headache... Thanks, community.
Intermediate & Advanced SEO | Silkstream
-
301 vs 410 redirect: What to use when removing a URL from the website
We are in the process of determining how to handle URLs that are completely removed from our website. Think of these as listings that have an expiration date (e.g. http://www.noodle.org/test-prep/tphU3/sat-group-course). What is the best practice for removing these listings (assuming not many people are linking to them externally)?
- 301 to a general page (e.g. http://www.noodle.org/search/test-prep)
- Do nothing and leave them up, but remove them from the sitemap (as they are no longer useful from a user perspective)
- Return a 404 or 410?
Intermediate & Advanced SEO | abargmann