Lots of city pages - How do I ensure we don't get penalized?
-
We are planning to have a job posting page for each city where we are looking to hire new CFO partners. But the problem is, we have LOTS of locations. I was wondering what would be the best way to have similar content on each page (since the job description and requirements are the same for every posting) without being hit by Google for duplicate content. One of the main reasons we decided on location-based pages is that we have noticed visitors to our site are searching for "cfo job in [location]", but most of these visitors then leave. We believe it is because the pages they land on make no mention of the location they were looking for and are a little incongruent with what they were expecting.
We are looking to use the following URL and Title/Description as an example:
URL: http://careers.b2bcfo.com/cfo-jobs/Alabama/Birmingham
Title: CFO Careers in Birmingham, AL
Description: Are you looking for a CFO Career in Birmingham, Alabama? We're looking for partners there. Apply today!
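A pattern like the one above can be stamped out programmatically for every location; here is a minimal sketch (the function name and city data are illustrative, not the site's actual code):

```python
# Sketch: generate a per-city URL path, title, and meta description
# from one shared template, mirroring the Birmingham example above.

def city_meta(city, state_abbr, state_full):
    """Return the page fields for one location page."""
    return {
        "path": f"/cfo-jobs/{state_full}/{city}",
        "title": f"CFO Careers in {city}, {state_abbr}",
        "description": (
            f"Are you looking for a CFO Career in {city}, {state_full}? "
            "We're looking for partners there. Apply today!"
        ),
    }

page = city_meta("Birmingham", "AL", "Alabama")
print(page["title"])  # CFO Careers in Birmingham, AL
```

Note that templating only the city name is exactly what creates the near-duplicate problem asked about here; the body copy still needs to vary.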
Any advice you have for this would be greatly appreciated.
Thank you.
-
We would have the job description on each page mentioning the locations, then we would also have the job capture form.
You are right in that these descriptions do have unique data on them. I am thinking we are just going to have to take the time to write as much unique content as possible.
Thanks for the feedback.
-
Hey, the last sentence was based around other ways to bring in this inbound traffic but scratch that for now.
So, have you examined how these other, well-ranking sites are doing what they do? Are they living off the fact that they are big domains? Is the content on those pages unique? I just Googled:
CFO Careers in Birmingham, AL
And it appears they are job listings specific to that location, so I am guessing that content is fairly unique and the listings are the content.
These pages that you would create, what content would they have on them? Would they all be different?
My initial understanding was that this would just be a data capture form, but if we actually have unique job listings like on Indeed.com, SimplyHired, Jobs2Careers, etc., then these pages should be unique enough to rank.
Or am I missing something? (it is late in the day here 7pm, hitting my 12th hour of work so the old synapses may be failing me somewhat).
-
Marcus,
I am not sure I understand the last line of your post. But I have looked at the Keyword difficulty tool and these are fairly competitive phrases.
The problem we have is that we are competing against the likes of Indeed.com, Monster.com and sites such as that. While we do use these sites, they don't quite provide the flexibility we are looking for.
We used to rank quite highly for these types of phrases, but I have noticed a recent trend in Google for them to rank the job search sites ahead of us. The hope is that if we provide similar content, then Google would start pushing us up the rankings again.
-
A lot of this depends on the competitiveness of the search query and would need some testing to better determine your approach.
You can use the keyword difficulty tool here but also just google the terms and see what comes up. If the results are weak, you could try this as a stage 1 approach and see how you get on.
Maybe there is another way to think about it: what about the job listings themselves, or does it not work that way?
-
I think these need to be indexed, as it is through organic search that people have been getting to our site using terms such as "cfo jobs in [location]".
I have been thinking about adding new content for each city, but you are right, that is a LOT of work. I wonder if it might be worth having one page with unique, location-based content for the main city in an area and just having a list of nearby cities on the page that we are also hiring in.
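That hub-and-spoke idea could be sketched roughly like this (the area groupings and copy are purely illustrative, not real data from the site):

```python
# Sketch: one hub page per metro area with unique content, with nearby
# cities mentioned as plain text rather than as separate indexable pages.

AREAS = {
    "Birmingham, AL": ["Hoover", "Vestavia Hills", "Homewood"],
    "Phoenix, AZ": ["Scottsdale", "Tempe", "Mesa"],
}

def hub_page(main_city):
    """Assemble the body of a single area hub page."""
    nearby = ", ".join(AREAS[main_city])
    return (
        f"CFO Careers in {main_city}\n"
        f"[unique, hand-written content about the {main_city} market]\n"
        f"We are also hiring in nearby cities: {nearby}."
    )

print(hub_page("Birmingham, AL"))
```

This trades many thin pages for a few substantial ones, at the cost of not having an exact-match landing page for every suburb.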
-
Hey Danny
A few suggestions:
1. Make each location page unique enough that you can safely have it on the site without worrying about duplication (lots of work).
2. If people are only searching or browsing to these pages internally, then keep them out of the index (meta noindex; note that robots.txt only blocks crawling, not indexing).
3. You could do this dynamically and use a canonical to your main enquiry page on these pages.
4. You could just create all the variations and add a canonical to your main enquiry page, and they may, if it is not mega competitive, rank (a bit risky, but easy to fix if it causes issues).
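Options 2 through 4 come down to a tag or two in each page's head; a minimal sketch (the function and URLs are illustrative, not the site's actual implementation):

```python
# Sketch: emit the head tags behind the noindex and canonical options.
# noindex keeps a page out of Google's index; rel="canonical" points
# duplicate variations at one preferred URL.

def head_tags(page_url, canonical_url=None, index=True):
    """Return the robots/canonical tags for one location page."""
    tags = []
    if not index:
        # Option 2: crawlable, but excluded from the index.
        tags.append('<meta name="robots" content="noindex, follow">')
    if canonical_url and canonical_url != page_url:
        # Options 3/4: consolidate variations onto the main enquiry page.
        tags.append(f'<link rel="canonical" href="{canonical_url}">')
    return "\n".join(tags)

print(head_tags(
    "http://careers.b2bcfo.com/cfo-jobs/Alabama/Birmingham",
    canonical_url="http://careers.b2bcfo.com/cfo-jobs/",
))
```

One caveat worth knowing: a cross-page canonical is a hint, not a directive, so Google may ignore it if the pages differ substantially.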
I would always try to look at this from the perspective of your users, and if you don't really care about having these as organic search landing pages, then simply noindexing them would seem an ideal solution.
Hope that helps!
Marcus