Lots of city pages - how do I ensure we don't get penalized?
-
We are planning to have a job posting page for each city where we are looking to hire new CFO partners. The problem is, we have LOTS of locations. I was wondering what the best way would be to have similar content on each page (since the job description and requirements are the same for each posting) without being hit by Google for duplicate content. One of the main reasons we decided to create location-based pages is that we have noticed visitors arriving at our site after searching for "cfo job in [location]", but most of these visitors then leave. We believe this is because the pages they land on make no mention of the location they were looking for, which is a little incongruent with what they were expecting.
We are looking to use the following URL and Title/Description as an example:
URL: http://careers.b2bcfo.com/cfo-jobs/Alabama/Birmingham
Title: CFO Careers in Birmingham, AL
Description: Are you looking for a CFO Career in Birmingham, Alabama? We're looking for partners there. Apply today!
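For reference, here is roughly how that title and description would be marked up in the page head. This is a minimal sketch based on the example above; the self-referencing canonical tag is an optional addition (not part of the original example) that helps keep each location page's preferred URL unambiguous.

```html
<head>
  <!-- Title and meta description copied from the example above -->
  <title>CFO Careers in Birmingham, AL</title>
  <meta name="description"
        content="Are you looking for a CFO Career in Birmingham, Alabama? We're looking for partners there. Apply today!">
  <!-- Optional: self-referencing canonical for this location page -->
  <link rel="canonical" href="http://careers.b2bcfo.com/cfo-jobs/Alabama/Birmingham">
</head>
```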
Any advice you have for this would be greatly appreciated.
Thank you.
-
We would have the job description on each page, mentioning the location, and then we would also have the job capture form.
You are right that these descriptions do have unique data in them. I am thinking we are just going to have to take the time to write as much unique content as possible.
Thanks for the feedback.
-
Hey, the last sentence was about other ways to bring in this inbound traffic, but scratch that for now.
So, have you examined how these other, well-ranking sites are doing what they do? Are they living off the fact that they are big domains? Is the content on these pages unique? I just Googled:
CFO Careers in Birmingham, AL
And it appears they are job listings specific to that location, so I am guessing that content is fairly unique and the listings are the content.
These pages that you would create, what content would they have on them? Would they all be different?
My initial understanding was that this would just be a data capture form, but if we actually have unique job listings like on indeed.com, SimplyHired, jobs2careers, etc., then these pages should be unique enough to rank.
Or am I missing something? (It is late in the day here, 7pm, and I'm hitting my 12th hour of work, so the old synapses may be failing me somewhat.)
-
Marcus,
I am not sure I understand the last line of your post. But I have looked at the Keyword difficulty tool and these are fairly competitive phrases.
The problem we have is that we are competing against the likes of Indeed.com, Monster.com and sites such as that. While we do use these sites, they don't quite provide the flexibility we are looking for.
We used to rank quite highly for these types of phrases, but I have noticed a recent trend in Google for them to rank the job search sites ahead of us. The hope is that if we provide similar content, then Google would start pushing us up the rankings again.
-
A lot of this depends on the competitiveness of the search query and would need some testing to better determine your approach.
You can use the Keyword Difficulty tool here, but also just Google the terms and see what comes up. If the results are weak, you could try this as a stage 1 approach and see how you get on.
Maybe there is another way to think about it: what about the job listings themselves, or does it not work that way?
-
I think these need to be indexed, as it is through organic search that people have been getting to our site using terms such as "cfo jobs in [location]".
I have been thinking about adding new content for each city, but you are right, that is a LOT of work. I wonder if it might be worth having one page with unique, location-based content for the main city in an area, and just having a list of nearby cities on that page that we are also hiring in.
-
Hey Danny
A few suggestions:
1. Make each location page unique enough that you can safely have it on the site without worrying about duplication (lots of work).
2. If people are only searching or browsing to these pages internally, then don't index them (robots.txt / meta noindex).
3. You could do this dynamically and use a canonical to your main enquiry page on these pages.
4. You could just create all the variations and add a canonical to your main enquiry page, and they may, if it is not mega competitive, rank (a bit risky, but easy to fix if it causes issues).
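As a rough sketch of options 2 and 3/4 above: option 2 is usually done with a meta robots tag (note that a robots.txt Disallow only blocks crawling, so a meta noindex in the page itself is the more reliable route for keeping pages out of the index), and options 3/4 with a canonical link element. The enquiry-page URL below is hypothetical, purely for illustration.

```html
<!-- Option 2: keep a location page out of the index
     (place in that page's <head>; "follow" lets link equity still flow) -->
<meta name="robots" content="noindex, follow">

<!-- Options 3/4: point the location variations at the main enquiry page
     (URL is a hypothetical example, not from the original thread) -->
<link rel="canonical" href="http://careers.b2bcfo.com/cfo-jobs/apply">
```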
I would always try to look at this from the perspective of your users, and if you don't really care about having these as organic search landing pages, then simply noindexing them would seem an ideal solution.
Hope that helps!
Marcus