Best practices for repetitive job postings
-
I have a client who is a recruiter for skilled trades jobs. They post quite a few jobs to their job board on a regular basis, and the postings are frequently very similar to older jobs, or to other postings that are currently live.
Looking at their Webmaster Tools data and a site: search in Google, it does appear they have some duplicate content issues. We're thinking it's because of the similar job posts.
What is the best practice for dealing with this? And is there any way to correct the situation so that the number of "omitted due to similarity" results declines?
Thanks for your help!
-
OK, if the older job posts are what's causing your concern, you can fix this fairly easily by setting up a meta expiration tag:
It will automatically remove the page from the search engines' index as soon as the job becomes unavailable.
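For context, what's being described is Google's unavailable_after robots meta directive. Below is a minimal sketch of how a job board template might emit it, assuming the pages are rendered server-side and each posting has a known closing date; the helper name and date values are just illustrative.

```python
from datetime import datetime, timezone

def unavailable_after_tag(closing_date: datetime) -> str:
    """Build a robots meta tag asking search engines to drop the posting
    from their index once the job is no longer available."""
    # Google accepts widely used date formats for unavailable_after;
    # an RFC 850 style timestamp is shown here.
    stamp = closing_date.strftime("%d-%b-%Y %H:%M:%S %Z")
    return f'<meta name="robots" content="unavailable_after: {stamp}">'

# Hypothetical posting that closes at the end of the month
closing = datetime(2014, 8, 31, 23, 59, 59, tzinfo=timezone.utc)
print(unavailable_after_tag(closing))
# <meta name="robots" content="unavailable_after: 31-Aug-2014 23:59:59 UTC">
```

The tag goes in the head of each posting page. Once the closing date passes, Google should drop the posting from its index the next time it processes the URL, which over time reduces the pool of near-duplicate expired postings competing with the live ones.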
-
It could be worth posting the question in the Google Webmaster Tools forum; there's at least a chance one of the Google employees takes note and may (or may not) be able to do something about any penalties applied to the site.
-
Hmmm... This is an interesting situation for sure!
My first thought was adding a canonical tag to the postings, but I'm sure you don't have that kind of access. My assumption is that this kind of duplicate content isn't going to hurt you, mainly because this is not a new situation for Google. It's kind of like how a /blog page shows a snippet of the actual blog post. Would you consider that duplicate content? Technically yes, but Google isn't going to see it that way.
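For reference, a rel="canonical" tag on the near-duplicate postings would look something like the sketch below. This is only an illustration (the URL is made up), and as noted above it assumes template-level access the client may not have.

```python
def canonical_tag(preferred_url: str) -> str:
    """Build a rel="canonical" link tag so that near-duplicate postings all
    point search engines at a single preferred URL."""
    return f'<link rel="canonical" href="{preferred_url}">'

# Hypothetical: several similar electrician postings all reference one page
print(canonical_tag("http://www.yoursite.com/jobs/electrician"))
# Output: <link rel="canonical" href="http://www.yoursite.com/jobs/electrician">
```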
If you're really worried about this, you could always have two job descriptions for the same job: one on the corporate site, and another that you submit to Indeed, Monster, etc. This doesn't need to take much time; you could use some generic copy and then add "...to see more about this job posting, visit http://www.yoursite.com".
I'm still going to be surprised if Google sees this as duplicate content, though... Also, Google may filter some of it out of its SERPs, but do you have any indication of whether your potential applicants are finding the postings in the SERPs anyway?
Was that helpful?
Kevin Phelps
http://www.linkedin.com/in/kevinwphelps