Best Way To Handle Expired Content
-
Hi,
I have a client's site that posts job openings. There is a main list of available jobs and each job has an individual page linked to from that main list. However, at some point the job is no longer available. Currently, the job page goes away and returns a status 404 after the job is no longer available.
The good thing is that the job pages get links coming into the site. The bad thing is that as soon as the job is no longer available, those links point to a 404 page. Ouch. Currently Google Webmaster Tools shows 100+ 404 job URLs that have links (maybe 1-3 external links per).
The question is what to do with the job page instead of returning a 404. For business purposes, the client cannot display the content after the job is no longer available. To avoid duplicate content issues, the old job page should have some kind of unique content saying the job is no longer available.
Any thoughts on what to do with those old job pages? Or would you argue that it is appropriate to return 404 header plus error page since this job is truly no longer a valid page on the site?
Thanks for any insights you can offer.
Matthew -
Hey Sebastian -
We already do something similar to know if a job has expired (instead of the IF condition in MySQL, we query for records where job_closing_date >= CURDATE()). Thankfully they programmed that in to pull old jobs off the list and out of the job search results. (Though up until yesterday the old jobs were still in the XML sitemap... whoops. Guess what I fixed yesterday!)
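(Roughly, the list query looks something like this, with the table and column names simplified for the example:

SELECT job_id, title
FROM jobs
WHERE job_closing_date >= CURDATE();

so anything past its closing date simply drops off the list.)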
I do like your idea of keeping the content and the page alive, but with some kind of message at the top. That would definitely keep the page unique. I'm not positive it will fly on the business side, but I'll definitely propose it.
Thanks for the reply!
-
I like that idea of 301 redirecting the page back to the job search page. The search page would certainly be a good introduction to the site and would probably satisfy visitors looking for the job. These pages aren't high-ranking pages in the SERPs; the traffic is referral traffic from other websites. Given that, Utah Tiger's question about keywords and search engines wouldn't apply in this website's case. Thanks for the idea!
-
Hi Matthew,
What I would do is still have the job page accessible through a direct link, but not through the list of jobs displayed on the main site. I would also include a note at the top of the page saying something like 'This job offer has already expired'.
This way you still have a page that is unique, does not show up on the main jobs list, and indicates that the job has expired.
I'm not sure how much programming knowledge you have or what technology the site is built in, but a simple IF condition in your SQL statement can add a flag to each record indicating whether it has expired. It would look something like this (this example uses MySQL syntax):
IF (
    CURDATE() BETWEEN date_from AND date_to,
    0,
    1
) AS expired
Then, when you pull up a specific job, you simply check whether the 'expired' field is equal to 1, and if so, display the message above the job.
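For example, a full query might look something like this, assuming a jobs table with date_from and date_to columns (the table and column names here are just placeholders):

SELECT
    job_id,
    title,
    description,
    IF (CURDATE() BETWEEN date_from AND date_to, 0, 1) AS expired
FROM jobs
WHERE job_id = 123;

The page then only needs to look at the expired column to decide whether to show the notice.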
I hope this helps.
-
EGOL... your technical response is way above me. Could you restate it in tyro terms?
Is the expired data hidden? Does the 301 redirect go to the homepage or the job search page, or either? What value does it add? Keywords? I guess the pages would still need to be indexed for value to be created, or does a 301 redirect just pass all the value to the page it is redirected to? I will also go look up 301 redirects right now.
Utah Tiger
-
I have expiring content on one of my sites.
I place all of the postings into folders according to date such as...
mysite.com/postings/2012/02/job-at-mcds/
Then on certain dates I add an htaccess file to the /2012/02/ folder that will 301 redirect all items in that folder to the homepage.
You could 301 the old posts to a job search page or some other type of page that will introduce the visitor to your site.
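A minimal sketch of what that folder-level .htaccess might contain (assuming Apache with mod_alias available; the domain is just an example):

# Dropped into /postings/2012/02/.htaccess once those jobs have expired.
# Sends every URL under this folder to the homepage with a 301
# (swap the target for a job search page if you prefer).
RedirectMatch 301 .* http://www.mysite.com/

One line added on the expiry date, and every old URL in that folder passes its visitors and link value along to the new target.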