Best Way To Handle Expired Content
-
Hi,
I have a client's site that posts job openings. There is a main list of available jobs, and each job has an individual page linked from that list. At some point, however, each job is no longer available. Currently, once a job closes, its page goes away and returns a 404 status.
The good thing is that the job pages attract links into the site. The bad thing is that as soon as a job closes, those links point to a 404 page. Ouch. Currently, Google Webmaster Tools shows 100+ 404 job URLs that have links (maybe 1-3 external links per URL).
The question is what to do with the job page instead of returning a 404. For business purposes, the client cannot display the content after the job is no longer available. To avoid duplicate content issues, the old job page should have some kind of unique content saying the job is no longer available.
Any thoughts on what to do with those old job pages? Or would you argue that it is appropriate to return a 404 header plus an error page, since the job is truly no longer a valid page on the site?
Thanks for any insights you can offer.
Matthew -
Hey Sebastian -
We already do something similar to detect whether a job is expired (instead of the IF condition in MySQL, we query for records where job_closing_date >= CURDATE()). Thankfully that was programmed in to pull old jobs off the list and out of the job search results. (Though up until yesterday the old jobs were still in the XML sitemap... whoops. Guess what I fixed yesterday!)
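For reference, the listing query is essentially this (a minimal sketch; the jobs table name and the id/title columns are assumptions for illustration, though job_closing_date comes straight from our schema):

-- Pull only jobs that haven't closed yet for the main list
SELECT id, title, job_closing_date
FROM jobs
WHERE job_closing_date >= CURDATE()
ORDER BY job_closing_date;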
I do like your idea, though, of keeping the content active and the page alive, but with some kind of expiration message above it. That would definitely keep the page unique. I'm not positive it will fly on the business side, but I'll definitely propose it.
Thanks for the reply!
-
I like that idea of 301 redirecting the page back to the job search page. The search page would certainly be a good introduction and would probably satisfy visitors looking for the job. These pages aren't high-ranking pages in the SERPs; the traffic is referral traffic from other websites. Given that, Utah Tiger's question about keywords and search engines wouldn't apply in this site's case. Thanks for the idea!
-
Hi Matthew,
What I would do is keep the page accessible through a direct link, but not through the list of jobs displayed on the main site. I would also include a note at the top of the page saying something like 'This job offer has already expired'.
This way you still have a page that is unique, does not show on the main jobs list, and indicates that the job has expired.
I'm not sure how much programming knowledge you have or what technology the site is built with, but a simple IF condition in your SQL statement can add a flag to each record indicating whether it is expired. It would look something like this (this example uses MySQL syntax):
IF (
    CURDATE() BETWEEN date_from AND date_to,
    0,
    1
) AS expired
Then, when you pull up a specific job, you simply check whether the 'expired' field equals 1, and if so, display the message above the job.
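Put together, the query for a single job page would look something like this (a sketch only; the jobs table and the id/title/description columns are hypothetical placeholders for whatever your schema uses):

-- Fetch one job along with a flag marking whether it has expired;
-- the application then shows the expiration notice when expired = 1
SELECT id,
       title,
       description,
       IF(CURDATE() BETWEEN date_from AND date_to, 0, 1) AS expired
FROM jobs
WHERE id = 123;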
I hope this helps.
-
EGOL, your technical response is way above me... could you restate it in tyro terms?
Is the expired data hidden? Does the 301 redirect go to the homepage or the job search page, or either? What value does it add? Keywords? I guess the pages would still need to be indexed for value to be created, or does a 301 redirect just pass all the value to the page it is redirected to? I will also go look up 301 redirects right now.
Utah Tiger
-
I have expiring content on one of my sites.
I place all of the postings into folders according to date such as...
mysite.com/postings/2012/02/job-at-mcds/
Then on certain dates I add a .htaccess file to the /2012/02/ folder that will 301 redirect all items in that folder to the homepage.
You could 301 the old posts to a job search page or some other type of page that will introduce the visitor to your site.
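For example, the .htaccess dropped into the /postings/2012/02/ folder could be as simple as this (a sketch assuming Apache with mod_alias enabled; /jobs/search/ is a hypothetical destination, so swap in your homepage or search page URL):

# Send every request under /postings/2012/02/ to the job search page
RedirectMatch 301 ^/postings/2012/02/ http://mysite.com/jobs/search/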