How do I Address Low Quality/Duplicate Content Issue for a Job portal?
-
Hi,
I want to optimize my job portal for maximum search traffic.
Problems
- Duplicate content - The portal takes jobs from other portals/blogs and posts them on our site. Sometimes employers provide the same job posting to multiple portals, and we are not allowed to change it, resulting in duplicate content.
- Empty content pages - We have a lot of pages that can be reached by filtering for multiple options, for example IT jobs in New York. If there are no IT jobs posted in New York, the result is a blank page with little or no content.
- Repeated content - Each job listing page includes 'About the Company' information. If a company has 1,000 jobs listed with us, that means 1,000 pages carry the exact same 'About the Company' wording.
Solutions Implemented
- Rel=prev and next - We have implemented this for pagination. We also have self-referencing canonical tags on each page. Even if pages are filtered with additional parameters, our system strips off the parameters and always shows the correct URL for both rel=prev/next and the self-referencing canonical tags.
- For duplicate content - Due to the volume of job listings that come in each day, it's impossible to create unique content for each one. We try to make the initial paragraph (at least 130 characters) unique. However, we use a template system for each job, so a similar pattern can be detected after even 10 or 15 jobs. Sometimes we also take wordy job descriptions and convert them into bullet points. If bullet points are already available, we take only a few of them and re-shuffle them at times.
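For reference, the pagination setup described above would produce head markup along these lines on page 2 of a listing series (the URLs are hypothetical examples, not our actual paths):

```html
<!-- Sketch of the <head> of page 2 in a paginated listing series.
     The canonical points at the clean, parameter-free version of the
     current page; prev/next point at the adjacent pages in the series. -->
<link rel="canonical" href="https://www.example.com/jobs/it/new-york/page/2">
<link rel="prev" href="https://www.example.com/jobs/it/new-york">
<link rel="next" href="https://www.example.com/jobs/it/new-york/page/3">
```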
Can anyone provide me additional pointers to improve my site in terms of on-page SEO/technical SEO?
Any help would be much appreciated.
We are also thinking of no-indexing or deleting old jobs once they are more than X days old. Do you think this would be a smart strategy? Should I no-index empty listing pages as well?
Thank you.
-
Unique Listing Copy
I would try to get that unique content to the top of the source order. It doesn't necessarily have to appear at the top of the page - it could be in a sidebar, for instance - but it should come first in the source so that Googlebot gobbles it up before it reaches duplicate content or secondary nav / footer links.
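To illustrate the source-order idea, the unique copy can come first in the markup while CSS positions it visually as a sidebar. A minimal sketch (all class names are hypothetical):

```html
<!-- Unique copy comes first in the source so crawlers reach it first;
     flexbox 'order' moves it visually into a sidebar.
     All class names here are hypothetical. -->
<div class="listing-wrap">
  <aside class="unique-copy">Unique, human-written copy about IT jobs in New York...</aside>
  <main class="job-results">...job listings...</main>
</div>
<style>
  .listing-wrap { display: flex; }
  .job-results  { order: 1; flex: 3; } /* displayed first despite being second in source */
  .unique-copy  { order: 2; flex: 1; } /* displayed as a sidebar despite being first in source */
</style>
```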
No Results pages
Yes, you could certainly noindex your no-results pages with a robots meta tag - that would be a good idea.
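Concretely, a no-results page would carry a robots meta tag like this in its head ('follow' is optional, but it lets Googlebot keep following any links on the page even though the page itself stays out of the index):

```html
<!-- In the <head> of an empty results page: keep the page out of the
     index, but still allow its links to be followed. -->
<meta name="robots" content="noindex, follow">
```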
Loading duplicate content with ajax
In terms of Google and ajax content, yes Googlebot can and does follow links it finds in javascript.
All I can tell you here is my own experience. On my product detail template, I have loaded up category descriptions with ajax that appear canonically (if that's the right way of putting it) on my listing pages. In the SERPs, the category description content is indexed for the pages I want it indexed for (the listings in this case), and not for the product detail pages where I'm loading it with ajax. And those product detail pages still perform well and get good organic landing traffic.
On the product detail page where I'm loading with ajax, I have the copy in an accordion, and it's loaded with an ajax request on document ready. It might be considered slightly more kosher to do this in response to a user action though - such as clicking on the accordion in my case. The theory being that you're making your site responsive to a user's needs, rather than loading up half the content one way and the other half another way, if you get what I mean.
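The click-triggered variant might look roughly like this - a minimal sketch, assuming a server endpoint that returns the description as an HTML fragment (the endpoint URL and element ids are hypothetical):

```html
<!-- Sketch: load the duplicate copy only when the user asks for it.
     The endpoint and element ids below are hypothetical. -->
<button id="show-description">Show full description</button>
<div id="description"></div>
<script>
  document.getElementById('show-description').addEventListener('click', function () {
    fetch('/ajax/job-description?id=12345')          // hypothetical endpoint
      .then(function (res) { return res.text(); })
      .then(function (html) {
        document.getElementById('description').innerHTML = html;
      });
  });
</script>
```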
Sometimes of course you just cannot avoid certain amounts of content duplication within your site.
-
Luke,
Thank you for your detailed reply.
I forgot to mention that for each of our important filter pages (like IT jobs in New York) we do have a unique paragraph of text which is human-readable and at the same time SEO-optimized. (These are around 200 words long, but are not there for all filter pages due to the volume of such pages.) This unique block of text sits at the bottom of the page, just above the footer, after the latest 20 job listings are shown.
"Filtering & Blank Results Pages Could this not be done with javascript / ajax, so that Google never finds an empty listing?"
I am afraid this cannot be done due to the structure of our system. No-indexing them would be much easier. Wouldn't that do?
"You could load this content from an ajax template, either as the page loads, or in response to a user action (eg. click on a button 'show company details')."
Sounds like a good idea. Are you sure Google will not consider this cloaking, and that Google cannot read Ajax content?
"Try not to load up the duplicate description by default."- Do you mean we should implement Ajax again for this part?
"You will want to, where possible, specify a view-all page for Google"- not sure if this will be possible from our side due to engineering limitations. I thought rel=next and prev would solve the issue. However, I still see intermediate pages indexed.
-
Hi, I've tried to address your issues point by point according to your post...
Duplicate Job Posting Content
You can try to offset this by having a couple of hundred words of unique copy per listing page URL exposed to Google. So, if your page lists all jobs in the catering industry in New Jersey, for instance, write some copy on the topic, and try to make it readable and useful to the user as well. Add microdata to the template using schema.org, so that Google can understand what's there at the data level - there are entities available to describe your content in terms of location, the companies that are hiring, etc.
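As a sketch of what that structured data could look like: schema.org has a JobPosting type covering exactly this. Microdata attributes on the template work, or a JSON-LD block per listing (all values below are hypothetical examples):

```html
<!-- Minimal schema.org JobPosting markup as JSON-LD.
     All values are hypothetical examples. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "JobPosting",
  "title": "IT Support Technician",
  "datePosted": "2024-01-15",
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Example Corp"
  },
  "jobLocation": {
    "@type": "Place",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "New York",
      "addressRegion": "NY"
    }
  }
}
</script>
```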
I'm inclined to say don't bother with reshuffling duplicate content and adding bullet points to it - Google is smart enough to recognise that this copy is the same, and will not give you any points - perhaps the opposite - for trying to disguise this.
Filtering & Blank Results Pages
Could this not be done with javascript / ajax, so that Google never finds an empty listing?
'About the Company' Duplicate Content
You could load this content from an ajax template, either as the page loads, or in response to a user action (eg. click on a button 'show company details'). I have solved this exact problem like this in the past - loading a tour category description that appears on a great many tour detail pages.
Perhaps you can do as I'm suggesting above for the job description duplication - where possible, and as long as it's done in a way that does not come across as cloaking. It's good that you have a unique paragraph above the duplicate description. Try not to load up the duplicate description by default. I'm not sure of your source order or site / template structure, so it's difficult to get too detailed here, and I don't want to risk suggesting something that could be interpreted as a violation of Google's guidelines.
Pagination
You will want to, where possible, specify a view-all page for Google - this is suggested by them as a best practice, and in my experience, Googlebot loves to find chunky listing content, PROVIDED that it loads quickly enough not to hamper the user experience.
You can make sure of this by lazyloading images and other media. Be sure to specify the correct image src attributes (not spacer.gif, for instance) inside noscript tags to make sure that image content is still indexed.
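A common lazyload pattern along those lines - a placeholder src with the real URL in a data attribute, plus a noscript fallback so the real image URL is always present in the markup (paths are hypothetical):

```html
<!-- Sketch of a lazyload pattern with a noscript fallback. A script
     swaps data-src into src as the image scrolls into view; the
     noscript copy keeps the real URL indexable. Paths are hypothetical. -->
<img class="lazy" src="placeholder.gif" data-src="/images/office-photo.jpg" alt="Office photo">
<noscript>
  <img src="/images/office-photo.jpg" alt="Office photo">
</noscript>
```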
You could also load up the markup for all items in the listing, and then use javascript to chunk the content into 'pages', or load it asynchronously where javascript is available. If there is no javascript, then load all content. By using javascript pagination, you basically avert the need for a separate view-all page, meaning you only have one template to maintain and optimise.
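The chunking step itself is simple. A minimal sketch (function name and page size are arbitrary choices for illustration):

```javascript
// Sketch: given the full "view all" listing already in the markup,
// split the items into client-side pages of a fixed size.
function paginate(items, pageSize) {
  const pages = [];
  for (let i = 0; i < items.length; i += pageSize) {
    pages.push(items.slice(i, i + pageSize));
  }
  return pages;
}

// e.g. paginate(jobListingElements, 20) gives an array of 20-item
// pages; show one page at a time and wire prev/next controls to
// swap which page is visible.
```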