Establishing our web pages as the original source
-
I'm currently working on a recruitment website. One of the things we do to drive traffic to the site is post our job listings out to a number of job boards, e.g. Indeed. These sites replicate our own job listings, which means that for every job there are at least 5-10 exact duplicates on the web. By nature the job boards have good domain authority, so they always rank above us, but I would still expect to see more in the way of long-tail traffic.
Is it necessary for me to claim our own job listings as the original source, and if so, how do I go about doing it?
Thanks
-
Hi,
Having a self-referencing canonical tag on your own pages is not a problem. The canonical tag needs to go in the head of the page, though (it is not valid in the body of the HTML), so just make sure that the 3rd party syndication service actually supports this - it might, but it might not. Even with the canonical in place I would still include a clean text link back to the original page if this is possible (both as a second indication of origin and for the visits it might send).
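As a minimal sketch of what that looks like (the domain and job URL here are invented for illustration), the syndicated copy on the job board would carry a canonical tag in its head pointing at your original listing, plus a plain, followed text link in the body:

```html
<head>
  <!-- Must be inside <head>; a canonical link in the <body> is ignored -->
  <link rel="canonical" href="https://www.example-recruiter.co.uk/jobs/senior-developer-london" />
</head>
<body>
  <!-- Clean text link back to the exact original listing, not nofollowed -->
  <p>Originally posted at
    <a href="https://www.example-recruiter.co.uk/jobs/senior-developer-london">Senior Developer (London)</a>
  </p>
</body>
```

Your own copy of the page would carry the same canonical tag, which in that case simply points at itself.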
-
Thanks Lynn, that's perfect!
Another question, then - the job listings are syndicated out to the job boards automatically via a 3rd party (probably used by 75% of UK recruitment companies). If I were to put a rel=canonical tag on each job listing, it should be carried over to each of the job boards, which would get around the duplicate content problem. However, each job listing page on our site would then carry a rel=canonical tag essentially pointing back to itself. Would this cause any issues?
Thanks
-
Hi,
There was a recent WBF about syndicated content which runs down the various technical ways you can attribute your listings as the original source - check it out here. It would probably also help to make sure your listings are the first ones into the index, which can be done by internally linking to the new jobs (obviously), quickly adding them to your sitemap, and sharing them through social channels (especially Twitter) - all of which should help make sure your content is indexed quickly, ideally before it is replicated on other sites.
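On the sitemap point, a sketch of what a promptly added entry looks like (URL and date are made up for the example) - adding each new listing to the XML sitemap as soon as it goes live, with an accurate lastmod, gives crawlers an early signal that the original exists:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Add each new job listing as soon as it is published -->
  <url>
    <loc>https://www.example-recruiter.co.uk/jobs/senior-developer-london</loc>
    <lastmod>2013-06-01</lastmod>
  </url>
</urlset>
```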
If the other sites have stronger authority than yours, you would still really want to get one of the 3 options discussed in the video implemented. It sounds like you already have the link back to your site (option 3 in the video), so perhaps the link is not 'clean', i.e. a straight text link that leads to the exact job on your site and is not nofollowed?
It might also depend on what kind of long tail you are looking to rank for. Individual job ads might not pull a lot of organic traffic by themselves if they are not aggregated by type or location, for example - at which point the higher-authority domains are likely to show an advantage (just a thought).