How can a recruitment company get 'credit' from Google when syndicating job posts?
-
I'm working on an SEO strategy for a recruitment agency. Like many recruitment agencies, they write tons of great unique content each month and, as agencies do, they post the job descriptions to job websites as well as their own. These job websites generally won't allow any linking back to the agency's website from the post. What can we do to make Google realise that the recruitment agency is the originator of the post and deserves the 'credit' for the content?
The recruitment agency has low domain authority, so we're very much at the start of the process. It would be a damn shame if they produced so much great unique content but couldn't get Google to recognise it.
Google's advice says: "Syndicate carefully: If you syndicate your content on other sites, Google will always show the version we think is most appropriate for users in each given search, which may or may not be the version you'd prefer. However, it is helpful to ensure that each site on which your content is syndicated includes a link back to your original article. You can also ask those who use your syndicated material to use the noindex meta tag to prevent search engines from indexing their version of the content." - But none of that can happen. Those big job websites just won't do it.
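For reference, the two things Google describes would look like this in the job board's copy of the post (the URLs here are hypothetical):

    <!-- a visible link back to the original post, somewhere in the syndicated copy -->
    <a href="https://www.example-agency.com/jobs/senior-developer">
        Originally posted at Example Agency
    </a>

    <!-- and/or a noindex meta tag in the head of the syndicated copy -->
    <meta name="robots" content="noindex">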
A previous post here didn't get a sufficient answer. I'm starting to think there isn't an answer, other than having more authority than the websites we're syndicating to. Which isn't going to happen any time soon!
Any thoughts?
-
Hey Mark,
If it were me, I'd follow Dmitrii's suggestion and use Fetch as Google after you post it. So long as you aren't doing 500+ of these a month, I don't see an issue with that. As far as backlinks go, there are some sites (like meetup.com) where you can sponsor posts to get backlinks. Not sure if that would help your case or not; you could always have your agency host a networking event and start building authority that way.
That being said, what is the goal here? Backlinks, or getting credit for the content? Because once you post it and it's indexed, it's yours.
-
Hi there.
Well, I think you actually have two questions:
- How to make sure that Google understands that you're the original author of the job post;
- How to outrank large websites.
Well, as for Q1: the only way in your situation is to do a "Fetch as Google" in GWT on the page as soon as you've posted a new job. It usually takes just a couple of minutes for Google to index it and include it in SERPs. So first make sure your page is indexed, and only then start posting the same job on other job search websites.
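If fetching every post by hand gets tedious, one automated complement (a sketch, assuming your site already generates an XML sitemap at the URL shown) is to ping Google with the sitemap each time a new job goes live:

    http://www.google.com/ping?sitemap=http://www.example-agency.com/sitemap.xml

Requesting that URL from a cron job or your CMS's publish hook tells Google the sitemap has changed, so new job URLs tend to get picked up faster.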
As for Q2, that's a completely different story. Even if Google understands that your website was the original author, it doesn't mean whatsoever that your website is going to outrank the large job websites. To make that happen, you have to try to get as many backlinks as possible to those job postings (maybe from other websites with a "click for full job description" link), and more backlinks to your domain in general.
Hope this helps.
-
Related Questions
-
Is it necessary to use Google's Structured Data Markup or an alternative for my B2B site?
Hi, we are in the process of going through a re-design for our site. I'm trying to understand if we need to use some sort of structured data, either Google's structured data markup or schema.org?
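For what it's worth, structured data is optional rather than necessary, but it is cheap to add during a re-design. A minimal schema.org sketch in JSON-LD (all values here are placeholders) for a B2B company page:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "name": "Example B2B Co",
      "url": "http://www.example.com",
      "logo": "http://www.example.com/logo.png"
    }
    </script>

Note the two aren't really alternatives: schema.org is the vocabulary, and JSON-LD or microdata is just the format you express it in; Google's own documentation uses schema.org types.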
-
WordPress post Title field inserts the title into blog posts like a headline but doesn't add an H1 tag; how do I change this?
I have a WordPress website which is just using the default theme. When I post to the blog, whatever I put in the "Title" field at the top of the editor is automatically placed within the body of the blog post, like a headline, but it doesn't include any H1 tags that I can see. If I add my own headline within the blog editor, it still inserts the Title like a headline. I am using the Yoast SEO plugin and also write the meta title there. Should I just leave the WordPress title field blank so it doesn't insert into the blog post? Or is that inserted Title being recognized as an H1 even though I don't see H1 tags anywhere? Hope this isn't too confusing.
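One quick way to check is to view the rendered page source in the browser rather than in the editor, since the editor never shows theme markup. In the default Twenty-series themes, the single-post template usually wraps the title along these lines (exact classes vary by theme and year, so treat this as a sketch):

    <header class="entry-header">
        <h1 class="entry-title">Your Post Title Here</h1>
    </header>

If the source shows an h1 like this, the Title field already is your H1, and you shouldn't add a second headline in the post body.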
-
Duplicate Content through 'Gclid'
Hello, we've had the known problem of duplicate content through the gclid parameter caused by Google AdWords. As per Google's recommendation, we added the canonical tag to every page on our site, so when the bot came to each page it would go 'Ah-ha, this is the original page'. We also added the parameter to the URL parameters in Google Webmaster Tools.

However, it now seems as though a canonical is automatically being given to these newly created gclid pages; see below:

https://www.google.com.au/search?espv=2&q=site%3Awww.mypetwarehouse.com.au+inurl%3Agclid&oq=site%3A&gs_l=serp.3.0.35i39l2j0i67l4j0i10j0i67j0j0i131.58677.61871.0.63823.11.8.3.0.0.0.208.930.0j3j2.5.0....0...1c.1.64.serp..8.3.419.nUJod6dYZmI

Therefore these new pages are now being indexed, causing duplicate content. Does anyone have any idea about what to do in this situation? Thanks, Stephen.
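For reference, the canonical that should consolidate the gclid copies needs to point at the clean URL and be served identically on both versions of the page; a sketch with a hypothetical product path:

    <!-- served on both of these: -->
    <!--   http://www.mypetwarehouse.com.au/some-product            -->
    <!--   http://www.mypetwarehouse.com.au/some-product?gclid=XYZ  -->
    <link rel="canonical" href="http://www.mypetwarehouse.com.au/some-product" />

If the tag is instead generated from the requested URL (gclid included), each tagged page canonicalises to itself, which would explain the copies staying in the index.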
-
Website penalised, can't find where the problem is. Google went INSANE
Hello, I desperately need a hand here! Firstly, I just want to say that as far as we know we never infringed Google's guidelines. I have been around in this field for about 6 years, have had success with many websites along the way relying only on natural SEO, and was never penalised until now. The problem is that our website www.turbosconto.it is penalised and we have no idea why (it's not a manual action).

The site has been online for more than 6 months and it NEVER started to rank; it gets about 2 organic visits a day at most. In this time we got several links from good websites related to our topic, which actually keep sending us about 50 visits a day. Nevertheless, our organic visits are still 1 or 2 a day. All the pages seem to be heavily penalised: when you perform a search for any of our "shops", even including our URL, no entries for the domain appear.

A search example: http://www.turbosconto.it zalando
What I would expect to find as a result: http://www.turbosconto.it/buono-sconto-zalando

The same repeats for all of the pages for the "shops" we promote. Searching any of the brands + our domain shows no result except for "nike" and "euroclinix" (I see no relationship between these two). Some days ago, for these same types of searches, Google was showing pages from the domain which we blocked by robots.txt months ago and which go to a 404 error, instead of our optimised landing pages, which cannot be found in the first 50 results. These pages are generated by our rating system. We already sent requests to de-index all these pages, but they keep appearing for every new page that we create. And the real pages are nowhere to be found. Here is an example: http://www.turbosconto.it/shops/codice-promozionale-pimkie/rat

You can see how Google indexes that for us in this search: site:www.turbosconto.it rate

Why on earth would Google show a page which is blocked by robots.txt, displaying that the content cannot be retrieved because it is blocked, instead of showing pages which are totally SEO-friendly and content-rich?

All the script from TurboSconto is the same one that we use in our Spanish version, www.turbocupones.com. With that one we have awesome results, which makes things even weirder.

OK, apart from those weird issues with the indexation and the robots, we did some research on our backlinks and were surprised to find a few bad links that we never asked for. Nevertheless, there are just a few, and we have many HIGH QUALITY LINKS, which makes it hard to believe that this could be the reason. Just to be sure, we used the disavow tool for these links. Here are the bad links we submitted 2 days ago:

domain: www.drilldown.it #we did not ask for this
domain: www.indicizza.net #we did not ask for this
domain: urlbook.in #we did not ask for this, moreover it is a spammy one
http://inpe.br.way2seo.org/domain-list-878 #we did not ask for this, moreover it is a spammy one
http://shady.nu.gomarathi.com/domain-list-789 #we did not ask for this, moreover it is a spammy one
http://www.clicdopoclic.it/2013/12/i-migliori-siti-italiani-di-coupon-e.html #we did not ask for this, moreover it is a copy of a post from another blog
http://typo.domain.bi/turbosconto.it

I have no clue what it can be; we have no warning messages in Webmaster Tools or anything. For me it looks as if Google has a BUG and went crazy judging our Italian website. Or perhaps we are just missing something? If anyone could throw some light on this I will be really glad, and willing to pay some compensation for the help provided. THANKS A LOT!
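For reference, the robots.txt behaviour described above is expected rather than a bug: robots.txt blocks crawling, not indexing, so a URL Google already knows about can stay in the index as a bare "blocked" listing. To actually de-index those rating-system pages, they need to be crawlable and serve a noindex. A sketch of the two different controls (hypothetical path):

    # robots.txt - stops crawling; already-known URLs can remain indexed
    User-agent: *
    Disallow: /shops/

    <!-- to actually drop a page from the index, let it be crawled
         and serve a noindex in its head instead -->
    <meta name="robots" content="noindex">
-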
How can Google index a page that it can't crawl completely?
I recently posted a question regarding a product page that appeared to have no content. [http://www.seomoz.org/q/why-is-ose-showing-now-data-for-this-url] What puzzles me is that this page got indexed anyway. Was it indexed based on Google knowing that there was once content on the page? Was it indexed based on the trust level of our root domain? What are your thoughts? I'm asking not only because I don't know the answer, but because I know the argument is going to be made that if Google indexed the page then it must have been crawlable... therefore we didn't really have a crawlability problem. Why would Google index a page it can't crawl?
-
Googlebot vs Google mobile bot
Hi everyone 🙂 I seriously hope you can come up with an idea for a solution to the problem below, because I am kinda stuck 😕

Situation: a client of mine has a webshop located on a hosted server. The shop is made in a closed CMS, meaning that I have very limited options for changing the code: limited access to the page head, and within the CMS I can only use JavaScript and HTML. The only place I have access to a server-side language is in the root, where a Default.asp file redirects the visitor to the specific folder where the webshop is located.

The webshop has 2 "languages"/store views: one for normal browsers and Googlebot, and one for mobile browsers and the Google mobile bot. In the Default.asp (ASP classic) I test for the user agent and redirect the user to either the main domain or the mobile sub-domain. All good, right?

Unfortunately not. Now we arrive at the core of the problem. Since the mobile shop was added at a later date, Google already had most of the pages from the shop in its index, and apparently uses them as entrance pages to crawl the site with the mobile bot. Hence it never sees the Default.asp (or outright ignores it), and this causes, as you might have guessed, a huge pile of duplicate content.

Normally you would just place some user-agent detection in the page head and either throw Google a 301 or a rel-canonical. But since I only have access to JavaScript and HTML in the page head, this cannot be done. I'm kinda running out of options quickly, so if anyone has an idea as to how the BEEP! I get Google to index the right domains for the right devices, please feel free to comment. 🙂 Any and all ideas are more than welcome.
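One markup-only possibility that fits the head-only constraint is Google's bidirectional annotation for separate mobile URLs; a sketch with hypothetical URLs, assuming the desktop and mobile pages map one-to-one:

    <!-- on the desktop page -->
    <link rel="alternate" media="only screen and (max-width: 640px)"
          href="http://m.example-shop.com/page-1">

    <!-- on the corresponding mobile page -->
    <link rel="canonical" href="http://www.example-shop.com/page-1">

Because both tags are plain HTML in the head, no server-side language is needed, and Google then treats the two URLs as one page served to two device classes rather than as duplicates.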
-
Why is my site's 'Rich Snippets' information not being displayed in SERPs?
We added hRecipe microformat data to our site in April and then migrated to the schema.org Recipe format in July, but our content is still not being displayed as rich snippets in search engine results. Our pages validate okay in the Google Rich Snippets Testing Tool. Any idea why they are not being displayed in SERPs? Thanks.
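For comparison, a minimal schema.org/Recipe block in microdata (placeholder content) that the testing tool should recognise looks like this; it's worth confirming that the live pages, not just the test URLs, carry the itemscope/itemprop attributes:

    <div itemscope itemtype="http://schema.org/Recipe">
        <h1 itemprop="name">Simple Banana Bread</h1>
        <img itemprop="image" src="banana-bread.jpg" alt="Banana bread">
        <span itemprop="prepTime" content="PT15M">15 minutes prep</span>
    </div>

Note that validating is necessary but not sufficient: Google decides per site and per query whether to actually show the snippet, and new markup can take weeks to surface.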
-
Best solution to get mass URLs out of the search engines' index
Hi, I've got an issue where our web developers have made a mistake on our website by messing up some URLs. Because our site works dynamically (i.e. the URLs generated on a page are relative to the current URL), the problem URLs linked out to more problem URLs, effectively replicating an entire website directory under problem URLs. This has caused tens of thousands of URLs in the search engines' indexes which shouldn't be there.

So say, for example, the problem URLs are like www.mysite.com/incorrect-directory/folder1/page1/. It seems I can correct this by doing one of the following:

1/. Use robots.txt to disallow access to /incorrect-directory/*

2/. 301 the URLs one-to-one, like this:
www.mysite.com/incorrect-directory/folder1/page1/
301 to:
www.mysite.com/correct-directory/folder1/page1/

3/. 301 all the URLs to the root of the correct directory, like this:
www.mysite.com/incorrect-directory/folder1/page1/
www.mysite.com/incorrect-directory/folder1/page2/
www.mysite.com/incorrect-directory/folder2/
301 to:
www.mysite.com/correct-directory/

Which method do you think is the best solution? I doubt there is any link-juice benefit from 301ing the URLs, as there shouldn't be any external links pointing to the wrong URLs.
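If option 2 wins, the one-to-one mapping doesn't need thousands of individual redirects; a hedged Apache sketch (assumes mod_alias is available and the directory names really are this regular):

    # .htaccess - map every /incorrect-directory/ URL onto its
    # /correct-directory/ twin with a single permanent redirect
    RedirectMatch 301 ^/incorrect-directory/(.*)$ /correct-directory/$1

One caution worth knowing: option 1 works against options 2 and 3 if combined, because a robots.txt disallow stops crawlers from ever requesting the bad URLs, so they never see the 301s and the indexed entries linger.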