Rel=author: Which Google+ profile do I use (personal profiles or profiles set up under company email domain)?
-
Since our organization uses Google Apps for Business, everyone in our org has a Google account under our company's domain name. When Google+ came out, a lot of our employees set up two separate Google+ accounts (one under their work email address and one under their personal email address). Some people use one account more than the other.
I'm about to set up rel=author on our blog, but I'm not sure which profiles to link to: personal account, business account or the account the individual uses the most?
-
I think it's possible that the company would "lose out," but that's not necessarily the case.
If I have successfully convinced the algorithm that I am an expert on widget maintenance, my articles about widget maintenance will get a rankings boost. Then I leave the widget-maintenance industry. The algorithm still believes that I'm an expert in that niche ... for a while, at least. It's quite likely that the algorithm's confidence in my expertise will decay over time if I no longer engage with that niche. My AuthorRank may drop, and the content I authored may no longer get the AR rankings boost. How far the content would fall in the SERPs depends on how much it was relying on that one ranking factor.
-
I don't see why an account tied to a particular e-mail address would have an advantage in establishing AuthorRank. But keep in mind, we're all still guessing here.
G+ does have a one-account-per-person rule. I have no idea how much they enforce it. Considering how aggressively they enforce the no-pseudonym rule, I would guess that they take all of their rules pretty seriously.
-
In the context of this question and conversation, what about using rel=publisher for the brand voice and rel=author for specific individuals?
Katie, what happens if someone who has built up great AuthorRank via their personal account leaves the company? It seems the company would lose out in that scenario. Very curious to hear everyone's take. From personal experience, I can tell you that connecting one's personal Google account to a specific brand can make for a big mess when someone moves to another company.
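In case it helps to see the publisher/author split concretely, the two tags would look something like this (the Google+ URLs here are placeholders, not real pages):

```html
<!-- Site-wide, usually in the <head>: ties the site to the brand's Google+ page -->
<link rel="publisher" href="https://plus.google.com/+ExampleBrand"/>

<!-- In each post's byline: ties the article to the individual author's profile -->
<a rel="author" href="https://plus.google.com/112233445566778899000">Jane Doe</a>
```

That way the brand voice and the individual bylines are declared separately, and an author leaving the company wouldn't touch the publisher link.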
-
Thanks for the clarification and reply. I've updated my original question; you were correct, I meant the rel=author setup.
From what I understand, then, it's best to link to active Google+ profiles. I was also curious whether there were any SEO benefits to linking to a domain-based account, but it sounds like what really matters is which account is actively engaging.
Does anyone know if Google has made any statements on having multiple Plus accounts? My assumption is that they'd rather people have one identity. It has caused great confusion within our organization. No one knows which profiles to really use.
Luckily, not many people have started using the domain-based accounts at our organization, so I think I'll go ahead and encourage our employees to use their personal accounts moving forward.
-
This is all related to another question I had about Authorship / AuthorRank:
Google Webmaster Tools will show you author stats for the sites that you are a verified contributor to. So if you have GWT linked to the same account your Google+ profile is on, you can see your own authorship stats.
But what about a corporate or client's site that you're working on, with multiple contributors? Is there a way of monitoring the impact that all your contributors' AuthorRank is having?
A common scenario will be websites commissioning content from freelancers with high AuthorRank. As the client or the agency, how will we monitor the impact of that AuthorRank without access to each individual's GWT?
This would make a very handy addition to SEOmoz's Research Tools...
-
I agree....
Something else to consider....
If you are a great author but your employer has you writing quick-and-dirty summaries to stay within budget, then you don't want to stink up your personal reputation by claiming them.
-
I think the terminology here may be a bit muddled.
AuthorRank is not something you "set up on your blog." It's a ranking factor that Google has patented and may be implementing some time soon. The thing you set up on your blog is the rel=author markup.
I'm not correcting you to be pedantic, but because it's important that you understand what AuthorRank actually is so you can make the best decision. AuthorRank is basically the answer to the question "How much should I trust what this author has to say about this subject?" Google will determine that based on your social profile on Google+. If you want Google to think you're a trustworthy expert on widgets, you need to engage with other widget enthusiasts and widget experts on Google+, and they need to engage with you.
You can use rel=author to connect your content to an inactive Google+ profile, and that will give you a pretty picture on the SERP and maybe help with CTR, but it will not help with AuthorRank. AuthorRank will only come from an active Google+ profile.
I'm not sure if it's a good idea or a bad idea to keep a personal G+ account and a professional G+ account. On the one hand, if all you use your professional G+ account for is engaging in your niche, that could be a strong sign that you're really into that subject. On the other hand, if your professional G+ account never has any off-topic, personal activity, that could ping Google as inauthentic.
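For anyone setting this up, the on-page half of the markup is just a link with a rel="author" attribute (the profile URL below is a placeholder); verification only completes once the Google+ profile also links back to the site from its "Contributor to" section:

```html
<!-- In the post byline, linking to the author's Google+ profile -->
<a rel="author" href="https://plus.google.com/112233445566778899000">Jane Doe</a>

<!-- Alternatively, site-wide in the <head>, pointing at the profile directly -->
<link rel="author" href="https://plus.google.com/112233445566778899000"/>
```

Which profile goes in that href is exactly the personal-vs-work-account decision being discussed here.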
-
This is a great question. I just read a blog post by Tom Critchlow about how Distilled uses Google+ for all internal communications and how they had to deal with the same issue. Here's a link to the post: http://tomcritchlow.com/private-google-plus-engagement
I would set up rel=author on the blog with the business accounts, have everyone use their personal accounts for internal communications, and just make sure that everyone's circles are set up accordingly. In other words, you want blog readers circling the business account for the blog, not necessarily individuals' personal accounts.
I'd love to hear what other people think because I think there is mass confusion over this specific issue.