Rel="author" - This could be KickAss!
-
Google is now encouraging webmasters to attribute content to authors with rel="author". You can read what Google has to say about it here and here.
A quote from one of Google's articles:
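For reference, the basic markup is just a rel attribute on a link from the article to the author's page. A minimal sketch (the domain and author path here are hypothetical placeholders, not from Google's docs):

```html
<!-- On an article page: tie the byline to the author's bio page -->
<a rel="author" href="https://example.com/authors/jane-doe">Jane Doe</a>
```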
When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, though, and we’re constantly tweaking and improving our algorithm to improve overall search quality.
I am guessing that Google might use it like this: if you have several highly successful articles about "widgets", the author link on each of them lets Google know that you are a widget expert. Then, when you write future articles about widgets, Google could rank them higher than normal, because it knows you are an authority on that topic.
If it works this way, the rel="author" attribute could be the equivalent of a big load of backlinks for highly qualified authors.
What do you think about this? Valuable?
Also, do you think there is any way Google could be using this as a "content registry" that would foil some attempts at content theft and content spinning?
Any ideas welcome! Thanks!
-
I own a company and usually write my own blog posts, but not every time. When I don't, I pay to have them written and therefore own the copy. Can the author be a company, with the link pointing to the company's About Us page?
-
To anyone following this topic: there's a good thread at cre8asiteforums.com
-
Pretty sure both say they are interchangeable.
-
I was wondering if this is needed. Doesn't the specification at schema.org cover this? Or would Google use the schema.org author itemscope differently from rel="author"?
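For comparison, here's roughly how the two markups differ. The link-relation version is a single attribute on a link, while the schema.org version uses microdata itemscope/itemprop on the article itself. Both examples are sketches with hypothetical URLs and names:

```html
<!-- Link-relation approach -->
<a rel="author" href="https://example.com/about/jane">Jane Doe</a>

<!-- schema.org microdata approach -->
<div itemscope itemtype="http://schema.org/Article">
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    By <span itemprop="name">Jane Doe</span>
  </span>
</div>
```

Whether Google treats the two as equivalent signals is exactly the open question here.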
-
Right now, rel="author" is only useful with intra-domain URLs. It does not "count" if you are linking to other domains.
BUT...
In the future it might, so doing this could either give you a nice head start, or not. Time will tell.
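Assuming the intra-domain restriction described above, the setup might look like this: the article links to an author page on the same domain with rel="author", and that author page can point outward to off-site profiles with rel="me" to tie the identity together. All URLs are hypothetical:

```html
<!-- On https://example.com/article.html:
     rel="author" points to an author page on the SAME domain -->
<a rel="author" href="/about/jane">Jane Doe</a>

<!-- On https://example.com/about/jane:
     rel="me" links can point to the author's profiles elsewhere -->
<a rel="me" href="https://plus.google.com/123456789">My Google profile</a>
```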
-
I think it's a good idea and may open up some content syndication options that were discounted before...
In the past I have been firmly against content syndication - I want the content on my own site. However, if I think that the search engines are going to give me credit for doing it then I might do it when a great opportunity arrives.
-
I think it's a good idea and may open up some content syndication options that were discounted before (as per Dunamis's post). However, I've not seen the rel attribute do much for me.
Tagging links to social media sites with rel="me" has not helped those pages get into the SERPs for my brand (though I've not been super consistent about doing it). rel="nofollow" obviously had the rug pulled from under it a while ago, and I even once got carried away and tried linking language sites together with rel="alternate" hreflang annotations, but didn't get the uplift in the other-language versions I hoped for (though it was a bit of a long shot to begin with).
I'm just wondering how much value this is going to have. I still like it in principle and will attempt to use it where I can.
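For what it's worth, the language-linking markup mentioned above is written with an hreflang attribute on rel="alternate" links in the page head. A sketch with hypothetical URLs:

```html
<!-- In the <head> of the English page, pointing at alternate language versions -->
<link rel="alternate" hreflang="fr" href="https://example.com/fr/page" />
<link rel="alternate" hreflang="de" href="https://example.com/de/page" />
```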
-
Or, the other issue could be that scraper sites could grab content from a non-web-savvy site owner. If the original owner didn't use an author tag, the scraper site could slap their own author tag on it and Google would think they were the original author.
-
However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
Oh.... I really like that. I would like to see Google internally put a date on first publication. One problem some people might have is that their site is very new and weak, and content scrapers hit it more frequently than Googlebot does.
-
When I read it, I understood it to mean that the author tag was telling Google that I was the original author. (I actually thought of you, EGOL, as I know you have been pushing for a content registry.) Now, if someone steals my stuff I wouldn't expect them to put rel author on it. However, I can see a few ways the tag may be helpful:
-I recently had someone want to publish one of my articles on their site. I said no because I didn't want duplicates of my stuff online. But perhaps with rel author I could let another site publish my content as long as it is credited to me. Then Google will know that my site deserves to be the top listing for this content.
-If I have stuff that I know scrapers are going to get, I can use the rel-author tag. My first thought was that a scraper site could sneakily put their own rel author on it and claim it as theirs. However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
-
This might be helpful for you, especially if you can get the syndication sites to place author tags on the blog posts.
rel=canonical might also be worth investigating.
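If the syndication partners cooperate, a cross-domain canonical on their copies is the most direct way to point credit back at the original. A sketch, with both domains hypothetical:

```html
<!-- In the <head> of the syndicated copy on partner-site.com,
     pointing back at the original article -->
<link rel="canonical" href="https://example.com/blog/original-post" />
```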
I am also confused about this. I'd like to see more information from Google on exactly how these will be used - especially in cross-domain situations.
-
I actually have similar questions about this. The company I work for hosts a blog that is also syndicated across 4 to 5 other websites. The other sites have bigger reach on the web, and our blog isn't getting much direct traffic out of this. I have a feeling that adding author tags to our content will eventually pay off by showing that the content originates on our site and is then syndicated. I am interested/excited to see other ways this will be used. I think it's a great fix for the scraping issue and will hopefully reduce the need for Panda updates.