Rel="author" - This could be KickAss!
-
Google is now encouraging webmasters to attribute content to authors with rel="author". You can read what Google has to say about it here and here.
A quote from one of Google's articles:
When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, though, and we’re constantly tweaking and improving our algorithm to improve overall search quality.
I am guessing that Google might use it like this: if you have several highly successful articles about "widgets", the author link on each of them will let Google know that you are a widget expert. Then, when you write future articles about widgets, Google will rank them much higher than normal, because Google knows you are an authority on that topic.
If it works this way the rel="author" attribute could be the equivalent of a big load of backlinks for highly qualified authors.
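For anyone wanting to try it, Google's setup is a two-way link: the article links to an author page with rel="author", and the author page links out to a profile with rel="me". A minimal sketch (the names and URLs here are hypothetical):

```html
<!-- On the article page: byline linking to the site's author page -->
<a rel="author" href="https://example.com/about/jane-doe">Jane Doe</a>

<!-- On the author page: reciprocal link to a verified profile -->
<a rel="me" href="https://profiles.google.com/janedoe">My Google Profile</a>
```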
What do you think about this? Valuable?
Also, do you think there is any way that Google could be using this as a "content registry" that would foil some attempts at content theft and content spinning?
Any ideas welcome! Thanks!
-
I own a company and usually write my own blog posts, but not every time. When I don't, I pay to have them written and thus own the copy. Can the author be a company, with the link pointing to the company's About Us page?
-
To anyone following this topic... A good thread at cre8asiteforums.com
-
Pretty sure both say they are interchangeable.
-
I was wondering if this is needed. Doesn't the specification at schema.org cover this? Or would Google use the author itemscope differently from rel="author"?
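For comparison, schema.org expresses authorship with microdata item properties rather than a link relation. A rough sketch of the two approaches side by side (hypothetical names and URLs):

```html
<!-- Link relation: points at a separate author page -->
<a rel="author" href="/about/jane-doe">Jane Doe</a>

<!-- schema.org microdata: marks up the author inline as a Person -->
<article itemscope itemtype="http://schema.org/Article">
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Jane Doe</span>
  </span>
</article>
```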
-
Right now, rel="author" is only useful with intra-domain URLs. It does not "count" if you are linking to other domains.
BUT...
In the future it might, so doing this could either give you a nice head start, or not. Time will tell.
-
I think it's a good idea and may open up some content syndication options that were discounted before...
In the past I have been firmly against content syndication - I want the content on my own site. However, if I think that the search engines are going to give me credit for doing it then I might do it when a great opportunity arrives.
-
I think it's a good idea and may open up some content syndication options that were discounted before (as per Dunamis' post); however, I've not seen the rel attribute do much for me.
Tagging links to social media sites with rel="me" has not helped those pages get into the SERPs for my brand (though I've not been super consistent with doing it). rel="nofollow" obviously had the rug pulled from under it a while ago, and I even once got carried away and tried linking language sites together with rel="alternate" hreflang="xx", but didn't get the uplift in the other language versions that I hoped for (though it was a bit of a long shot to begin with).
I'm just wondering how much value this is going to have. I still like it in principle and will attempt to use it where I can.
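For reference, the two patterns mentioned above look roughly like this; note that the attribute for language alternates is hreflang, not lang (URLs hypothetical):

```html
<!-- rel="me": claiming a social profile as your own -->
<a rel="me" href="https://twitter.com/mybrand">@mybrand</a>

<!-- rel="alternate" hreflang: pointing to a language/region variant -->
<link rel="alternate" hreflang="fr" href="https://example.com/fr/" />
```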
-
Or, the other issue could be that content sites could grab content from a non-web-savvy site owner. If the original owner didn't have an author tag, then the content site could slap their own author tag on and Google would think that they were the original author.
-
However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
Oh... I really like that. I would like to see Google internally put a date on first publication. One problem some people might have is that their site is very new and weak, and content scrapers hit it with a higher frequency than Googlebot does.
-
When I read it, I understood it to mean that the author tag was telling Google that I was the original author. (I actually thought of you, EGOL, as I know you have been pushing for a content registry.) Now, if someone steals my stuff I wouldn't expect them to put a rel="author" on it. However, I can see a few ways the tag may be helpful:
-I recently had someone want to publish one of my articles on their site. I said no because I didn't want duplicates of my content online. But perhaps with rel="author" I could let another site publish my article as long as it is credited to me. Then Google will know that my site deserves to be the top listing for this content.
-If I have content that I know scrapers are going to take, I can use the rel="author" tag. My first thought was that a scraper site could sneakily put their own rel="author" on it and claim it as theirs. However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel="author", and therefore I'm likely the original owner. This is basically a content registry.
-
This might be helpful for you, especially if you can get the syndication sites to place author tags on the blog posts.
rel=canonical might also be worth investigating.
I am also confused about this. I'd like to see more information from Google on exactly how these will be used - especially in cross-domain situations.
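If the syndication partners cooperate, a cross-domain canonical would go in the head of each syndicated copy, pointing back at the original post (a sketch with hypothetical domains):

```html
<!-- On the syndicating site's copy of the post -->
<link rel="canonical" href="https://original-blog.example.com/widgets-guide" />
```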
-
I actually have similar questions about this. The company I work for hosts a blog that is also syndicated across 4 to 5 other websites. The other sites have a bigger reach on the web, and our blog isn't getting much direct traffic out of this. I have a feeling that adding author tags to our content will eventually pay off by showing that the content originates on our site and is then syndicated. I am interested and excited to see other ways this will be used. I think it's a great fix for the scraping issue and will hopefully reduce the need for Panda updates.