How do you remove Authorship photos from your homepage?
-
Suppose you have a website with a blog on it, and you show a few recent blog posts on the homepage. Google sees the headline plus the "by Author Name" byline and associates the page with that author's Google+ profile.
This is great for the actual blog posts, but how do you prevent it from happening on the homepage or on other blog-roll pages?
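For context, here is a hedged sketch of the sort of homepage blog-roll markup that triggers this (the post title, author name, and profile URL are placeholders, not taken from the original question):

<!-- One entry in a homepage blog roll. The rel="author" link in the
     byline (or even a plain "by Author Name" next to the headline)
     is what Google associates with the Google+ profile. -->
<article>
  <h2><a href="/blog/sample-post">Sample Post Headline</a></h2>
  <p>by <a href="https://plus.google.com/112345678901234567890"
           rel="author">Author Name</a></p>
</article>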
-
I have a similar issue. For whatever reason, Google has decided our CEO (Glen Kelman) is the 'author' of some of our site pages. There is no author markup on the page anywhere. In fact, our CEO's name isn't anywhere on the page. Yet, in SERPs, he is the 'author' of our Seattle market page (you can likely see it by searching for 'seattle real estate' and looking for Redfin in the results).
Glen is a prolific blogger who not only posts to the Redfin blog, but also guest blogs on high profile sites around the web so it stands to reason that Google is very 'familiar' with him as an author. Moreover, he lives in Seattle so maybe Google is thinking, "Glen is from Seattle...he's the CEO of Redfin...he's a prolific author...Glen + Seattle + Redfin + Author = Glen is the author of the Seattle market page on Redfin!"
Any ideas on how to stop Google from making this mistake?
-
Hi Tom, thanks for the response, but that doesn't work.
There is no link to a Google+ profile anywhere on this page. The author, though, is verified via the domain, and the page includes a "by" byline, which is what's causing this.
Any other thoughts?
-
Hi Stephen
Basically, all you need to do is make sure that the rel=author code is not in the <head> tag of that page.
The code will look something like <link rel="author" href="https://plus.google.com/112656687930780652496"/> but obviously with the G+ profile URL that you are talking about.
If that code isn't on the page, then Google will not treat the page as having a verified author.
If you've gone a different way and linked with an actual anchor on the page, like <a href="https://plus.google.com/112656687930780652496" rel="author">Name here</a> - again, all you need to do is make sure that this link isn't present on the page, and the authorship markup won't be attributed to that page.
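To sum up, these are the two patterns to search for and strip from the homepage or blog-roll template (a sketch; the profile URL is the one quoted above, and yours will differ):

<!-- Pattern 1: a link element in the page <head> -->
<link rel="author" href="https://plus.google.com/112656687930780652496"/>

<!-- Pattern 2: a visible byline anchor in the page body -->
<a href="https://plus.google.com/112656687930780652496" rel="author">Author Name</a>

Remove whichever of these the homepage template outputs, while leaving them in place in the individual post templates so the posts themselves keep their authorship snippets.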
Hope this helps.
Related Questions
-
Removing the Trailing Slash in Magento
Hi guys, We have noticed trailing slash vs non-trailing slash duplication on one of our sites. Example:
Duplicate: https://www.example.com.au/living/
Preferred: https://www.example.com.au/living
So, SEO-wise, we suggested placing a canonical tag on all trailing-slash URLs pointing to the non-trailing-slash versions. However, the devs have advised against removing the trailing slash from some URLs with a blanket rule, as this may break functionality in Magento that depends on the trailing slash, and the full site would need to be tested after implementing a blanket rewrite rule. Is there any other way to address this trailing-slash duplication issue without breaking anything in Magento? Keen to hear from you guys. Cheers,
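One low-risk approach (a sketch using the example URLs from the question, not a tested Magento recipe) is to leave the trailing-slash URLs resolving exactly as they do now and add a canonical tag in the head of each trailing-slash page pointing at the non-trailing-slash version:

<!-- In the <head> of https://www.example.com.au/living/ -->
<link rel="canonical" href="https://www.example.com.au/living"/>

This consolidates the duplicate signals without touching Magento's routing, so nothing that depends on the trailing slash should break; a sitewide rewrite rule can then be evaluated separately.
-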
Duplicate Homepage - How to fix?
Hi Everyone, I've tried using BeamUsUp SEO Crawler and have found one warning and two errors on our site. The warning is for a duplicate meta description, and the errors are a duplicate page and a duplicate title. For each problem it's showing the same two pages as the source of the error, but one has a slash at the end and one doesn't. They're both for the homepage: https://www.url.com/ and https://www.url.com. Has anyone seen this before? Does anyone know if this is anything we should worry about?
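For what it's worth: for the bare homepage, https://www.url.com and https://www.url.com/ normally resolve to the same request (the path defaults to /), so this is usually a crawler-reporting quirk rather than real duplication. A canonical tag settles it either way (a sketch; www.url.com is the placeholder from the question):

<!-- In the homepage <head> -->
<link rel="canonical" href="https://www.url.com/"/>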
-
Removing Parameterized URLs from Google Index
We have duplicate eCommerce websites, and we are in the process of implementing cross-domain canonicals. (We can't 301 - both sites are major brands.) So far, this is working well - rankings are improving dramatically in most cases. However, what we are seeing in some cases is that Google has indexed a parameterized page for the site being canonicaled (this is the site that is getting the canonical tag - the "from" page). When this happens, both sites are being ranked, and the parameterized page appears to be blocking the canonical. The question is, how do I remove canonicaled pages from Google's index? If Google doesn't crawl the page in question, it never sees the canonical tag, and we still have duplicate content.
Example:
A. www.domain2.com/productname.cfm%3FclickSource%3DXSELL_PR is ranked at #35, and
B. www.domain1.com/productname.cfm is ranked at #12. (Yes, I know that upper case is bad. We fixed that too.)
Page A has the canonical tag, but page B's rank didn't improve. I know that there are no guarantees that it will improve, but I am seeing a pattern. Page A appears to be preventing Google from passing link juice via the canonical. If Google doesn't crawl page A, it can't see the rel=canonical tag. We likely have thousands of pages like this. Any ideas? Does it make sense to block the "clicksource" parameter in GWT? That kind of scares me.
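For reference, a sketch of the cross-domain canonical described in the question (domain names as given above): the tag must appear on every variant of page A, including the parameterized URLs, and Googlebot has to recrawl those URLs to see it:

<!-- In the <head> of the parameterized page on domain2 -->
<link rel="canonical" href="http://www.domain1.com/productname.cfm"/>

On the GWT question, the trade-off is that if the clickSource parameter is set to be ignored or not crawled, Googlebot may stop fetching those URLs and never see the canonical tag, which is presumably the source of the hesitation.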
-
Removing www from printed and digital
I wanted to make sure there will be no negative SEO implications if we change all 'www.fdmgroup.com' references (printed and digital). After some Googling, apparently some search engines regard www.fdmgroup.com and fdmgroup.com as two different websites and split SEO rankings (quote me if I'm wrong!). To date, a lot (if not all) of our online presence (e.g. adverts, banner links, etc.) uses www.fdmgroup.com (both visually and in HTML markup), so these would also need updating to remove the 'www'. What are your thoughts? For the sake of SEO and canonical/duplicate content/ranking issues, would changing all www.fdmgroup.com references have a negative effect?
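For what it's worth, the standard safeguard (sketched here with the domain from the question; which host to prefer is a business choice) is to declare one host as canonical on every page and 301-redirect the other host at the server level:

<!-- In the <head> of every page, if the bare domain is preferred -->
<link rel="canonical" href="https://fdmgroup.com/"/>

With that in place, updating printed and digital references becomes cosmetic: both forms of the address end up resolving and consolidating to the same canonical URL.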
-
Should I remove all meta descriptions to avoid duplicates as a short term fix?
I'm currently trying to implement Matt Cutts' advice from a recent YouTube video, in which he said that it is better to have no meta descriptions at all than duplicates. I know that there are better alternatives, but, if forced to choose, would it be better to remove all duplicate meta descriptions from the site than to keep the duplicates (leaving a lone meta description on the home page, perhaps)? This would be a short-term fix prior to making changes to our CMS to allow us to add unique meta descriptions to the most important pages. I've seen various blogs across the internet which recommend removing all the tags in these circumstances, but I'm interested in what people on Moz think of this. The site currently has a meta description which is duplicated across every page on the site.
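To illustrate the proposed short-term state (a sketch; the description text is hypothetical): keep a unique description only where one exists, and omit the tag everywhere else rather than duplicating it:

<!-- Home page: a single, unique description is kept -->
<meta name="description" content="Short, unique summary of the home page."/>

<!-- All other pages: no meta description tag at all, so search
     engines generate the snippet from the page content instead -->

This is consistent with the advice cited above: a snippet generated from the page tends to beat one sitewide duplicated description.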
-
Removing Content 301 vs 410 question
Hello, I was hoping to get the SEOmoz community’s advice on how to remove content most effectively from a large website. I just read a very thought-provoking thread in which Dr. Pete and Kerry22 answered a question about how to cut content in order to recover from Panda. (http://www.seomoz.org/q/panda-recovery-what-is-the-best-way-to-shrink-your-index-and-make-google-aware). Kerry22 mentioned a process in which 410s would be totally visible to googlebot so that it would easily recognize the removal of content. The conversation implied that it is not just important to remove the content, but also to give google the ability to recrawl that content to indeed confirm the content was removed (as opposed to just recrawling the site and not finding the content anywhere). This really made lots of sense to me and also struck a personal chord… Our website was hit by a later Panda refresh back in March 2012, and ever since then we have been aggressive about cutting content and doing what we can to improve user experience. When we cut pages, though, we used a different approach, doing all of the below steps:
1. We cut the pages
2. We set up permanent 301 redirects for all of them immediately.
3. And at the same time, we would always remove from our site all links pointing to these pages (to make sure users didn't stumble upon the removed pages).
When we cut the content pages, we would either delete them or unpublish them, causing them to 404 or 401, but this is probably a moot point since we gave them 301 redirects every time anyway. We thought we could signal to Google that we removed the content while avoiding generating lots of errors that way... I see that this is basically the exact opposite of Dr. Pete's advice and the opposite of what Kerry22 used in order to get a recovery, and meanwhile here we are still trying to help our site recover. We've been feeling that our site should no longer be under the shadow of Panda. So here is what I'm wondering, and I'd be very appreciative of advice or answers for the following questions:
1. Is it possible that Google still thinks we have this content on our site, and we continue to suffer from Panda because of this? Could there be a residual taint caused by the way we removed it, or is it all water under the bridge at this point because Google would have figured out we removed it (albeit not in a preferred way)?
2. If there's a possibility our former cutting process has caused lasting issues and affected how Google sees us, what can we do now (if anything) to correct the damage we did?
Thank you in advance for your help,
Eric
-
How do you achieve Google Authorship verification on a site with no clearly defined authors?
Google Authorship seems to be the current buzz topic in SEO. It seems perfect for people who write lots of articles or blog posts, but what about sites where the main focus isn't articles, e.g. e-commerce sites? Can the website as a whole get verified?
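One possibility worth hedging in here: for sites without individual authors, Google's site-level counterpart to authorship at the time was rel="publisher", which links the whole site to a brand's Google+ page rather than to a person (the profile URL below is a placeholder):

<!-- Sitewide, in the <head> of every page -->
<link rel="publisher" href="https://plus.google.com/+ExampleCompany"/>

Authorship (rel="author") stays per-person; publisher markup is the whole-site equivalent, so an e-commerce site can be verified as a brand even though no single author is defined.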