Rel="author" showing old image
-
I'm using http://www.google.com/webmasters/tools/richsnippets to test my rel="author" markup, which validated successfully, but I then decided to change my image on Google+ because it isn't what I want.
I changed my image on Google+ over 14 hours ago, and the Rich Snippets tool is still not showing the new picture. I know Google can take at least a couple of weeks to show changes in search results, but I thought the Rich Snippets tool was immediate.
Am I missing something here, or am I just impatient? I want my new photo to show.
-
Just an update: on October 15th, 2012 I added the rel="author" links to my various websites. Today, October 30th, 2012 (only 15 days later), I see my image live in the SERPs! Ya hoo!
And that's for all the websites I added it to, including my company site, my books website, and my blog subdomain. Thanks, SEOmoz, for an awesome suggestion. I really stand out now.
P.S. I just did my happy dance before writing this.
-
OK, I just checked again and my new image is now showing in Google's Rich Snippets tester. The not-so-good news is that my image doesn't show in the live results yet. At least I know the tool updates the image pretty quickly, but from what I've read, it can take a while to show up in the live SERPs, so I'll just have to be patient now and hope it appears soon.
-
I've never tested the turnaround time of the Rich Snippets tester, but actual Google SERPs for author markup change almost instantly. Have you checked a live search result for one of your articles to see how quickly it changes?
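For reference, the markup the tester validates is simply a link from the article to the author's Google+ profile. A minimal sketch — the profile URL and author name below are placeholders, not the poster's actual details:

```html
<!-- Option 1: a link element in the page's <head>.
     Replace the numeric ID with your own Google+ profile ID. -->
<link rel="author" href="https://plus.google.com/112345678901234567890/posts"/>

<!-- Option 2: a visible byline link in the page body. -->
<p>Written by
  <a rel="author" href="https://plus.google.com/112345678901234567890/posts">Jane Doe</a>
</p>
```

Either form is sufficient for the tester to pick up; the photo itself comes from the linked Google+ profile, which is why a profile-image change eventually propagates without any change to the page markup.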
Related Questions
-
How should I handle pages with "read more" query strings?
My site has hundreds of keyword content landing pages that contain one or two "read more" sections, which work by re-calling the page with a ChangeReadMore query-string variable. This causes each page to currently get indexed five times (see the examples below, plus two more with the anchor set to #sectionReadMore2). Google includes the first version of the page, which is the canonical version, and excludes the other four. Google Search Console says my site has 4.93K valid pages and 13.8K excluded pages. My questions are:
1. Does having a lot of excluded pages that are all copies of included pages hurt my domain authority or otherwise hurt my SEO efforts?
2. Should I add a rel="nofollow" attribute to the read-more link? If I do, will Google reduce the number of excluded pages?
3. Should I instead add logic so the canonical tag displays the exact URL each time the page re-displays in another read-more mode? I assume this would increase my "included pages" and decrease my "excluded pages". Would this somehow help my SEO efforts?
EXAMPLE LINKS
https://www.tpxonline.com/Marketplace/Used-AB-Dick-Presses-For-Sale.asp
https://www.tpxonline.com/Marketplace/Used-AB-Dick-Presses-For-Sale.asp?ChangeReadMore=More#sectionReadMore1
https://www.tpxonline.com/Marketplace/Used-AB-Dick-Presses-For-Sale.asp?ChangeReadMore=Less#sectionReadMore1
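On question 3: the usual pattern is the opposite — keep every ChangeReadMore variant pointing at the one clean URL rather than varying the canonical per mode, so Google consolidates all five versions. A sketch of what that self-referencing canonical would look like, using one of the example pages above:

```html
<!-- Served unchanged on every variant of the page, including
     ?ChangeReadMore=More#sectionReadMore1 and ?ChangeReadMore=Less#sectionReadMore1,
     so all query-string versions consolidate onto the clean URL. -->
<link rel="canonical"
      href="https://www.tpxonline.com/Marketplace/Used-AB-Dick-Presses-For-Sale.asp"/>
```

This is a sketch of the mechanism the question is asking about, not a verdict on which option is best for this site.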
Technical SEO | DougHartline
-
URL Question: Is there any value for ecomm sites in having a reverse "breadcrumb" in the URL?
Wondering if there is any value for e-commerce sites in featuring a reverse-breadcrumb-like structure in the URL. For example: https://www.grainger.com/category/anchor-bolts/anchors/fasteners/ecatalog/N-8j5?ssf=3&ssf=3, where a reverse categorization is happening, with /level2-sub-cat/level1-sub-cat/category in the reverse order of the page's actual location on the site:
Category: Fasteners
Sub-Cat (level 1): Anchors
Sub-Cat (level 2): Anchor Bolts
Technical SEO | ROI_DNA
-
How does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular:
1. Google continuously crawls websites and stores each page it finds (let's call this the "page directory").
2. The "page directory" is a cache, so it isn't the "live" version of the page.
3. Google has separate storage called "the index," which contains all the searchable keywords. The keywords in "the index" point to the pages in the "page directory" that contain them.
4. When someone searches for a keyword, that keyword is looked up in the "index," which returns all relevant pages from the "page directory".
5. The returned pages are then ranked by the algorithm.
The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website (and would the keywords in the "index" point to these URLs)? For example, if a web page is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding how the search process works.
Technical SEO | reidsteven75
-
How unique does a page need to be to avoid "duplicate content" issues?
We sell products that can be very similar to one another. Product example: Power Drill A and Power Drill A1. With these two hypothetical products, the only real differences between the two pages would be a slight change in the URL and a slight modification to the H1/title tag. Are these two slight modifications significant enough to avoid a "duplicate content" flagging? Please advise, and thanks in advance!
Technical SEO | WhiteCap
-
Penalization for Duplicate URLs with %29 or "/"
Hi there. Some of our dynamically generated product URLs are somehow showing up in SEOmoz as two different URLs even though they are the same page: one with the parenthesis percent-encoded (%28/%29) and one with the literal character, e.g. http://www.company.com/ProductX-(-etc/ Also, some of the URLs are duplicated with a trailing "/". Does Google penalize us for these duplicate URLs? Should we add canonical tags to all of them? Finally, our development team is claiming that they are not generating these pages, and that they are being generated from Facebook/Pinterest/etc., which doesn't make a whole lot of sense to me. Is that right? Thanks!
Technical SEO | sfecommerce
-
Should I change these "Overly dynamic URLs" ?
Hello, my client has pages that look like this: www.domain.com/blog/index.aspx?blogmonth=1&blogday=10&blogyear=2012&blogid=256 Question 1: SEOmoz says they are overly dynamic. Is that really the case here, given that the numbers indicate the year, month, and day and do not change? Question 2: Should we change the URLs to proper SEO-friendly URLs such as www.domain.com/keyword1-keyword2? The pages are already ranking well, and we worry that changing the URLs may damage the rankings. Do we risk the pages dropping in rankings by creating SEO-friendly URLs (and using a 301 to redirect from the old URLs)?
Technical SEO | DavidSpivac
-
How can I get the author's photo to show in the Google search result?
I added the rel="author" tags to the blog posts last week and updated the author page with a link to the Google+ account, but I have yet to see the author's photo surface in the Google results. Example URL: http://spotlight.vitals.com/2011/10/dr-richelle-cooper-testifies-against-dr-conrad-murray-in-trial/ Can anyone identify what else needs to be done?
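One thing worth checking: with an intermediate author page, Google historically required the full chain to be verifiable — post links to the author page, the author page links to the Google+ profile, and the Google+ profile lists the site under "Contributor to". A sketch of the on-site half; the author-page path and profile ID below are placeholders, not the actual URLs of this blog:

```html
<!-- On each blog post: a byline pointing at the on-site author page. -->
<a rel="author" href="/author/jane-doe/">Jane Doe</a>

<!-- On the author page (/author/jane-doe/): a link to the Google+ profile. -->
<a rel="me" href="https://plus.google.com/112345678901234567890/">Jane Doe on Google+</a>
```

If any link in that chain is missing (most often the "Contributor to" entry on the Google+ side), the photo will not appear even though the rich snippets tester may still validate the markup.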
Technical SEO | irvingw
-
Domain with or without "www"
Does it influence the search engine results if our domain name is used without the "www."?
Technical SEO | netbuilder