How do I redeem a tweet from Roger?
-
Last week, I reached over 100 mozPoints and I've been waiting by my virtual mailbox for days for a tweet from Roger.
Level: **Aspirant** | mozPoints: **100 - 199** | Benefits: **A tweet from Roger (the week you reach 100 MozPoints)**

The past few days have been sad. I've been moping around the office feeling terribly alone, wondering why I've received no Twitter-based recognition for my efforts. The radio has been playing 'Careless Whisper' on repeat and it hasn't stopped raining outside.
Can anyone help?
(Attached an image of my sadness for reference)
-
Hey Thomas,
It looks like your tweet did the trick and they've been tweeting a whole bunch of people - congrats!
I shall turn off the sad music now.
Sean
-
Let's see if tweeting Roger awakens him/her/it! https://twitter.com/thomasharvey_me/status/783958116316643328
-
I want a tweet from Roger too!!!!!!!!!!!!!!!!!
Maybe he is busy crawling your website to give you some useful tips and tricks.
I think he will be in touch soon on Twitter.
Related Questions
-
Where is my Hug from Roger?
I just remembered that when I reached 50 mozPoints I was supposed to receive a hug from Roger. So, I need my hug!
Moz Pro | ditoroin
-
Using the API to find likes, tweets, and shares
Hi all, I have a question about using the API to get data on a list of URLs, rather than one by one. I am not a developer, so I might not grasp the API so well. Basically I have a list of URLs, and for each one I need the following information: Page Authority, linking root domains, total links, Facebook shares, Facebook likes, tweets, and Google +1's. I found a Google Doc that gives me SOME of the data but not all (it might also be out of date, as it uses the Linkscape call and I don't know if that's changed). Any idea how I can tune my Google Doc spreadsheet to include the additional metrics? Thanks 🙂
Moz Pro | rightmove
-
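For anyone landing on rightmove's question above: the link metrics side of this can usually be scripted in one batch call rather than pulled cell by cell. Below is a minimal sketch assuming a Links/Mozscape-style endpoint that accepts a list of target URLs; the endpoint path, parameter names, and auth header are assumptions for illustration only, so check the current Moz API docs before relying on any of them. As far as I know, the social counts (Facebook shares/likes, tweets, +1's) come from the social networks' own endpoints rather than from Moz, so they would need separate calls.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and credentials for illustration only; the real
# Moz Links API path, auth scheme, and field names may differ by version.
API_ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"  # assumption
ACCESS_TOKEN = "YOUR_API_TOKEN"                           # assumption: replace with real credentials

urls = [
    "http://www.example.com/page-1",
    "http://www.example.com/page-2",
]

def fetch_link_metrics(targets):
    """POST a batch of URLs and return the parsed JSON response."""
    response = requests.post(
        API_ENDPOINT,
        json={"targets": targets},                            # batch of URLs in one call
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},  # assumed auth scheme
        timeout=30,
    )
    response.raise_for_status()  # fail loudly on auth/quota errors
    return response.json()

if __name__ == "__main__":
    data = fetch_link_metrics(urls)
    # Field names below are illustrative; inspect the real response first.
    for row in data.get("results", []):
        print(row)
```
-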
Shares and tweets
I've shared my blog to Facebook and tweeted it many times, but Link Research & Analysis with Open Site Explorer hasn't picked them up. Does this mean Google isn't picking them up either? What am I doing wrong?
Moz Pro | Ranj
-
Our Duplicate Content Crawled by SEOMoz Roger, but Not in Google Webmaster Tools
Hi guys, we're new here and I couldn't find the answer to my question, so here it goes: we had SEOMoz's Roger crawl all of our pages and he came up with quite a few errors (duplicate content, duplicate page titles, long URLs). Per our CTO, and using our Google Webmaster Tools, we informed Google not to index those duplicate content pages. Our long URL errors are redirected to SEF URLs. What we would like to know is whether Roger is able to tell that we have instructed Google not to index these pages. My concerns are: should we still be worried if Roger keeps crawling those pages while the errors are not showing up in our Webmaster Tools, and is there a way we can let Roger know so they don't come up as errors in our SEOMoz tools? Thanks so much, e
Moz Pro | RichSteel
-
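One practical point on RichSteel's question: an instruction given only inside Google Webmaster Tools is invisible to any other crawler, Roger included; a crawler can only see signals that live on the page itself (a robots meta tag or an X-Robots-Tag header) or in robots.txt. A quick sketch for checking what a page actually exposes is below; the URL is a placeholder and the meta-tag check is a naive string match, so treat it as a rough audit, not a definitive one.

```python
import requests  # pip install requests

URL = "http://www.example.com/duplicate-page"  # placeholder URL

def check_index_signals(url):
    """Report the on-page/HTTP signals a crawler could actually see."""
    resp = requests.get(url, timeout=30)

    # 1. HTTP header form of the directive.
    header = resp.headers.get("X-Robots-Tag", "")

    # 2. Meta robots tag in the HTML (naive string check; a real audit
    #    should use an HTML parser).
    html = resp.text.lower()
    has_meta_noindex = '<meta name="robots"' in html and "noindex" in html

    print(f"Status code:        {resp.status_code}")
    print(f"X-Robots-Tag:       {header or '(none)'}")
    print(f"Meta noindex found: {has_meta_noindex}")

if __name__ == "__main__":
    check_index_signals(URL)
```
-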
Roger keeps telling me my canonical pages are duplicates
I've got a site that's brand spanking new that I'm trying to get the error count down to zero on, and I'm basically there except for this odd problem. Roger got into the site like a naughty puppy a bit too early, before I'd put the canonical tags in, so there were a couple thousand 'duplicate content' errors. I put canonicals in (programmatically, so they appear on every page) and waited a week, and sure enough 99% of them went away. However, there are about 50 that are still lingering, and I'm not sure why they're being detected as such. It's an ecommerce site, and the duplicates are being detected on product pages, but why these 50? (There are hundreds of other products that aren't being detected.) The URLs that are 'duplicates' look like this according to the crawl report:
http://www.site.com/Product-1.aspx
http://www.site.com/product-1.aspx
And so on. Canonicals are in place, and have been for weeks, and as I said there are hundreds of other pages just like this not having this problem, so I find it odd that these ones won't go away. All I can think of is that Roger is somehow caching stuff from previous crawls? According to the crawl report these duplicates were discovered '1 day ago', but that simply doesn't make sense. It's not a matter of messing up one or two pages on my part either; we made this site to be dynamically generated, and all of the SEO stuff (canonical, etc.) is applied to every single page regardless of what's on it. If anyone can give some insight I'd appreciate it!
Moz Pro | icecarats
-
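If anyone else runs into icecarats' situation, one quick sanity check is to fetch both casings of a flagged URL and compare the rel=canonical each one returns; if they genuinely emit the same tag, the lingering errors are more likely stale crawl data than a markup problem. A rough sketch is below; the URLs are the placeholder pair from the question and the regex is only a heuristic, so treat it as a starting point rather than a full audit.

```python
import re
import requests  # pip install requests

# Placeholder URLs; substitute a pair flagged as duplicates in the crawl report.
URLS = [
    "http://www.site.com/Product-1.aspx",
    "http://www.site.com/product-1.aspx",
]

# Heuristic: only matches <link rel="canonical" href="..."> with rel before href.
CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def get_canonical(url):
    """Fetch a page and pull out its rel=canonical href (an HTML parser is more robust)."""
    resp = requests.get(url, timeout=30)
    match = CANONICAL_RE.search(resp.text)
    return match.group(1) if match else None

if __name__ == "__main__":
    canonicals = {url: get_canonical(url) for url in URLS}
    for url, canonical in canonicals.items():
        print(f"{url} -> canonical: {canonical}")
    if None not in canonicals.values() and len(set(canonicals.values())) == 1:
        print("Both casings point at the same canonical URL.")
    else:
        print("Canonical tags differ or are missing; worth fixing before the next crawl.")
```
-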
Help with Roger finding phantom links
It's Monday and Roger has done another crawl, and now I have a couple of issues. I have two pages showing 404->302 or 500 because these links do not exist. I have to fix the 500, but the 404 is trapped correctly. http://www.oznappies.com/nappies.faq & http://www.oznappies.com/store/value-packs/\ The issue is that when I do a site scan there is no anchor text that contains these links, so what I would like to find out is where Roger is finding them. I cannot see anywhere in the Crawl Report that tells me the origin of these links. I also created a blog on Tumblr, and now every tag and RSS feed entry is producing a duplicate content error in the crawl stats. I cannot see anywhere in Tumblr to fix this issue. Any ideas?
Moz Pro | oznappies
-
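For the first part of oznappies' question, it can help to look at exactly what those two URLs return, including any redirect hops, since a 404 that bounces through a 302, or a 500, will keep reappearing in crawl reports until the response itself changes. A small sketch using the URLs from the question (minus the stray trailing backslash):

```python
import requests  # pip install requests

# The two URLs reported in the crawl (taken from the question above).
URLS = [
    "http://www.oznappies.com/nappies.faq",
    "http://www.oznappies.com/store/value-packs/",
]

def show_response_chain(url):
    """Follow redirects and print each hop's status code."""
    resp = requests.get(url, allow_redirects=True, timeout=30)
    hops = list(resp.history) + [resp]  # history holds the intermediate redirects
    print(url)
    for hop in hops:
        print(f"  {hop.status_code}  {hop.url}")

if __name__ == "__main__":
    for url in URLS:
        show_response_chain(url)
```
-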
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file: Disallow: /catalog/product_compare/ Yet Roger is crawling these pages = 1,357 errors. Is this a bug or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled: <colgroup><col width="312"></colgroup>
Moz Pro | | MeltButterySpread
| example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/ Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks! |0
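A quick way to test MeltButterySpread's rule outside of the crawler is Python's built-in robots.txt parser: point it at the live robots.txt and ask whether the flagged URL is fetchable for Roger's user-agent token (rogerbot). The domain below is a placeholder, so swap in the real site. If this reports the URL as disallowed, the rule itself is fine and the stray crawls are worth raising with support; if it reports allowed, something in the file (the rule sitting under the wrong User-agent block, a typo, a stray BOM) is breaking it.

```python
from urllib.robotparser import RobotFileParser

SITE = "http://example.com"  # placeholder domain; use the real site
ROBOTS_URL = f"{SITE}/robots.txt"
TEST_URL = (
    f"{SITE}/catalog/product_compare/add/product/19241/"
    "uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/"
)

parser = RobotFileParser(ROBOTS_URL)
parser.read()  # fetch and parse the live robots.txt

# Check both Roger's token and the wildcard group.
for agent in ("rogerbot", "*"):
    allowed = parser.can_fetch(agent, TEST_URL)
    print(f"{agent}: {'allowed' if allowed else 'disallowed'} -> {TEST_URL}")
```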