See your friends’ connections with Mailana

[Screenshot of marshallk's connection graph]

Marshall Kirkpatrick has turned into one of my favorite people, not only helping spread the word about Mailana but also coming up with new ways to use it. Most precious of all, he's given me feedback about how to improve it.

One of his comments was that it's hard to see how your friends are grouped when they're all clustered around you. I've put in a fix so you can now remove yourself from the graph:

– Load your graph
– Single-click on yourself, and you should see a menu
– Choose 'Remove'

You should see the graph change shape, and your different groups of friends emerge more clearly. Previously everyone was orbiting around you; now they'll only be held together by conversations between themselves. That will likely mean some people fly off on their own because they have no connections to the other people you talk to, while other clusters emerge representing different groups. You may need to use the +/- zoom controls in the top left to explore the expanded view.

If you're a heavy Twitter user like Marshall, you might also like to try a hidden feature along with this. By default I only show your top 75 friends on the graph, but the 'showcount' URL option lets you increase that. Add showcount=150 to the end of your URL to double the limit, for example:

http://twitter.mailana.com/profile.php?person=marshallk&showcount=150

Now remove yourself from the graph and you should see a rich set of groups if you've got a lot of contacts on Twitter. Be warned though, the calculations involved put quite a strain on Flash, so be prepared for some slowdown!

I'll be looking at a more user-friendly way of exposing these options, possibly through a new advanced search panel, but for now give them a try and let me know what you think.

Why metrics can be dangerous


Edwin from Feedly kicked off a great discussion in the comments on Jeff's blog post about their suggestions for startup development. I actually really like their recipe; I'm a true believer in Customer Development and a stats fanatic. But I also know that metrics can be used for evil as well as good.

I'm worried that the important step is being glossed over; metrics are only as good as what you do with them. In Edwin's algorithm, the process and importance of measurement are covered in detail, but the action you're supposed to take based on them is vague. What if "validate that the changes you are making are improving the metrics you are tracking" contradicts the later advice "you want … to try to polarize"? What if a change decreases average satisfaction but makes a minority ecstatically happy?

The default action would be to do more of the things that raise your metrics, and less of those that lower them. In computer science we'd call this a greedy algorithm, and it may only lead to a 'local maximum' because it commits to choices too early. To picture this, imagine trying to get to the top of a mountain by only walking uphill. If the mountain were a perfect cone you'd get to the top, but if it's more complex you may get stuck at the top of a much lower foothill.
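To make that concrete, here's a toy sketch (the terrain numbers are invented purely for illustration): a walker that only ever steps onto higher ground stalls on the first foothill and never finds the real peak.

#!/bin/bash
# Toy greedy hill-climb: keep stepping forward while the next position is higher.
# The walk stalls on the foothill at height 5 and never reaches the true peak of 9.
heights=(1 3 5 2 4 6 8 9 7)
pos=0
while true; do
  next=$((pos + 1))
  if [ "$next" -lt "${#heights[@]}" ] && [ "${heights[$next]}" -gt "${heights[$pos]}" ]; then
    pos=$next
  else
    break
  fi
done
echo "Greedy walk stopped at height ${heights[$pos]} (the true peak is 9)"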

For another analogy, imagine trying to walk to the north pole by always following a compass north. You'd hit a lot of obstacles in the way, and would probably have to take some excursions east, west or even south to get to your goal.

This isn't an abstract discussion for me; a lot of my previous work has been at large companies where innovation is stifled because metrics make it impossible to propose something that reduces any metric, even where there's massive customer benefit. For an extreme example of this, see Tom Evslin's experiences at AT&T:
http://blog.tomevslin.com/2005/02/att_lesson_from…

The engineers were so fixated on the traditional "6 9's" of reliability as a metric (meaning no more than one in a million calls could fail) that they killed attempts to pioneer cell phones, internet access and VOIP.

I know from their great product that the Feedly team had to make hard choices to get where they are. What I really need are more war stories where founders had to take risks and do something that caused the metrics to drop in the short term in the hope of long-term gain. Those are the decisions I find difficult!

How to create a survey for your startup


T.A. from Gist has been a fantastic source of help and advice, and chatting with him this week reminded me how important it is to gather information on your customers. This is something I've known in the abstract but never properly put into practice. So today I built a simple process so people can sign up for my beta program and I can find out a bit about them. I used SurveyMonkey, which offers a basic service for free, but I went with the $200 annual package so I could customize the look and avoid some of the restrictions. Here's the result:
http://www.surveymonkey.com/s.aspx?sm=GAxrTx02sg8Or3c1J1cZYg_3d_3d

The goal of the questions is to learn more about people who are interested in Mailana, so I can both contact the right people about testing new features that integrate with other services they're using (e.g. Outlook, Gmail or LinkedIn), and learn what I should be building based on what they're demanding.

Actually creating the survey was straightforward through SurveyMonkey's interface, but choosing the right questions was tougher. I wanted enough information to be useful, but not so much that people would fail to complete it. The first page covers personal and employment information, so I can understand who's using Mailana. The second asks probably the most important question on the survey: how likely they are to recommend it to a friend. This rating can be used to build a net-promoter score, a crucial metric for understanding how your customers view your service. I also ask about the other services they use, so I can understand what the potential customer base is like for different kinds of integration, and who to contact about testing them.
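As an aside, the net-promoter arithmetic itself is easy to script once you've exported the responses. Here's a rough sketch, assuming a hypothetical scores.txt with one 0-10 recommendation score per line: promoters score 9-10, detractors 0-6, and the result is simply the percentage of promoters minus the percentage of detractors.

#!/bin/bash
# Rough net-promoter calculation from a file of 0-10 scores, one per line.
# NPS = (% scoring 9-10) - (% scoring 0-6).
awk '{
  total++
  if ($1 >= 9) promoters++
  else if ($1 <= 6) detractors++
}
END {
  if (total > 0)
    printf "NPS: %.0f (from %d responses)\n", 100 * (promoters - detractors) / total, total
}' "$1"

Run it as ./nps.sh scores.txt after pulling the recommendation column out of the export.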

The only downside with SurveyMonkey is that they don't offer an API to access the results, so I can't easily include any metrics in my daily report email; I'll just have to do frequent manual downloads. I've added a link on twitter.mailana.com, and I'm looking forward to hearing more from my customers through the program. If you're interested, please give it a try.

How to build a daily email report for free

[Photo by Luigi Chiesa]

The only way to tell if you're making progress is to measure it. If you're web-based and pre-revenue like me, then traffic and customer reactions are all you have to guide you. Happily, Google just released their Analytics API, and by combining that with Twitter you can create your own automated daily email report, absolutely free!

If you want to see the results, I'm publicly posting them on the Mailana Stats list:
http://groups.google.com/group/mailana-stats

There's also an RSS version here:
http://groups.google.com/group/mailana-stats/feed/rss_v2_0_msgs.xml

The result is an email sent daily that looks like this:

Subject: twitter.mailana.com daily report: 04-22-09
1673 visits, 1423 visitors
15 twitter messages mentioning mailana

You can grab the source here; it's a set of bash scripts tested on Fedora and OS X. To run it on your own site you'll need to follow these steps:

1- Install Google Analytics on your site
2- Download my scripts onto your Linux machine
3- Get your profile ID. Edit galistprofiles.sh with your Google email address and password, and then run it to see a list of the websites whose stats you have access to, along with each site's profile ID.
4- Enter your account details. Put your gmail address, password and the profile ID into the gatotalvisits.sh script.
5- Test your API access. Run ./gatotalvisits.sh and by default you should see a single number showing the number of visits. You can look at other stats by passing one of the many metric names as an argument to the script, e.g. ./gatotalvisits.sh visitors
6- Make sure you can email from the command line. The script needs a way to mail the results, and you may not have this set up on your Linux box by default. I ended up following this guide and using SMTP to send from my existing Gmail account.
7- Create a Google Group. You need somewhere to organize and publish your results. My list is public, but you can also create a private version with controlled access. This also gives you an RSS feed, though obscurely you get the link by clicking on the XML button at the bottom of the group's home page.
8- Pick Twitter search terms. I'm doing a daily search of Twitter for mentions of 'mailana', but you can edit emailstats.sh to look for your own keywords. Run ./latesttweets.sh <your keyword> to test.
9- Customize the email. Edit emailstats.sh to get the subject and message content in the form you want, and maybe add other stats you care about too.
10- Schedule it daily. I created a link to the script in the daily cron directory by running ln -s /vol/analytics/emailstats.sh /etc/cron.daily/emailstats.sh
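To give a flavour of what steps 8-10 add up to, here's a stripped-down sketch along the lines of emailstats.sh. It's not the exact script: the install path and the group address are placeholders, it assumes the command-line mail setup from step 6 gives you a working mail command, and it counts tweets by assuming latesttweets.sh prints one line per matching message.

#!/bin/bash
# Stripped-down daily report sketch: pull the Analytics numbers via the helper
# scripts, count matching tweets, and mail the summary to your stats list.
cd /vol/analytics || exit 1

TODAY=$(date +%m-%d-%y)
VISITS=$(./gatotalvisits.sh)
VISITORS=$(./gatotalvisits.sh visitors)
TWEETS=$(./latesttweets.sh mailana | wc -l)

SUBJECT="twitter.mailana.com daily report: $TODAY"
BODY="$VISITS visits, $VISITORS visitors
$TWEETS twitter messages mentioning mailana"

echo "$BODY" | mail -s "$SUBJECT" your-stats-list@googlegroups.com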

You should now have a fully automated daily report email. Let me know how this works for you; I'm avoiding full XML parsing to keep the scripts lightweight, so I'll be keeping an eye out for API changes that might break those assumptions.

What went wrong with Top Twitter Friends and how I fixed it

[Photo by Jef Poskanzer]

Last week I suddenly noticed the number of imported people on http://twitter.mailana.com/ increase dramatically. This was suspicious since it typically takes a minute or two to import a single person's messages from the Twitter API, so seeing 50,000 added in less than a day rang alarm bells. Simultaneously I got a small flood of emails from users whose profiles were showing up completely blank. Since this is usually the result of a failed import I checked the server logs and there were indeed lots of errors.

Unfortunately I was in the middle of moving house, so I had no time to investigate and fix the problem. Instead I took down the names of everyone who contacted me in my bug database, sent them notes so they'd know I was on the case, and then completely turned off all imports. This meant at least no further profiles would be corrupted before I could solve the problem.

Yesterday we finally completed the drive and got the internet running at our new place, so I could sit down and figure out what was going wrong. The immediate cause was this change to the Twitter API on April 9th. Previously I'd been able to use POST for all my calls, but now some would only work with GET. This limitation was always in the documentation but never enforced, so I hadn't spotted it.

That wasn't the true problem though – that sort of change happens all the time, and it shouldn't cause corrupted data and empty profiles. In the spirit of Eric's Five Whys, here's a root-cause analysis:

1. Why did people's profiles show up blank? The import failed and output bogus data when Twitter's API changed.
2. Why did the import fail and output bogus data? The errors weren't detected and handled correctly.
3. Why weren't the errors handled? The import code wasn't tested thoroughly enough.
4. Why wasn't it tested? There was no easy way to run a test.
5. Why was there no easy test? I'd never expected the Twitter import to be so heavily used, it was quickly written code reused from another project.

With that in mind, I worked back through the list today, starting with the deepest cause and addressing each layer of the problem in turn. Going in that direction is important because you want to leave the immediate cause until last, so you can verify that the deeper fixes actually do catch that problem.

5. This is a priority issue. I've been trying to juggle my work on email and Twitter simultaneously. That's given me too many top priorities, which really means I have no priorities. To fix that, I'm formally pausing my Exchange work for the next few months. Twitter has become a great platform to showcase my ideas. I still believe passionately that email is a killer application for this, but Twitter is a fantastic way to sell people on what I'm building; once I have people convinced, it will be a lot easier to persuade them to invest the time and trust needed to install my email version. This decision will let me give the Twitter code the resources it needs to shine.

4. I built a new unit test into the Twitter import script.

3. That test is now part of my routine whenever that code is changed.

2. I implemented entirely new error catching code. It now correctly halts the script whenever the API returns a fatal error, so no bad data is ever stored in the database. As a bonus, I also now catch temporary errors caused by server overload and the like, wait 10 seconds and retry a fixed number of times (there's a sketch of the pattern after this list). It's surprising, watching the logs, how many 502 errors I see!

1. Finally, I switched the API call from POST to GET, and got the import process rolling again.
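For the curious, here's roughly what that retry pattern looks like. The real import code isn't a shell script, so treat this purely as an illustration of the "halt on fatal errors, sleep and retry on temporary ones" behaviour, using curl against one of the public API calls.

#!/bin/bash
# Illustration of the retry pattern: succeed quietly, retry on temporary errors
# (502s from server overload, connection hiccups), and halt on fatal ones.
URL="http://twitter.com/users/show/petewarden.xml"
MAX_RETRIES=5

for attempt in $(seq 1 $MAX_RETRIES); do
  STATUS=$(curl -s -o response.xml -w '%{http_code}' "$URL")
  if [ "$STATUS" -ge 200 ] && [ "$STATUS" -lt 400 ]; then
    exit 0                               # success; response.xml holds the data
  elif [ "$STATUS" -ge 500 ] || [ "$STATUS" -eq 0 ]; then
    echo "Temporary error ($STATUS) on attempt $attempt, retrying in 10s" >&2
    sleep 10
  else
    echo "Fatal error $STATUS, halting so no bad data gets stored" >&2
    exit 1                               # e.g. 401 or 404: don't import garbage
  fi
done

echo "Giving up after $MAX_RETRIES attempts" >&2
exit 1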

That wasn't the end of it though. I still had a database with several thousand corrupted profiles. I'd implemented a manual method to force a reimport for users I knew about, but I had to switch to something automatic to handle that number of problems. Another issue was that there was no easy way to identify all the affected users, thanks to the way I'm storing the data.

I settled on detecting when a blank profile is loaded, displaying an error message at that point and forcing a full reimport. This is far from ideal, but with the reimport bumped to the top of the queue it should only take a few minutes. If your profile was previously showing up blank, please give it another try; hopefully this will fix the problem for you.

Thanks to everyone who helped me with bug reports on this one, and sorry for those caught with empty graphs for the last week. As always, please let me know about any other issues you're hitting.

Boulder rolls out the welcome mat

[Photo: walking in the snow]

We just finished the two-day drive from Los Angeles to Boulder, with 3 cats and a dog in my hatchback. It was an exhausting but beautiful journey, especially the Virgin River gorge and pretty much all of Utah. It was a shame it was dark by the time we drove from Grand Junction to Vail; we could tell it must be a wild ride by the way they'd had to build the freeway! Luckily we have one more trip to make that we'll try to time better.

The day after we arrived, the snow started to fall, leaving Boulder looking Christmas-card perfect. One of our big worries has been how our dog Thor will cope with the cold; he's not quite a Beverly Hills Chihuahua, but he's definitely grown up used to the southern California weather.

[Photo: Thor in the snow]

At first he was a bit freaked out by the white stuff that kept landing on his nose, but once he discovered the deer, squirrels and foxes that kept crossing our path while we walked, he had a wonderful time.

This is a fantastic start to our Colorado adventure, thanks to everyone who helped arrange this unseasonal snow. I'm also thankful we weren't trying to make it over the Vail Pass on I70 yesterday!

[Photo: snow-covered tree]

How non-programmers can use the Twitter API

[Photo by Heather]

I recently got an email from a graduate student asking for help. He wasn't really a programmer but wanted to get someone's followers and friends in .csv form to feed into his analysis tools. One of the joys of the XML REST API that Twitter uses is that it's all human-readable, so you can get a long way without coding. Here's a quick guide.

The best place to start is this web page:
http://apiwiki.twitter.com/REST+API+Documentation

That lists all the information you can get from Twitter about other users. All of the API calls are actually web addresses, so you can work with them without using any code. For example, if you type this into the address bar of Firefox or Safari (not tested on IE) you'll see a formatted list of the IDs of all my friends:
http://twitter.com/friends/ids.xml?screen_name=petewarden

The same idea works for followers:
http://twitter.com/followers/ids.xml?screen_name=petewarden

The output is actually XML, but you don't need to understand the details to pull out simple information. It's not quite CSV, but hopefully with some simple text replacement hacking you might be able to convert it to the form you want. For example you could copy and paste from Firefox and use your favorite text editor to remove "<id>", "</id>", "<ids>" and "</ids>". You'd then have a text file with just an id number on each line.

The next issue would be mapping those ids to names. To do that you'll need another call, replacing 4411041 with the number you want:
http://twitter.com/users/show/4411041.xml

In Firefox you'll see another page of XML. To get the information you need, search for the line with "<screen_name>".

You'll also probably want to do this from the command line to automate the process, which will mean some coding. I recommend using a unix-y system like OS X or Linux, since they give you access to curl to fetch the web page rather than manually copying it from the browser, and to text processing tools (e.g. Perl or your favorite script language). If you're stuck on Windows, wget and .bat files may be an alternative, though it won't be pretty.
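If you do go the command-line route, here's a rough sketch of the whole pipeline using curl and standard text tools. The exact grep and sed incantations are just one way to strip the tags, and the sleep is only there as a nod to the API's rate limits.

#!/bin/bash
# Rough sketch: pull a user's follower IDs, strip the XML down to one numeric
# ID per line, then look up each screen name. Adjust the tag-stripping to taste.
USER=${1:-petewarden}

curl -s "http://twitter.com/followers/ids.xml?screen_name=$USER" \
  | grep -o '<id>[0-9]*</id>' \
  | sed 's/<[^>]*>//g' > follower_ids.txt

while read -r ID; do
  curl -s "http://twitter.com/users/show/$ID.xml" \
    | grep -o '<screen_name>[^<]*</screen_name>' \
    | sed 's/<[^>]*>//g'
  sleep 1    # be gentle with the API's rate limits
done < follower_ids.txt > follower_names.txt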

Another wrinkle is that some of the calls require logging in. One way to do that is to include your username and password in the URL, something like this:
http://username:password@twitter.com/…
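If you're fetching with curl, its -u username:password option does the same job without putting the password into the address itself.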

Understanding your introvert

[Photo by Difusa]

Most people imagine that 'introvert' is a synonym for 'shy', but it's more accurate to say that an introvert is someone who is recharged by time spent alone and drained by company. That definition fits me – it's not that I lack social skills or dislike time spent with others, I just have limited stamina for social gatherings and thrive on one-to-one conversation and quiet time.

When I saw this old Atlantic article on caring for your introvert it rang very true. I don't agree with the complaints about oppression – in the computer industry introverts are well catered for – but if people understood more about introversion life would be a little easier. My friends have got used to the fact I like nothing better than just quietly hanging out with them, but that's confusing for a lot of people I meet. I'm not shy, and don't mean to come across as arrogant or stand-offish; I just lean towards a high signal-to-noise ratio in conversation, which means a high thinking-to-talking ratio too!

Video of my NewTech talk

Craig Kendall has just posted video of my talk at the Boulder/Denver NewTech Meetup. It’s a five minute tour through what I’m building with Mailana, followed by some very sharp audience questions.

Thanks to Craig for doing all the production, and Robert Reich for organizing the show. It’s a fantastic resource for the Colorado tech scene, building the community that’s a big part of why I’m moving. I look forward to attending a lot more.

Life beyond death

[Photo by Bill McIntyre]

A lot of my favorite technologies are allegedly dead. You don't get more unfashionable than Usenet, but I was just able to get an answer to my story ID request in under 10 minutes from rec.arts.sf.written! Show me a web tool that can match that. Email's another technology that's been written off, but it's still the central electronic communication channel for most people.

I'm not a mindless luddite, I love shiny new toys as much as the next geek, but I try to learn from how people actually use computers, rather than how I'd like them to. I remember an Enterprise 2.0 technologist describing how he used a wiki to create all his documents, and was frustrated that the rest of his company wouldn't do the same. He didn't get that while wikis are a great innovation, as word processors they suck.

Users generally push back on changes for a reason. If you're getting strong resistance that seems senseless, that just means you don't understand their requirements well enough. Go back and stare deeply at how they use the old solution. You'll usually see why they keep dragging that corpse around.