How to track iOS memory crashes

Photo by Fingle

I love being able to use HTML5 content within Jetpac, but hosting it in Apple's UIWebView component can use a lot of memory. That matters because iOS apps crash when they run out of memory, and to make things worse they crash so hard that you don't even get a report! The process is killed, even low-level exit handlers don't get run, and our code is shut down with no chance to do anything.

We do sometimes see low memory warnings, but these aren't as useful as you might think. They can occur in fairly benign circumstances and be cured by Javascript running a round of garbage collection, which means they aren't great predictors of crashes. They also don't always fire before there's a memory exhaustion crash, so we can't rely on taking preventative measures in those handlers.

To understand what's going on, I've added low-level OS instrumentation that tracks the free memory situation over time. I've combined it with our home-brewed Javascript function tracing to get quite a fine-grained view of which operations are using the most space, and we've found some fascinating issues, like simple Canvas drawing operations appearing to leak a full image's worth of memory every time!
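
For anyone curious what that instrumentation looks like, here's a minimal C sketch of the idea, using the Mach kernel statistics that are available from inside an iOS process. This isn't our production code, just the core of sampling the free page count:

#include <mach/mach.h>
#include <mach/mach_host.h>
#include <stdint.h>

// Ask the Mach kernel how many bytes of physical memory are currently free.
// Returns 0 if the call fails.
static uint64_t free_memory_bytes(void) {
    mach_port_t host = mach_host_self();
    vm_size_t page_size = 0;
    host_page_size(host, &page_size);

    vm_statistics_data_t vm_stat;
    mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
    if (host_statistics(host, HOST_VM_INFO, (host_info_t)&vm_stat, &count) != KERN_SUCCESS) {
        return 0;
    }
    return (uint64_t)vm_stat.free_count * (uint64_t)page_size;
}

Call something like that on a timer, or before and after interesting Javascript operations, log the results, and you end up with a free-memory timeline you can line up against the function traces.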

It's important for us to understand how widespread the crashes are in the wild though, and without crash reports we can't keep track of how well we're doing with our fixes. I've been talking to the folks at Crittercism, who we use and love for our general crash reporting, and they don't yet have a solution, but I did have a brain wave that I'm trying.

We have no chance to run code if the app crashes hard, but we do if the user deliberately quits by pressing the home button. We have an in-house activity log server, so if we fire off an event when a user starts up the app, and one when they deliberately close it, we can estimate how many times it crashed. We get great reports from Crittercism for normal crashes, so by subtracting those we can figure out roughly how many users are affected by this. The numbers will be a bit biased by lost connections (since we need to communicate with our log server to record an app close), but they'll be far better than nothing.
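
The arithmetic behind the estimate is simple. Here's the idea as a C snippet; the counts would come from our activity log server and Crittercism's reports, and the names are just illustrative:

// Sessions that started, never logged a deliberate close, and never showed
// up as a normal crash report are assumed to be hard out-of-memory kills.
long estimated_memory_crashes(long app_starts,
                              long deliberate_closes,
                              long reported_crashes) {
    long unexplained = app_starts - deliberate_closes - reported_crashes;
    return (unexplained > 0) ? unexplained : 0;
}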

I've submitted a new version of the app with the logging included, so I should have a better idea of how this works in practice within the next few weeks. Here's hoping it helps!

Five short links

Mural by Monte Thrasher

Heads by Monte Thrasher – Normally my short link images are side-notes, but the pentagonal helmet image led me to discover what I think is my favorite mural ever. Check out Twiggy, the world's ugliest dog, the inflatable skull, and much more.

Stately – US state outlines as a font, in their correct positions. I love the explosion of font hacks we've seen recently!

Carmen Geocoder – The MapBox folks have done some great detective work to find open data sources for this project. I'm looking forward to seeing how this might work together with the Data Science Toolkit.

R Language interface to the DSTK – On that topic, it was great to see Ryan Elmore release an interface to the toolkit in R.

Big Data – Beyond the Hype – An opinionated and thought-provoking exploration of where the data world is headed.

The dignity of customer service

Photo by Ricky Brigante

When I started my first job as a supermarket clerk, I dreaded going in. I needed the money to feed my weed and D&D habit though, so I gritted my teeth and dragged myself to work every Saturday. After a few weeks something strange happened – I found myself enjoying my time behind the checkout!

I'd grown up with the idea of any kind of service job as shameful and embarrassing. In Britain, there were still a lot of holdover attitudes from the Downton Abbey days of servants and lords. Somebody working in a job where they had to do things for people off the street risked losing face and sliding down the class hierarchy. The usual defense was surliness: "You're looking down on me? I'll show you I'm not at your beck and call!".

I was lucky enough to be at Tescos, which in the early '90s was the scrappy upstart of the supermarket world, and their killer advantage was a different approach to customer service. As my supervisor patiently explained to me:

"The women you'll see have just spent an hour dragging screaming toddlers around the store after a full day of work. They'll likely be in a foul mood by the time they reach your checkout, but don't take it personally. They're so focused on their own worries, they don't even see you! If they lash out, just listen to them patiently, and let them know they're being heard. They take their cues off you, so if you react calmly instead of being upset, most of them calm down too. And if you take the high ground and smile sweetly, you don't give the few nasty ones the satisfaction of getting to you. This is the hardest part of your job, so take pride in doing it well."

It was never an easy job, but I found there was a real dignity in it once I treated good customer service as something I could take internal pride in, rather than something shameful and servile.

I've been thinking about this a lot after reading Andrew Sullivan's posts on the subject. He's made the same journey from Britain to the US, and shares my joy in the American dedication to good customer service. A lot of his readers don't agree, complaining that it's soul-violating to be forced to "treat callers like royalty". I'm not trying to romanticize any entry-level job, but a professional attitude to customer interactions was the best armor I had against the emotional assaults that the general public inflicts. It gave me good boundaries, so I could approach an awkward customer as a problem to be handled, not a reflection on my own self-worth.

This became especially important once I moved to Manchester, and to pay for college spent a year working at an infamous chain called Kwik Save. It was known for its "No Frills" brand, and everything about the store lived up to the slogan. Even box cutters were precious items given to honored employees, while the rest of us improvised using keys or pens to poke through the sticky tape. Management also embraced a distinctly old-fashioned approach to customer service – "The people who shop here are scum, we have to treat them like scum" as the manager Mr Albinson put it!

I kept to my Tesco training as much as I could, but I found that working in an environment of bad customer service was far harder than my old job had ever been. Employees would argue and even yell at shoppers, and they'd get sucked into all sorts of petty disputes. Everyone left work a lot more upset than they ever did at Tescos. Losing the shield of professionalism made the inevitable friction with customers far more soul-destroying than it had to be.

That all means that articles like Timothy Noah's leave me hopping mad. A close member of my family is a long-time Pret employee with multiple awards for great customer service (which come as good chunks of cash, shared with the whole team), and he's not been brainwashed by a cultish corporation. He's a good guy working a tough job, and part of it is treating customers extremely well. Sure, Pret employees may not be allowed to have an off day, but only in the same way that they aren't allowed to drop the sandwiches on the floor. Doctors, lawyers, and anyone else who deals with the public have a work persona they need to adopt to do their job effectively. It's patronizing to assume that clerks and servers aren't making the same kind of tradeoffs as people in more prestigious professions.

Expecting everyone who wants to give you money to deal with your emotional baggage is a luxury few of us can afford. There are a lot of genuine problems out there, like the awful working conditions that many service employees have to suffer, but most companies that care about employees appearing happy have figured out that treating them decently is a big help. Don't take away the dignity of great customer service givers by assuming they're silently suffering as they smile, and need your protection. They're secure on their side of the professional barrier, and the most helpful thing you can do is give them the respect they deserve.

How do analytics really work at a small startup?

I was lucky enough to spend a few hours today with my friend Kevin Gates, one of the creators of Google's internal business intelligence systems, and it turned out to be a very thought-provoking chat. His mind was somewhat boggled that we were so data-obsessed at such an early stage in our company's life. Most people running analytics work at a large company and have a big stream of users to run experiments on. Our sample sizes are much smaller, which makes even conceptually simple approaches like A/B testing problematic. Just waiting long enough to get a statistically significant result becomes a big bottleneck.
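
To make that bottleneck concrete, here's a back-of-the-envelope calculation, written as a C snippet with made-up numbers, of how many users a classic two-group A/B test needs to detect a one-point lift in a 5% conversion rate at roughly 95% confidence and 80% power:

#include <math.h>
#include <stdio.h>

int main(void) {
    double p1 = 0.05;       // baseline conversion rate (hypothetical)
    double p2 = 0.06;       // conversion rate we hope the change produces
    double z_alpha = 1.96;  // two-sided 95% confidence
    double z_beta = 0.84;   // 80% power
    // Standard two-proportion sample-size approximation.
    double variance = p1 * (1.0 - p1) + p2 * (1.0 - p2);
    double n = pow(z_alpha + z_beta, 2.0) * variance / pow(p2 - p1, 2.0);
    printf("Users needed per group: %.0f\n", ceil(n));
    return 0;
}

With those numbers you need somewhere around eight thousand users in each group before you can trust the answer, which is a long wait when your whole active user base is in the tens of thousands.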

We've found ways around a lot of the technical issues, for example focusing on pre/post testing rather than A/B to speed up the process, but there's a bigger philosophical question. Is it even worth focusing on data when you only have tens of thousands of users?

The key for us is that we're using the information we get primarily for decision-making (should we build out feature X?) rather than optimization (how can we improve feature X?). Our quest is to understand what users are doing and what they want. Everything we're looking at should be actionable, should answer a product question we're wrestling with. To help answer that, I sketched out a diagram of how the information flows through our tools to the team:

[Analytics data flow diagram]

The silhouettes show where people are looking at the results of our data crunching. The primary things that everyone on our team religiously watches are the daily report emails, and the UserTesting.com videos that show ordinary people using new features of our app. The daily reports are built on top of our analytics database, which is a Postgres machine with a home-brewed web UI to create, store, and regularly run reports on the event logs it holds. We built this when our requirements expanded beyond KissMetrics' more funnel-focused UI, but we still use their web interface for some of our needs. Qualaroo is an awesome offshoot of KissMetrics that we use for in-app surveys, and we also refer to MailChimp's Mandrill dashboard and Urban Airship's statistics to understand how well our emails and push notifications are working. We have to use AppAnnie to keep track of our iOS download numbers and reviews over time.
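
To give a flavor of what one of those daily reports boils down to, here's a sketch in C using libpq. The database, table, and column names are hypothetical; the real reports are defined and scheduled through the web UI mentioned above:

#include <stdio.h>
#include <libpq-fe.h>

int main(void) {
    // Connect to the analytics database (connection details are illustrative).
    PGconn *conn = PQconnectdb("dbname=analytics");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "Connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    // Count how many distinct users fired each event yesterday.
    const char *report =
        "SELECT event_name, COUNT(DISTINCT user_id) AS users "
        "FROM event_log "
        "WHERE occurred_at >= CURRENT_DATE - INTERVAL '1 day' "
        "AND occurred_at < CURRENT_DATE "
        "GROUP BY event_name ORDER BY users DESC";

    PGresult *res = PQexec(conn, report);
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int i = 0; i < PQntuples(res); i++) {
            printf("%s,%s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
        }
    } else {
        fprintf(stderr, "Report failed: %s", PQerrorMessage(conn));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}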

We also have about twenty key statistics that we automatically add to a 'State of the App' Google Docs spreadsheet every day. This isn't something we constantly refer to, but it is crucial when we want to understand trends over weeks or months.

Over the last 18 months we've experimented with a lot of different approaches and sources of data, but these are the ones that have proved their worth in practice. It doesn't look the same as a large company's approach to analytics, but this flow has been incredibly useful in our startup environment. It has helped us make better and faster decisions, and most importantly spot opportunities we'd never have seen otherwise. If you're a small company and feel like you're too early to start on analytics, you may be surprised by how easy it is to get started and how much you get out of it. Give simple services like KissMetrics a try, and I bet you'll end up hooked!


How good are our geocoders?

Photo by Oatsy 40

My last post was a quick rant about the need for a decent open geocoder, but what's wrong with the ones we have? I've created a command-line tool to explore their quality: https://github.com/petewarden/geocodetest.

As a first pass, I pulled together a list of six addresses, some from my past and a few from spreadsheets users have uploaded to OpenHeatMap. The tool runs through the list (or any file of addresses you give it) and geocodes them through the DSTK and Nominatim, returning a CSV of whether each location is within 100m of Google's result. Run the script with -h to see all the options. Here are the results, produced by running ./geocodetest.rb -i testinput.txt:

dstk,google,nominatim,address
Y,Y,Y,2543 Graystone Place, Simi Valley, CA 93065
Y,Y,N,400 Duboce Ave, #208, San Francisco CA 94117
Y,Y,N,11 Meadow Lane, Over, Cambridge, CB24 5NF, UK
N,Y,N,VIC 3184, Australia
N,Y,Y,Lindsay Crescent, Cape Town, South Africa
N,Y,N,3875 wilshire blvd los angeles CA
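
The pass/fail column for each geocoder comes down to a simple distance check against Google's coordinates. The script itself is Ruby, but the core test amounts to this C sketch:

#include <math.h>
#include <stdbool.h>

#define EARTH_RADIUS_M 6371000.0

// Great-circle (haversine) distance between two lat/lon points, in meters.
static double distance_m(double lat1, double lon1, double lat2, double lon2) {
    const double to_rad = M_PI / 180.0;
    double dlat = (lat2 - lat1) * to_rad;
    double dlon = (lon2 - lon1) * to_rad;
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(lat1 * to_rad) * cos(lat2 * to_rad) *
               sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1.0 - a));
}

// A geocoder scores 'Y' for an address if its result lands within 100m of Google's.
static bool matches_google(double lat, double lon, double google_lat, double google_lon) {
    return distance_m(lat, lon, google_lat, google_lon) <= 100.0;
}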

The first three are standard test cases for me, so it's not a massive surprise that my DSTK (based on Schuyler Erle and GeoIQ's original work) works better than Nominatim for two of them. It does highlight one of the reasons I've struggled to use Nominatim though – it's not good at coping with alternative address forms. This makes it quite brittle, especially for addresses like those in the UK, where there are multiple common permutations of village, city, and county names. Nominatim doesn't return any results for #2 or #3 at all, when I'd hope for at least a town-level approximation.

The Australian postal code is about 30 km from Google's result, whereas the open GeoNames data in the DSTK gets me to within 400m of Google. Nominatim does much better on the South African address, since I haven't imported OSM data into the DSTK for anywhere but the UK. I did have to correct the original user-entered spelling of 'Cresent' first though, and I'd love to see an open geocoder that was robust to this sort of common mistake. The last address is another sloppy one, but we should be able to cope with that one too!

Part of the reason there hasn't been more progress on open geocoders is that the problems are not very visible. I hope having an easy test harness changes that, and while this first pass is far from scientific, it's already inspired me to put in several fixes to my own code. I'm a big fan of the effort that's been put into the Nominatim project (I'm using their OSM loading code myself); I'm just disappointed that the results haven't been good enough to build services like OpenHeatMap on top of. I'll be expanding this tool to cover more addresses and so build a better 'map' of how we're doing, and what remains to be done. I'm excited by the opportunities to make progress here, so I'll be busy working more on my own efforts, and I can't wait to hear other folks' thoughts too.

Why is open geocoding important?

Photo by Werner Kunz

A few years ago I had what I thought was a simple problem. I had a bunch of place names, and I needed to turn them into latitude and longitude coordinates. To my surprise, it turned out to be extremely hard. Google has an excellent geocoder, but you're only allowed to use it for data you're displaying on Google Maps, and there are rate limits and charges if you use it in bulk. Yahoo has an excellent array of geo APIs with much better conditions, but there are still rate limits, and their future was in doubt even then!

So, I ended up hacking up my own very basic solution based on open data. It turned out to be a fascinating problem, one you could spend a lifetime on, trying to draw a usable, detailed picture of the world from freely available data. I bulked up the underlying data and algorithms, and it became the core of the Data Science Toolkit. Turning addresses into coordinates may sound like a strange obsession, but it has become my white whale.

There are some folks who agree that this is an important problem, but I've been surprised there aren't more. Placenames describe our world, and we need an open and democratic way for machines to interpret them. Almost any application that uses locations needs to do this operation, and right now we have no alternative to commercial systems.

What are the practical impacts of this? We've got no control over what our neighborhoods are called, or how they're defined. We can't fix problems in the data that impact us, like correcting the location of our address so that delivery drivers can find us. We can't build applications that take in large amounts of address data unless we can afford high fees, which cuts out a whole bunch of interesting projects.

This is on my mind because I'm making another attack on improving the DSTK solution. I've already added a lot of international postal codes thanks to GeoNames, but next I want to combine the public domain SimpleGeo point-of-interest dump with OpenStreetMap data to see if I can synthesize address ranges for at least some more countries. That will be an interesting challenge, but if I get something usable it opens the door to adding more coverage through any open data set that combines street addresses and coordinates. I can't wait to see where this takes me!