How to debug Javascript errors on iOS

Error
Photo by Nick J Webb

There are lots of advantages to developing for iOS devices in Javascript, either as a mobile website or through a native app that hosts a UIWebView. Debuggability is definitely not one of them though! You'll find yourself flying blind when you need to track down errors, especially compared to the awesome state of browser debuggers. There are techniques that can help though, so I wanted to give a quick overview of what we've ended up doing for Jetpac.

Local logging

If you're targeting Mobile Safari, it's comparatively easy to see your error messages while you're debugging: just enable the debug console in Safari's settings. It gets trickier with a UIWebView though, and we ended up using this custom URL scheme hack (which requires some native code changes) to get log messages appearing in the device console. It's also worth knowing that you can view the console even when you didn't run the app through the debugger (for example if you've installed it through the App Store) by plugging the device in and looking in Organizer->Devices. You can even buy apps that let you view the console natively, which should make you think twice about putting any private information you don't want other apps to access into log messages!
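The details of that hack vary between implementations, but the JavaScript half usually boils down to something like this sketch (the jsconsole:// scheme name is made up, and the native side has to catch the request in the UIWebView delegate's shouldStartLoadWithRequest: method, decode the message, and pass it on to NSLog):

// Reuse a single hidden iframe so we don't leak DOM nodes on every log call.
var nativeLogFrame = null;

function logToNative(message) {
  if (!nativeLogFrame) {
    nativeLogFrame = document.createElement('iframe');
    nativeLogFrame.style.display = 'none';
    document.body.appendChild(nativeLogFrame);
  }
  // Navigating to the fake scheme is what triggers the native delegate, which
  // should NSLog the decoded message and return NO to cancel the load.
  nativeLogFrame.src = 'jsconsole://log?message=' + encodeURIComponent(message);
}

logToNative('Hello from inside the UIWebView');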

Web inspector

You should check out the new iOS 6 remote debugger, which works with both Safari and UIWebView code. It's been extremely useful for digging into CSS issues, and saved our bacon when tracking down some weird script loading problems.

Catching errors in the wild

The most challenging part is getting information on problems that are happening to users with the released app. If you can't reproduce the issue locally with a device plugged in, how can you tell what went wrong?

The first step is attaching a callback to window.onerror, which will be called whenever there's an uncaught exception. In iOS 5 you only get the error message, not the file or line, and for various reasons we've had to minify and inline our code anyway, so iOS 6's addition of the line number and file name isn't very helpful. What we really need is the call stack, which just doesn't get returned in any form on Mobile Safari.

Scarily, Javascript is such a flexible language that it's possible to do a crazy level of modification of the function calling internals, enough to write user-level tracing for every function! I actually got a version of this Function.prototype hack partially running as an experiment, but the breadth of it scared me. I also realized that I didn't need every function in the call stack, I mostly just wanted to know what part of my code had triggered the problem. What I ended up doing was manually wrapping functions I know about, and outputting information about them in onerror(). It's still an extremely hacky hack, but it's been very useful as we've been tracking down tough release problems, so here's the code I ended up using:
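A simplified sketch of the approach looks something like this; treat the details (and the made-up module names at the end) as illustrative rather than exact production code:

var callstack = [];

function pushOntoCallstack(name) {
  callstack.push(name);
}

function popOffCallstack(name) {
  callstack.pop();
}

// Replace every function property of the given object with a wrapper that
// calls 'before', then the original function, then 'after'.
function wrapFunctions(object, objectName, before, after) {
  for (var name in object) {
    if (!object.hasOwnProperty(name) || typeof object[name] !== 'function') {
      continue;
    }
    (function(original, fullName) {
      object[name] = function() {
        before(fullName);
        var result = original.apply(this, arguments);
        // If the original throws, we never reach this line, so the entry is
        // still sitting on the callstack when window.onerror fires.
        after(fullName);
        return result;
      };
    })(object[name], objectName + '.' + name);
  }
}

// Report uncaught exceptions, along with the homemade call stack, to the
// server-side /jserror endpoint.
window.onerror = function(message, file, line) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/jserror', true);
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.send('message=' + encodeURIComponent(message || '') +
           '&file=' + encodeURIComponent(file || '') +
           '&line=' + encodeURIComponent(line || '') +
           '&callstack=' + encodeURIComponent(callstack.join(' > ')));
  callstack = [];
};

// Wrap the objects we care about - these module names are hypothetical.
// wrapFunctions(app.profile, 'app.profile', pushOntoCallstack, popOffCallstack);
// wrapFunctions(PhotoView.prototype, 'PhotoView', pushOntoCallstack, popOffCallstack);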

This won't run out of the box, but it should give you an idea of what we're doing. As part of our server-side code we have an error-reporting endpoint, /jserror, that we post the details of any release errors to, and it sends an email on to the team.

The heavy lifting happens in the wrapFunctions() call, which replaces each function in an object with a wrapper that first calls the supplied 'before' function (in our case just pushing onto the callstack), then the original function, followed by 'after'. There are no guarantees the code is correct in all cases (the prototype stuff especially scares me), but it has worked in practice on our code base.

I tend to use this pretty sparingly to wrap our own code, rather than jQuery or other frameworks, since most of the errors are in our functions, and I'm worried about sprinkling too much voodoo over our code base. Despite those caveats, it's been a massive help in tracking down our issues.

Security by silo

Silo
Photo by Trey Ratcliff

A while ago I was having drinks with a Google employee, and we started discussing privacy problems. He asked me why Buzz had received so much bad press for its email analysis when Facebook and other social networks had been doing the same thing for years. He also pressed me on why the iPhone tracking story had become such a big issue.

People have a mental model of what devices and services are for, and get freaked out when someone changes the rules. Nobody understands constantly-changing space-shuttle-control-panel privacy settings within services, but everyone knows that LinkedIn is for business relationships, and Facebook is for friends. Users try to protect their privacy by limiting information to sites that serve the audiences they want it to reach.

When Google changed from an email and search provider to a service that could broadcast semi-public updates to users' friends, it became unclear where information they'd previously shared would end up. When Apple switched from a phone and computer builder to something that followed your movements, that crossing of boundaries was the real problem. Nobody would have blinked an eye at the idea of a Garmin device keeping a file showing where you'd been.

If you're worried about how users will react to something innovative you're trying, think about how they understand your purpose. Why did they sign up for you in the first place? Ignore the grand vision in your head: what do they think you do? If what you're doing makes sense for that goal, you'll be surprised at how generous and supportive they can be, even for potentially scary applications. If you're working towards something they don't expect, if you're moving outside of the silo they think you're in, you may be in trouble!

Five short links

Ruffle
Photo by Philip Chapman-Bell

The Normal Well-Tempered Mind – I never knew the AI community had a favorite philosopher, but I can see why Daniel C Dennett is it. There are so many ideas in this conversation that made me think about how our minds work in a very different light. Even better is his disclaimer: "Everything I just said is very speculative. I'd be thrilled if 20 percent of it was right." That's an attitude I'll try hard to emulate.

Space Station Challenge – Figure out how to eke more power out of the solar panels by carefully changing their positions over an orbit. It's all the constraints that make this coding challenge so much fun.

Understand the favicon – A purist would be appalled, but the hacker within me loves how we're learning to push the limits of what's possible thanks to a deep understanding of platform quirks. Like the space station challenge, a complex but ultimately understandable set of constraints makes fertile ground for artful programming. Check out this beautiful subversion of browsers' text rendering engines if you're into that sort of thing.

Pulse Tech Talk 2 – One of the best things about living in San Francisco is the plethora of great tech talks on your doorstep. Check out AirBnB's series too, they have some mind-blowing speakers.

Love and other conspiracies of the X-Files – I have a confession – I've watched all nine seasons, and I'm gearing up to rewatch them soon. They're not all good in a conventional sense, but almost every episode is interesting, and Josh captures some of the roots of why they could be so compelling.

Does reality improve when your numbers do?

Hockeystick
Photo by Judy and Ed

I had a tough meeting with an advisor this week. I was proudly showing off how we've managed to triple the amount of time that first-time users spend on Jetpac, when he interrupted. He wanted to know why he should care. It forced me to quickly run him backwards through our decision-making process, looking at why we'd chosen that as one of the numbers we wanted to improve. We'd started there because we noticed that our most successful users, those who enjoy the app enough to keep coming back, tend to interact with the app a lot on their initial visit. Users who take more actions spend more time on the app (the correlation has always been strong in our case), so time was a good approximation of how much they were interacting. That had become the goal, and I had been so focused on it that it took me a moment to reconstruct how we'd got there.

The dangerous part was that there were lots of ways we could keep users on the app longer without improving the experience at all, or even making it worse! Luckily we have a lot of different methods of understanding how the experience is holding up, from surveys and crowd-sourced user tests to contacts with power users, but it's still a risk.

When I was in college, a lecturer who was a grizzled engineering veteran warned us "You'll start off wanting to measure what you value, but you'll end up valuing what you can measure". You need to have fresh eyes looking at how you're evaluating your own progress, not only so you avoid the more obvious problem of vanity metrics, but also so you don't follow your numbers down a rabbit hole. Any measure is only a projected shadow of reality. When somebody asks "So what?", you always need to be able to point to something in the outside world that gets better when the metric does!

What should a lead engineer code on?

Lead
Photo by Cindy Cornett Seigel

If you're a programmer who's been thrust into management, you'll probably want to keep coding. It's the only way to truly understand what's happening inside the engineering team, and nobody wants to become a pointy-haired boss. Your non-programming responsibilities will take a lot of your time though, so how can you pick the right tasks to take on? I've worked with several outstanding lead engineers at Apple and elsewhere, and here's what I've noticed about what their coding responsibilities look like.

Boring

The only way to motivate good hackers is to give them something interesting and challenging to work on. As a greybeard engineer, you've probably gone through your career fighting for the chance to work on tough, rewarding problems, so your reflex will be to jump in on the most daunting and fun tasks. If you're a good manager, you'll stop yourself! Look for tasks that nobody else wants to take on instead. You shouldn't need the motivation yourself (leading the team should be enough), and you'll be able to offer your engineers a more rewarding bunch of work. It also builds respect for you in the team if they can see you're willing to sacrifice something meaningful for their benefit.

Ubiquitous

In the short-lived Police Squad, Johnny Shoeshine always supplied the 'word on the street' for all sorts of implausible topics. Being a lead is a lot like that! You need to know the nitty-gritty details of what's happening in the code base, and understand intimately how it's evolving so you can offer meaningful advice and head off potential problems early. The only way to do that is to touch as much of the code base as possible as often as possible. That means picking tasks that are cross-module, whether it's integrating multiple parts of the code, or a service that's used everywhere.

Non-blocking

The sad reality of a manager's life is that you're unpredictably called away from your day-to-day duties, especially when deadlines are looming. That can be disastrous if other team members are relying on you to deliver code so they can make progress, or if bugs are going unfixed because you're unavailable. You need to find something that can be worked on incrementally in small chunks, and doesn't prevent others from making progress if you do get waylaid for a week.

Following this philosophy, one of the things I've ended up building is the activity log analysis system. It's not something anybody else wants to work on, it touches every module since we need to record events almost everywhere in the code, and it doesn't stop us shipping if improvements get delayed.

If you're a lead, give boring a chance; you'll be amazed at how effective an approach it can be!

How I learned to stop meddling

Maryworth

I ran across Fred Wilson's latest post this morning, and I have something to confess. I'm a meddler. If I see someone struggling with a task I know well, I have a strong urge to jump in and 'help'. This isn't always a bad thing; in the past it's helped me train up more junior folks, and experienced folks could always tell me to go take a hike.

That's all changed since I've become a CTO. Even though it's a small team, I'm a 'boss', which means that people are prone to humoring me more. It took me a while to realize, but no matter how diplomatic I think I am, my guys don't feel as comfortable telling me to bugger off.

Over the last couple of months, I've had to learn a new style of interacting with them. Instead of giving 'helpful' suggestions on the best approach to solving a problem, I'll lay out the goals and some thoughts at the start, and then step back and let them find their own path to an implementation. I'm always available to answer questions and give advice when they ask for it, and we'll often do an informal post-mortem on what did and didn't work at the end of the sprint, but otherwise I try to give them the freedom to code their own way.

I'm lucky enough to be working with a bunch of very smart folks, so the results have been impressive; the solutions have been much more imaginative and effective than they were before. It's been humbling to see how strong a negative effect my frequent interventions had, but thinking back on my own career it makes sense. "Voice and choice" were the keys to the jobs I loved. When I was involved in planning my own work, and then made decisions about how to tackle it, it turned from being a servile task I was grudgingly performing for someone else into my project, one I worked extra hard on because I truly felt ownership. I would even go out of my way to work in areas that were difficult and unpopular because those were the ones where I had the most freedom. Nobody wanted to interfere with my work on video format conversion code in Motion, for fear they'd be pulled into the quagmire too!

The liberating thing has been how much time it has freed up for me to work on other vital parts of my job, but that's a subject for another post. If any of this is sounding familiar to you, try really giving your team voice and choice; you'll be amazed at the results!

Five short links

Handprints
Photo by Ryan Somma

DocHive – Transforming scanned documents into data. A lot of the building blocks for this exist as open source; the hard part has always been building something that non-technical people can use, so I'm looking forward to seeing what a journalist-driven approach will produce.

How I helped create a flawed mental health system – There are a lot more homeless people sleeping on my block in San Francisco than there were even just two years ago. I'm driven to distraction by the problems they bring, but this personal story reminded me that they're all some parent's son.

Can you parse HTML using regular expressions? – An unlikely title for some of the funniest writing I've read in months.

Forest Monitoring for Action – A great project analyzing satellite photos to produce data about ecological damage around the world. I ran across this at the SF Open Data meetup; it's well worth attending if this sort of thing floats your boat.

Data visualization tools – A nicely presented and well-curated collection.

Why you should try UserTesting.com

Humancannonball
Photo by Zen Sutherland

If you're building a website or app you need to be using UserTesting.com, a service that crowd-sources QA. I don't say that about many services, and I have no connection with the company (a co-worker actually discovered them) but they've transformed how we do testing. We used to have to stalk coffee shops and pester friends-of-friends to find people who'd never seen Jetpac before and were willing to spend half an hour of their life being recorded while they checked it out. It meant the whole process took a lot of valuable time, so we'd only do it a few times a month. This made life tough for the engineering team as the app grew more complex. We have unit tests, automated Selenium tests, and QA internally, but because we're so dependent on data caching and crunching, a lot of things only go wrong when a completely new user first logs into the system.

These are the steps to getting a test running:

- Specify what kind of users you need. In our case we look for people between 15 and 40 years old, with over 100 friends on Facebook, who've never used Jetpac before, and who have an iPad with iOS 5 or greater.

- Write a list of tasks you want them to perform. For us, this is simply opening up the app, signing in with Facebook, and using various features.

- Prepare a list of questions you'd like them to answer at the end. We ask for their overall rating of the app, as well as questions about how easy particular features are to find and use.

Once you've prepared those, you have a template that you can re-use repeatedly, so new tests can be started with just a few seconds of effort. The final step is paying! It does cost $39 per user, so it's not something you want to overuse, but it saves so much development time that it's well worth it for us.

It usually takes an hour or two for our normal three-user test batches to be completed, and at the end we're emailed links to screencasts of each tester using the app. Since we're on the iPad, the videos are taken using a webcam pointing at the device on a desk, which sounds hacky but works surprisingly well. All of the users so far have been great about giving a running commentary on what they're seeing and thinking as they go through the app, which has been invaluable as product feedback. It's actually often better than the feedback we get from being in the room with users, since they're a lot more self-conscious then!

The whole process is a pleasure, with a lot of thoughtful touches throughout the interface, like the option to play back the videos at double speed. The support staff have been very helpful too, especially Matt and Holly, who offered to refund two tests when I accidentally cc-ed them on an unhappy email about the bugs we were hitting in our product.

The best thing about discovering UserTesting.com has been how it changes our development process. We can suddenly get way more information than we could before about how real users are experiencing the app in the wild. It has dramatically lowered the barrier to running full-blown user tests, which means we do a lot more of them, catch bugs faster, and can fix them more easily. I don't want to sound like too much of an infomercial, but it's really been a godsend to us, and I highly recommend you check them out too.

Strange UIWebView script caching problems

Hieroglyphics
Photo by Clio20

I've just spent several days tracking down a serious but hard-to-reproduce bug, so I wanted to leave a trail of Googleable breadcrumbs for anyone else who's hitting similar symptoms.

As some background, Jetpac's iPad app uses a UIWebView to host a complex single-page web application. There are a lot of independent scripts that we normally minify down into a handful of compressed files in production. Over the last few weeks, a significant percentage of our new users have had the app hang on them the first time they loaded it. We couldn't reproduce their problems in-house, which made it tough to work out what was going wrong.

From logging, it seemed like our app setup Javascript code was failing, so the interface never appeared. The strange thing was that it was rarely the same error, and often the error locations and line numbers wouldn't match with the known file contents, even after we switched to non-minified files. Eventually we narrowed it down to the text content of some of our <script> tags being pulled from a different <script> tag elsewhere in the file, seemingly at random!

That's going to be hard to swallow, so here's the evidence to back up what we were seeing:

We had client-side logging statements within each script's content, describing what code was being executed at what time, combined with <script> onload handlers that logged what src had just been processed. Normal operation would look like this:

Executing module storage.js

Loaded script with src 'https://www.jetpac.com/js/modules/storage.js'

Executing module profile.js

Loaded script with src 'https://www.jetpac.com/js/modules/profile.js'

Executing module nudges.js

Loaded script with src 'https://www.jetpac.com/js/modules/nudges.js'

In the error case, we'd see something like this:

Executing module storage.js

Loaded script with src 'https://www.jetpac.com/js/modules/storage.js'

Executing module profile.js

Loaded script with src 'https://www.jetpac.com/js/modules/profile.js'

Executing module storage.js

Loaded script with src 'https://www.jetpac.com/js/modules/nudges.js'

Notice that the third script thinks it's loading nudges.js, but the content comes from storage.js!

Ok, so maybe the Jetpac server is sending the wrong content? We were able to confirm through the access log that the file with the bogus content (nudges.js in the example above) was never requested from the server. We saw the same pattern every time we managed to reproduce this, and could never reproduce it with the same code in a browser.

As a clincher, we were able to confirm that the content of the bogus files was incorrect using the iOS 6 web inspector.

The downside is that we can't trigger the problem often enough to create reliable reproduction steps or a test app, so we can't chase down the underlying cause much further. It has prompted us to change our cache control headers, since it seems like something going wrong with the iOS caching, and the logging has also given us a fairly reliable method of spotting when this error has happened after the fact. Since it is so intermittent, we're triggering a page reload if we do spot that we've lost our marbles. This generally fixes the problem, since it does seem so timing-dependent, though the hackiness of the workaround doesn't leave me with a happy feeling!

If you think you're hitting the same issue, my bet is you aren't! It's pretty rare even for us, but if you want to confirm, try adding logging like this in your script tags, and log inside each js file to keep track of which one you think is loading:

<script src="foo.js" onload="console.log('loaded foo.js');"></script>

In foo.js:

console.log('executing foo.js');

Comparing the two streams of log statements will tell you if things are going wrong. You'd expect every 'executing foo.js' to be followed by a 'loaded foo.js' in the logs, unless you're using defer or async attributes.
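If you want to automate that comparison, the bookkeeping can look something like this rough sketch (the function names here are just illustrative, not our actual code):

// Remember which module's content most recently executed, so it can be
// checked against the src reported by the next onload event.
var lastExecutedModule = null;
var scriptMismatchDetected = false;

// Call this at the top of each js file, eg executingModule('foo.js') in foo.js.
function executingModule(name) {
  console.log('executing ' + name);
  lastExecutedModule = name;
}

// Call this from each script tag's onload, eg onload="loadedScript('foo.js');".
// This assumes the scripts execute and fire onload in order, ie no async or defer.
function loadedScript(src) {
  console.log('loaded ' + src);
  if (lastExecutedModule !== src) {
    scriptMismatchDetected = true;
  }
}

// Once everything should have loaded, force a reload if we've lost our marbles.
function reloadIfScriptsAreBroken() {
  if (scriptMismatchDetected) {
    window.location.reload(true);
  }
}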

Things users don’t care about

Yawning
Photo by DJ Badly

How long you spent on it.

How hard it was to implement.

How clean your architecture is.

How extensible it is.

How well it runs on your machine.

How great it will be once all their friends are on it.

How amazing the next version will be.

Whose fault the problems are.

What you think they should be interested in.

What you expected.

What you were promised.

How important this is to you.

 

I have to keep relearning these lessons. Finding an experience that people love is far more precious and rare than most of us realize.