Five short links

Picture by Jan van Scorel

A quick programming note – I’m now at Google! This blog will continue to be my personal collection of random things and occasional rants though.

Frida – Greasemonkey for arbitrary binaries! You can hook into all sorts of function calls with Javascript, even on binaries you didn’t build yourself. I love the idea of being able to mash up desktop apps.

Spotting Tumors with Deep Learning – My friend Jeremy Howard has launched a new startup to apply deep learning to medical problems. Great to see the technology being applied to more things that matter.

Mechanical Turk Worker Protection Guidelines – It’s aimed at academics, but anyone who employs human data raters should read this as a guide on how not to be a jerk.

GPU_FFT – Andrew Holme on how he created his super-fast FFT library on the Raspberry Pi, with lots of detail on the hand-coded assembler and memory access optimizations. Geek heaven!

fork() can fail : this is important – The crazy tale of a pathological edge case with fork(), and how code that doesn’t check return values carefully can wipe out all the processes on a machine in a mysterious fashion. “Unix: just enough potholes and bear traps to keep an entire valley going.”

How to optimize Raspberry Pi code using its GPU

Photo by Michal

When I was at Apple, I spent five years trying to get source-code access to the Nvidia and ATI graphics drivers. My job was to accelerate image-processing operations using GPUs to do the heavy lifting, and a lot of my time went into debugging crashes or strange performance issues. I could have been a lot more effective if I’d had better insights into the underlying hardware, and been able to step through and instrument the code that controlled the graphics cards. Previously I’d written custom graphics drivers for game consoles, so I knew how useful having that level of control could be.

I never got the access I’d wanted, and it left me with an unscratched itch. I love CUDA/OpenCL and high-level shader interfaces, but the underlying hardware of graphics cards is so specialized, diverse, and quirky that you can’t treat them like black boxes and expect to get the best performance. Even with CUDA, you end up having to understand the characteristics of what’s under the hood if you want to really speed things up. I understand why most GPU manufacturers hate the idea (even just the developer support you’d need to offer for a bare-metal interface would take a lot of resources), but it still felt like a big missed opportunity to write more efficient software.

That all meant I was very excited when Broadcom released detailed documentation of the GPU used on the Raspberry Pi a few months ago. The Pi’s a great device to demonstrate the power of deep learning computer vision, and I’d ported my open-source library to run on it, but the CPU was woefully slow on the heavy math that neural networks require, taking almost twenty seconds even with optimized assembler, so I had a real problem I thought GPU acceleration might be able to help with.

Broadcom’s manual is a good description of the hardware interface to their GPU, but you’ll need more than that if you’re going to write code to run on it. In the end I was able to speed up object recognition from twenty seconds on the CPU to just three on the GPU, but it took a lot of head-scratching and help from others in the community to get there. In the spirit of leaving a trail of breadcrumbs through the forest, I’m going to run through some of what I learned along the way.

Getting started

Broadcom’s VideoCore Reference Guide will be your bible and companion; I’m constantly referring to it to understand everything from assembly instructions to interface addresses.

The very first program you should try running is the hello_fft sample included in the latest Raspbian. If you can get this running, then at least you’re set up correctly to run GPU programs.

There’s a missing piece in that example though – the source assembler text isn’t included, only a compiled binary blob. [Thanks to Andrew Holme and Eben for pointing me to a recent update adding the assembler code!] There isn’t an official program available to compile GPU assembler, so the next place to look is eman’s excellent blog series on writing an SHA-256 implementation. This includes a simple assembler, which I’ve forked and patched a bit to support instructions I needed for my algorithm. Once you’ve got his code running, and have the assembler installed, you should be ready to begin coding.

Debugging

There’s no debugger for the GPU, at all. You can’t even log messages. In the past I’ve had to debug shaders by writing colors to the screen, but in this case there isn’t even a visible output surface to use. I’ve never regretted investing time up-front into writing debug tools, so I created a convention: one register is reserved for debug output, its contents are written out to main memory at the end of the program (or immediately, by invoking a LOG_AND_EXIT() macro), and they’re printed to the console once the code has finished. It’s still painful, but this mechanism at least let me get glimpses of what was going on internally.
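To make that concrete, here’s a minimal C sketch of the host side of that convention. Everything in it is hypothetical (the buffer layout, the `dump_debug_register` name); it just assumes the QPU program has written its reserved debug register, one word per SIMD element, into a block of shared memory you’ve already mapped on the ARM side.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

// Hypothetical convention: before the QPU program exits (or when the
// LOG_AND_EXIT() macro fires), it writes its reserved debug register out to
// a 16-word block of shared memory, one word per SIMD element.
#define DEBUG_WORDS 16

// 'debug_buf' is assumed to be the ARM-side mapping of that shared block,
// obtained however you map GPU memory in your own harness.
static void dump_debug_register(const uint32_t *debug_buf) {
  for (int i = 0; i < DEBUG_WORDS; i++) {
    float as_float;
    memcpy(&as_float, &debug_buf[i], sizeof(as_float));  // registers often hold floats
    fprintf(stderr, "debug[%2d] = 0x%08x (%f)\n", i, (unsigned)debug_buf[i], as_float);
  }
}
```

Printing all sixteen lanes separately is crude, but it makes it much easier to spot when only some elements of a vector have gone wrong.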

I also highly recommend using a regular laptop to ssh into your Pi, alongside something like sshfs so you can edit source files easily in your normal editor. You’ll be crashing the device a lot during development, so having a separate development machine makes life a lot easier.

Vertex Program Memory

One of the eternal problems of GPU optimization is getting data back and forth between the main processor and the graphics chip. GPUs are blazingly fast when they’re working with data in their local memory, but coordinating the transfers so they don’t stall either processor is a very hard problem. My biggest optimization wins on the Playstation 2 came from fiddling with the DMA controller to feed the GPU more effectively, and on modern desktop GPUs grouping data into larger batches to upload is one of the most effective ways to speed things up.

The Broadcom GPU doesn’t have very much dedicated memory at all. In fact, the only RAM that’s directly accessible is 4,096 bytes in an area known as Vertex Program Memory. This is designed to be used as a staging area for polygon coordinates so they can be transformed geometrically. My initial assumption was that this would have the fastest path into and out of the GPU, so I built my first implementation to rely on it for data transfer. Unfortunately, it has a few key flaws.

There are actually 12 cores inside the GPU, each one known as a QPU for Quad Processing Unit. The VPM memory is shared between them, so there wasn’t much available for each. I ended up using only 8 cores, and allocating 512 bytes of storage to each, which meant doing a lot of small and therefore inefficient transfers from main memory. The real killer was that a mutex lock was required before kicking off a transfer, so all of the other cores ground to a halt while one was handling an upload, which killed parallelism and overall performance.
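For context, here’s the back-of-the-envelope arithmetic behind those numbers as a tiny standalone C program; it’s just the 4,096 bytes divided across the cores I used, not code from the actual kernel.

```c
#include <stdio.h>

// Back-of-the-envelope VPM budget, using the numbers from the text:
// 4,096 bytes of Vertex Program Memory shared across the QPUs in use.
int main(void) {
  const int vpm_bytes = 4096;                                // total VPM
  const int qpus_used = 8;                                   // I only used 8 of the 12 QPUs
  const int bytes_per_qpu = vpm_bytes / qpus_used;           // 512 bytes each
  const int floats_per_qpu = bytes_per_qpu / sizeof(float);  // 128 floats each
  const int vectors_per_qpu = floats_per_qpu / 16;           // 8 16-wide vectors each
  printf("%d bytes, %d floats, %d vectors per QPU\n",
         bytes_per_qpu, floats_per_qpu, vectors_per_qpu);
  return 0;
}
```

Eight 16-float vectors per core is a tiny staging area, which is why the transfers ended up so small and frequent.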

Texture Memory Unit

After I released the initial VPM-based version of the matrix-to-matrix multiply GEMM function that’s the most time-consuming part of the object recognition process, several people mentioned that the Texture Memory Unit or TMU was a lot more efficient. The documentation only briefly mentions that you can use the TMU for general memory access, and there wasn’t any detail on how to do it, so I ended up looking at the disassembly of the hello_fft sample to see how it was done. I also received some help over email from Eben Upton himself, which was a lovely surprise! Here’s a summary of what I learned:

 – There are two TMUs available to each core. You can manually choose how each one is used by turning off ‘TMU swap’, if you have an algorithmic way to split the same work across both; if you leave swap enabled, half the cores are transparently rewired so that their TMU 0 and 1 requests go to alternating physical units.

 – You write a vector of 16 addresses to registers ra56 and ra60 for TMU0 and 1 respectively, and that will start a fetch of the values held in those addresses.

 – Setting a ldtmu0/1 code in an instruction causes the next read in the pipeline to block until the memory values are returned, and then you can read from r4 to access those values in further instructions.

 – There’s a potentially long latency before those values are ready. To mitigate that, you can kick off up to four reads on each TMU before calling a ldtmu0/1. This means memory reads can be pipelined so they overlap with computation on the GPU, which helps performance a lot (see the C sketch after this list).

 – To reduce extra logic-checking instructions, I don’t try to prevent overshooting on speculative reads, which means there may be accesses beyond the end of arrays (though the values aren’t used). In practice this hasn’t caused problems.

 – I didn’t dive into this yet, but there’s a 4K direct-mapped L1 cache with 64-byte lines for the TMU. Avoiding aliasing on this will be crucial for maintaining speed, and in my case I bet it depends heavily on the matrix size and allocation of work to different QPUs. There are performance counters available to monitor cache hits and misses, and on past experience dividing up the data carefully so everything stays in-cache could be a big optimization.

 – A lot of my data is stored as 8 or 16-bit fixed point, and the VPM had a lot more support for converting them into float vectors than the TMU does. I discovered some funky problems, like the TMU ignoring the lower two bits of addresses and only loading from 32-bit aligned words, which was tricky when I was dealing with odd matrix widths and lower precision. There isn’t much support for ‘swizzling’ between components in the 16-float vectors that are held in each register either, beyond rotating, so I ended up doing lots of masking tricks.

 – Reading from nonsensical addresses can crash the system. During development I’d sometimes end up with wildly incorrect values for my read addresses, and that would cause a hang so severe I’d have to reboot.

 – This isn’t TMU specific, but I’ve noticed that having a display attached to your Pi taxes the GPU, and can result in slower performance by around 25%.
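To make the read pipelining easier to picture, here’s a plain C model of the loop structure I ended up with. It is not QPU assembler: `tmu_issue()` and `tmu_wait()` are hypothetical stand-ins for writing an address vector to ra56 and for the ldtmu0 signal plus the r4 read, and the prefetch depth of four matches the limit described above.

```c
#include <stddef.h>

#define PREFETCH_DEPTH 4  // up to four outstanding reads per TMU

// Stand-ins for the real hardware operations: on the QPU, 'issue' is a write
// of 16 addresses to ra56 (TMU0) and 'wait' is the ldtmu0 signal followed by
// a read of r4. Here they just model the dataflow with a small ring buffer.
static float pending[PREFETCH_DEPTH];
static void tmu_issue(const float *src, size_t i, int slot) { pending[slot] = src[i]; }
static float tmu_wait(int slot) { return pending[slot]; }

// Accumulate a sum over 'n' elements, keeping PREFETCH_DEPTH reads in
// flight so memory latency overlaps with the computation.
float pipelined_sum(const float *src, size_t n) {
  float acc = 0.0f;
  size_t issued = 0, consumed = 0;

  // Prime the pipeline with up to four outstanding reads.
  while (issued < n && issued < PREFETCH_DEPTH) {
    tmu_issue(src, issued, issued % PREFETCH_DEPTH);
    issued++;
  }
  // Steady state: consume one value, then immediately issue the next read.
  while (consumed < n) {
    float value = tmu_wait(consumed % PREFETCH_DEPTH);    // blocks on the QPU
    if (issued < n) {
      tmu_issue(src, issued, issued % PREFETCH_DEPTH);    // keep the queue full
      issued++;
    }
    acc += value;  // the "computation" that hides the memory latency
    consumed++;
  }
  return acc;
}
```

On the real hardware the accumulate line is where the GEMM multiply-adds go, and the second TMU gets interleaved in the same way.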

In the end I was able to perform object recognition in just three seconds with the optimized TMU code, rather than six using the VPM, which opens up a lot more potential applications!

Going Further

Developing GPU code on the Raspberry Pi has come a long way in just the last few months, but it’s still in its early stages. For example, I’m hitting mysterious system hangs when I run my deep learning TMU example with any kind of overclocking, and there’s no obvious way to debug those kinds of problems, especially when they’re hard to reproduce in a simple test case.

The community, including folks like eman, Eben, Andrew Holme, and Herman Hermitage, is constantly improving and extending the documentation, examples, and tools, so development should continue to get easier. I recommend keeping an eye on the Raspberry Pi forums to see the latest news!

Running the example

If you want to try out the deep learning object recognition code I developed yourself, you can follow these steps:

1. Install Raspbian.
2. Install the latest firmware by running `sudo rpi-update`.
3. From `raspi-config`, choose 256MB for GPU memory.
4. Clone qpu-asm from Github.
5. Run `make` inside the qpu-asm folder.
6. Create a symbolic link to the qpu-asm program, for example by running `sudo ln -s /home/pi/projects/qpu-asm/qpu-asm /usr/bin/`.
7. Clone DeepBeliefSDK from Github.
8. From the DeepBeliefSDK/source folder, run `make TARGET=pi GEMM=piqpu`.
9. Once it’s successfully completed the build, make sure the resulting library is in your path, for example by running `sudo ln -s /home/pi/projects/DeepBeliefSDK/source/libjpcnn.so /usr/lib/`.
10. Run `sudo ./jpcnn -i data/dog.jpg -n ../networks/jetpac.ntwk -t -m s`

If everything has worked, you should see the recognition results printed to the console.

How to get computer vision out of the unimpressive valley

Photo by Severin Sadjina

When I first saw the results of the Kaggle Cats vs Dogs competition, I was amazed by how accurate they were. When I show consumers our Spotter iPhone app, which is based on the same deep learning technology the contestants used, most people are distinctly underwhelmed because of all the mistakes it makes.

The problem is that while computer vision has got dramatically better in the last few years, it was so bad before that we’re still a long way behind what a human can achieve. Most of the obvious applications of computer vision, like the Fire Phone’s object recognition, implicitly assume a higher degree of accuracy than we can achieve, so users are left feeling disappointed and disillusioned by the technology. There’s a disconnect between researchers’ excitement about the improvements and future promise, and the general public’s expectations of what good computer vision should be able to do. I think we’re in a space much like the uncanny valley, where the technology is good enough to be built into applications, but bad enough that those apps will end up frustrating users.

I believe we need to stop trying to build applications that assume human levels of accuracy, and instead engineer around the strengths and weaknesses of the actual technology we have. Here are some of the approaches that can help.

Forgiving Interfaces

Imagine a user loads a video clip and the application suggests a template and music that fit the subject, whether it’s a wedding or a kids’ soccer match. The cost and annoyance of the algorithm getting it wrong are low because it’s just a smart suggestion the user can dismiss, so the recognition accuracy only needs to be decent, not infallible. This approach of using computer vision to assist human decisions rather than replace them can be used in a lot of applications if the designers are willing to build an interface around the actual capabilities of the technology.

Big Data

A lot of the ideas I see for vision applications essentially take a job humans currently do and get a computer to do it instead (e.g. identifying products on the Fire Phone). They almost always involve taking a single photo, extracting a rock-solid identification, and then fetching related data based on that. These kinds of applications fall apart if the identification piece is inaccurate, which it currently is for everything but the simplest cases like bar codes. Going in to build Jetpac’s City Guides, I knew that I wouldn’t be able to identify hipsters with 100% accuracy, but by analyzing a few thousand photos taken at the same place, I could get good data about the prevalence of hipsters at a venue even if there were some mistakes on individual images. As long as the errors are fairly random, throwing more samples at the problem will help. If you can, try to recast your application as something that will ingest a lot more photos than a human could ever deal with, and mine that bigger set for meaning.
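As a toy illustration of why the aggregation works, here’s a small C simulation (not Jetpac code, and the numbers are made up) of a per-photo classifier that’s wrong 25% of the time. Across a few thousand photos from one venue, the venue-level estimate still comes out close to the true prevalence once you correct for the known error rate.

```c
#include <stdio.h>
#include <stdlib.h>

// Toy simulation: a per-photo "hipster detector" that is wrong 25% of the
// time, applied to thousands of photos from one venue. The per-photo labels
// are unreliable, but the venue-level estimate is still useful.
int main(void) {
  const double true_prevalence = 0.30;  // 30% of photos really contain the subject
  const double error_rate = 0.25;       // classifier flips the label 25% of the time
  const int num_photos = 4000;

  srand(42);
  int flagged = 0;
  for (int i = 0; i < num_photos; i++) {
    int truth = ((double)rand() / RAND_MAX) < true_prevalence;
    int wrong = ((double)rand() / RAND_MAX) < error_rate;
    int predicted = wrong ? !truth : truth;
    flagged += predicted;
  }

  double observed = (double)flagged / num_photos;
  // With a known symmetric error rate e, observed = p*(1-e) + (1-p)*e,
  // so the prevalence estimate is (observed - e) / (1 - 2e).
  double estimated = (observed - error_rate) / (1.0 - 2.0 * error_rate);
  printf("observed rate %.3f, estimated prevalence %.3f (true %.2f)\n",
         observed, estimated, true_prevalence);
  return 0;
}
```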

Grunt Work

Right now, looking at photos and making sense of them is an expensive business. Even if you give a security guard a bank of monitors, they probably can’t track more than a dozen or so in any meaningful way. With the current state of computer vision, you could have hundreds of cheap cameras in a facility and have them trigger an alert when something unusual happens, saving the guard’s superior recognition skills for making sense of the anomalies rather than trying to spot them in the first place. More generally, intelligent cameras become more like sensors that can be deployed in large numbers all over an assembly line, road tunnel, or sewer to detect when things are out of the ordinary. You’ll still need a human’s skills to investigate more deeply, but cheap computing power means you can deploy an army of smart sensors for applications you could never justify paying people to monitor manually.

I’m sure there are other approaches that will help too, but my big hope is that we can be more imaginative about designing around the limitations of current vision technology, and actually start delivering some of the promise that researchers are so excited about!

Setting up Caffe on Ubuntu 14.04

A lot of people on this morning’s webcast asked if I had an Amazon EC2 image of pre-installed Caffe. I didn’t then, but I’ve just put one together! It’s available as ami-2faaa96a in the Northern California region. There’s also a Vagrant VM at https://d2rlgkokhpr1uq.cloudfront.net/dl_webcast.box, and I’ve got full instructions for setting up your own machine on the Caffe wiki. I’m shaving yaks so you don’t have to!

Join me for a Deep Learning webcast tomorrow

Photo by RoadsidePictures

I’ve been having a lot of fun working with the O’Reilly team on ways to open up Deep Learning to a wider audience of developers. I’m working on a book tentatively titled “Deep Learning for Hackers” (since I’m a fan of Drew and John’s work), I’ve put up some introductory chapters as articles on the O’Reilly website, and tomorrow I’ll be doing an hour-long webcast walking through the basics of training and running deep networks with Caffe.

I hope you can join me; it’s a big help whenever I can collaborate with an audience to understand more about what developers need! If you do, I recommend downloading the Vagrant box ahead of time to make it easy to follow along. I look forward to seeing you there.

Five short links

Photo by Gianni

Databending using Audacity effects – What happens when you apply audio effects to images? Occasionally-wonderful glitch art! It always lifts my heart to see creative tinkering like this, especially when it’s well-documented.

Scan Processor Studies – On the topic of glitch art, here are some mind-bending analog video effects from 1972. They look amazingly fresh, and I discovered them while following the influences behind the phenomenal Kentucky Route Zero, which features a lot of tributes to the Bronze Age of computing.

The crisis of reproducibility in omen reading – In which I cheer as the incomparable Cosma Shalizi takes on fuzzy thinking, again.

What’s next for OpenStreetMap? – Has an essential overview of the Open Database License virality problem that’s stopped me from working with OSM data for geocoding. David Blackman from Foursquare gave a heart-breaking talk describing all the work he’s put into cleaning up and organizing OSM boundaries for his open-source TwoFishes project, only to find himself unable to use it in production, or even recommend that other users adopt it!

Where is my C++ replacement? – As a former graphics engine programmer, I recognize the requirements and constraints in this lament. I want to avoid writing any more C or C++, after being scared straight by all the goto-fail-like problems recently, and Go’s my personal leading contender, but there’s still no clear winner.

Why engineers shouldn’t disdain frontend development

Picture by Sofi

When I saw Cate Huston’s latest post, this part leapt out at me:

‘There’s often a general disdain for front-end work amongst “engineers”. In a cynical mood, I might say this is because they don’t have the patience to do it, so they denigrate it as unimportant.’

It’s something that’s been on my mind since I became responsible for managing younger engineers, and helping them think about their careers. It’s been depressing to see how they perceive frontend development as low status, as opposed to ‘real programming’ on crazy algorithms deep in the back end. In reality, becoming a good frontend developer is a vital step to becoming a senior engineer, even if you end up in a backend-focused role. Here’s why.

You learn to deal with real requirements

The number one reason I got when I dug into juniors’ resistance to frontend work was that the requirements were messy and constantly changing. All the heroic sagas of the programming world are about elegant systems built from components with minimal individual requirements, like classic Unix tools. The reality is that any system that actually gets used by humans, even Unix, has its elegance corrupted to deal with our crazy and contradictory needs. The trick is to fight that losing battle as gracefully as you can. Frontend work is the boss level in coping with those pressures, and it will teach you how to engineer around them. Then, when you’re inevitably faced with similar problems in other areas, you’ll be able to handle them with ease.

You learn to work with people

In most other programming roles you get to sit behind a curtain like the Great and Powerful Wizard of Oz whilst supplicants come to you begging for help. They don’t understand what you’re doing back there, so they have to accept whatever you tell them about the constraints and results you can produce. Quite frankly it’s an open invitation to be a jerk, and a lot of engineers RSVP!

Frontend work is all about the visible results, and you’re accountable at a detailed level to a whole bunch of different people, from designers to marketing; even the business folks are going to be making requests and suggestions. You have nothing to hide behind; it’s hard to wiggle out of work by throwing up a smokescreen of jargon when the task is just changing the appearance or basic functionality of a page. You’re suddenly just another member of a team working on a problem, not a gatekeeper, and the power relationship is very different. This can be a nasty shock at first, but it’s good for the soul, and it will give you vital skills that will stand you in good stead.

A lot of programmers who’ve only worked on backend problems find their careers limited because nobody wants to work with them. Sure, you’ll be well paid if you have technical skills that are valuable, but you’ll be treated like a troll that’s kept firmly under a bridge, for fear you’ll scare other employees. Being successful in frontend work means that you’ve learned to play well with others, to listen to them, and communicate your own needs effectively, which opens the door to a lot of interesting work you’d never get otherwise. As a bonus, you’re also going to become a better human being and have more fun!

You’ll be able to build a complete product

There are a lot of reasons why being full-stack is useful, but one of my favorites is that you can prototype a fully-working side-project on your own. Maybe that algorithm you’ve been working on really is groundbreaking, but unless you can build it into a demo that other people can easily see and understand, the odds are high it will just languish in obscurity. Being able to quickly pull together an app that doesn’t make the viewer’s eyes bleed is a superpower that will make everything else you do easier. Plus, it’s so satisfying to take an idea all the way from a notepad to a screen, all by yourself.

You’ll understand how to integrate with different systems

One of the classic illusions of engineers early in their career is that they’ll spend most of their time coding. In reality, writing new code is only a fraction of the job; most of your time will go into debugging, or into getting different code libraries to work together. The frontend is the point at which you have to pull together all of the other modules that make up your application. That requires a wide range of skills, not the least of which is investigating problems and assigning blame! It’s the best bootcamp I can imagine for working with other people’s code, which is another superpower for any developer. Even if you only end up working as a solo developer on embedded systems, there’s always going to be an OS kernel and drivers you rely on.

Frontend is harder than backend

The Donald Knuth world of algorithms looks a lot like physics, or maths, and those are the fields most engineers think of as the hardest and hence the most prestigious. Just like we’ve discovered in AI though, the hard problems are easy, and the easy problems are hard. If you haven’t already, find a way to get some frontend experience, it will pay off handsomely. You’ll also have a lot more sympathy for all the folks on your team who are working on the user experience!
