TinyML Book Released!

[Image: TinyML book cover]

I’ve been excited about running machine learning on microcontrollers ever since I joined Google and discovered the amazing work the speech wakeword team were doing with 13 kilobyte models, so I’m very pleased to finally be able to share a new O’Reilly book that shows you how to build your own TinyML applications. It took me and my colleague Dan most of 2019 to write it, and it builds on work from literally hundreds of contributors inside and outside Google. The most nerve-wracking part was the acknowledgements, since I knew we’d miss important people just because there were so many of them over such a long time. Thanks to everyone who helped, and we’re hoping this will just be the first of many guides to this area. There’s so much fascinating work that can now be done using ML with battery-powered or energy-harvesting devices; I can’t wait to see what the community comes up with!

To give you a taste, there’s a hundred-plus page preview of the first six chapters available as a free PDF, and I’m recording a series of screencasts to accompany the tutorials.

Why you must see Richard Mosse’s Incoming

[Image by Richard Mosse]

When I went to SF MoMA over the holiday break I didn’t have a plan for what I wanted to see, but as soon as I saw the first photos of Richard Mosse’s Incoming exhibition I knew where I’d be spending most of my time. His work documents the European refugee crisis in a way I’ve never seen before. The innovation is that he’s using infrared and nightsight cameras designed for surveillance to document the lives and deaths of the people involved, on all sides. The work on show is a set of large stills, and a fifty minute film without narration. I highly encourage you to check out the exhibition yourself, and I’ll be going back to the museum for an artist talk tomorrow (Saturday January 11th at 2pm), but I wanted to share a few thoughts on why I found it so powerful, partly just to help myself understand why it had such an emotional impact on me.

The representation of the scenes is recognizable, but the nightsight makes them different enough that they broke through my familiarity with all the similar images I’ve seen in the news. It forced me to study the pictures in a new way to understand them; they live in an uncanny valley between abstract renders and timeworn visual clichés, and that made them seem much closer to my own life in a strange way. I couldn’t fit what I was seeing into the easy categories I guess I’ve developed of “bad things happening somewhere far away”.

[Image by Richard Mosse]

I realized that I rely on a lot of visual cues to understand the status and importance of people in images like these, and most of those were washed away by the camera rendering. I couldn’t tell what race people were, or even whether they were wearing uniforms or civilian clothes, markers I normally reach for to make sense of pictures like this. Without them, I was forced to look at the subjects as people rather than just members of a category. I was a bit shocked at how jarring it was to lose that shorthand, and found myself searching for other clues to make up for the loss. Where does the woman in the cover image above come from? I can guess from the shawl that she’s a refugee, but I can’t tell much else. My first guess when I saw the photo above was that it showed migrants, but the caption told me these were actually crew members of a navy vessel.

It forced me to confront why I have such a need to categorize people to understand their stories. Part of it feels like an attempt to convince myself that their predicament couldn’t happen to me or those I love, which is a lot easier if I can slot them into an “other” category.

[Image by Richard Mosse]

A lot of the images are taken from a long distance away using an extreme zoom, or stitched together from panoramas. On a practical level this gives Mosse access to sites he couldn’t otherwise reach, by shooting from nearby hills or shorelines, but it also affects the way I found myself looking at the scenes. The wide shots (like the refugee camp used for the cover of richardmosse.com) give you a god-like, almost isometric view, like an old-school simulation game. I was able to study the layout in a way that felt voyeuristic, but it brought the reality home in a way that normal imagery couldn’t have done. There are also closeups that feel intrusive, like paparazzi shots, but they capture situations that normal filmmakers would never be able to, like refugees being loaded onto a navy boat. Everybody seems focused on their actions, working as if they’re unobserved, which adds an edge of reality to the scenes that I’m not used to.

[Image by Richard Mosse]

Capturing the heat signatures of the objects and people in the scene brings home aspects of the experience that normal photography can’t. One of the scenes that affected me most in the film was seeing a man care for an injured person wrapped in a blanket. The heartbreaking detail was that you can see his handprints as marks of warmth as he pats and hugs the victim, and watch as they slowly fade. Handprints show up on the sides of inflatables too, where people tried to climb in. You can also see the dark splashes of frigid seawater as the refugees and crew struggle in a harsh environment; the sharpness of the contrast made me want to shiver as it brought home how cold and uncomfortable everyone had to be.

I can’t properly convey in words how powerful the exhibition is; I just hope this post might encourage some more people to check out Mosse’s work for themselves. The technical innovations alone make this a remarkable project (and I expect to see many other filmmakers influenced by his techniques in the future), but he uses his tools to bring an unfolding tragedy back into the spotlight in a way that only art can do. I can’t say it any better than he does himself in the show’s statement:

Light is visible heat. Light fades. Heat grows cold. People’s attention drifts. Media attention dwindles. Compassion is eventually exhausted. How do we find a way, as photographers and as storytellers, to continue to shed light on the refugee crisis, and to keep the heat on these urgent narratives of human displacement?

Chesterton’s shell script

[Photo by Linnaea Mallette]

I have a few days off over the holidays, so I’ve had a chance to think about what I’ve learned over the last year. Funnily enough, the engineering task that taught me the most was writing and maintaining a shell script of less than two hundred lines. One of my favorite things about programming is how seemingly simple problems turn out to have endless complexity the longer you work with them. It’s one of the most frustrating things too, of course!

The problem I faced was that TensorFlow Lite for Microcontrollers depends on external frameworks like KissFft or the GNU compiler toolchain to build. For TFL Micro to be successful it has to be easy to compile, even for engineers from the embedded world who may not be familiar with the wonderful world of installing Linux or MacOS dependencies. That meant downloading these packages had to be handled automatically by the build process.

Because of the degree of customization we needed to handle cross-compilation across a large number of very different platforms, early on I made the decision to use ‘make’ as our build tool, rather than a more modern system like cmake or bazel. This is still a controversial choice within the team, since it involves extensive use of make’s scripting language to do everything we need, which is a maintenance nightmare. The bus factor in this area is effectively one, since I’m the only engineer familiar with that code, and even I can’t easily debug complex issues. I believe the tradeoffs were worth it though, and I’m hoping to mitigate the problems by improving the build infrastructure in the future.

A downside of choosing make is that it doesn’t have any built-in support for downloading external packages. The joy and terror of make is that it’s a very low-level language for expressing build dependency rules, which lets you build almost anything on top of them, so the obvious next step was to implement the functionality I needed within that system.

The requirement at the start was for a component I could pass the URL of a compressed archive, and it would handle downloading and unpacking it into a folder of source files. The most common place these archives come from is GitHub, where every commit to an open source project can be accessed as a ZIP or gzipped tar archive, like https://github.com/google/stm32_bare_lib/archive/c07d611fb0af58450c5a3e0ab4d52b47f99bc82d.zip.

The first choice I needed to make was what language to implement this component in. One option was make’s own scripting language, but as I mention above it’s quite obscure and hard to maintain, so I preferred an external script I could call from a makefile rule. The usual Google style recommendation is to use a Python script for anything that would be more than a few lines of shell script, but I’ve found that doesn’t make as much sense when most of the actual work will be done through shell tools, which I expected to be the case with the archive extraction.

I settled on writing the code as a bash script. You can see the first version I checked in at https://github.com/tensorflow/tensorflow/blob/e98d1810c607e704609ffeef14881f87c087394c/tensorflow/lite/experimental/micro/tools/make/download_and_extract.sh. This is a long way from the very first script I wrote, because of course I encountered a lot of requirements I hadn’t thought of at the start. The primordial version was something like this:

url="${1}"
dir="${2}"
tempfile="$(mktemp)"
# Download the archive, then unpack it into the destination based on the file extension.
curl -Ls "${url}" > "${tempfile}"
if [[ "${url}" == *gz ]]; then
  tar -C "${dir}" -xzf "${tempfile}"
elif [[ "${url}" == *zip ]]; then
  unzip -q -d "${dir}" "${tempfile}"
fi
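
Called with a GitHub archive URL and a destination folder, a hypothetical invocation of that two-argument sketch (the folder name here is made up for the example, and isn’t necessarily how the checked-in script is called) would look something like:

./download_and_extract.sh "https://github.com/google/stm32_bare_lib/archive/c07d611fb0af58450c5a3e0ab4d52b47f99bc82d.zip" downloads/stm32_bare_lib  # destination folder is just an example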

Here’s a selection of the issues I had to fix with this approach before I was able to even check in that first version:

  • A few of the dependencies were bzip files, so I had to handle them too.
  • Sometimes the destination root folder didn’t exist, so I had to add an ‘mkdir -p’.
  • If the download failed halfway through or the file at the URL changed, there wouldn’t always be an error, so I had to add MD5 checksum verification.
  • If there were any BUILD files in the extracted folders, bazel builds (which are supported for x86 host compilation) would find them through a recursive search and fail.
  • We needed the extracted files to be in a first-level folder with a constant name, but by default many of the archives had versioned names (like the project name followed by the checksum). To handle this for tars I could use --strip-components=1, but for zip files I had to cobble together a more manual approach (there’s a rough sketch of these fixes after this list).
  • Some of the packages required small amounts of patching after download to work successfully.
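
To make a couple of those fixes concrete, here’s a rough sketch of how the checksum verification, the folder-naming workaround, and the BUILD file cleanup might look. It continues from the url, dir, and tempfile variables in the snippet above, and the expected_md5 argument is purely illustrative rather than the interface of the checked-in script:

expected_md5="${3}"  # Illustrative third argument, not the checked-in script's interface.
# Verify the download against a known checksum, so truncated or changed files fail loudly.
if command -v md5sum >/dev/null; then
  actual_md5="$(md5sum "${tempfile}" | awk '{print $1}')"
else
  actual_md5="$(md5 -q "${tempfile}")"  # MacOS doesn't ship md5sum by default.
fi
if [[ "${actual_md5}" != "${expected_md5}" ]]; then
  echo "Checksum mismatch for ${url}" >&2
  exit 1
fi

mkdir -p "${dir}"
if [[ "${url}" == *gz ]]; then
  # Drop the versioned top-level folder (e.g. project-<sha>/) that sits inside the tar.
  tar -C "${dir}" --strip-components=1 -xzf "${tempfile}"
elif [[ "${url}" == *zip ]]; then
  # Zip has no equivalent of --strip-components, so unpack into a scratch folder
  # and move the contents of the single top-level directory up.
  scratch_dir="$(mktemp -d)"
  unzip -q -d "${scratch_dir}" "${tempfile}"
  mv "${scratch_dir}"/*/* "${dir}"
fi

# Delete any BUILD files in the extracted tree so recursive bazel searches don't trip over them.
find "${dir}" -name BUILD -delete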

All of those requirements meant that this initial version was already 124 lines long! If you look at the history of this file (continued here after the experimental move), you can see that I was far from done making changes even after all those discoveries. Here are some of the additional problems I tackled over the last six months:

  • Added (and then removed) some debug logging so I could tell what archives were being downloaded.
  • Switched the MD5 command line tool the script used to something that was present by default on MacOS, and that had a nicer interface as a bonus, so I no longer had to write to a temporary file.
  • Tried one approach to dealing with persistent ‘56’ errors from curl by adding retries.
  • When that failed to solve the problem, tried manually looping and retrying curl.
  • Pete Blacker fixed a logic problem where we didn’t raise an error if an unsupported archive file suffix was passed in.
  • During an Arm AIoT workshop I found that some users hit problems when curl wasn’t installed on their machine for the first run of the build script: after they installed it and re-ran the build, the download folders had already been created but were empty, leading to weird errors later in the process. I fixed this by erroring out early if curl isn’t present (that check, along with the manual retry loop, is sketched just after this list).
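
To illustrate the shape of those two curl fixes, the early check and the manual retry loop, here’s a minimal sketch that again reuses the url and tempfile variables from the earlier snippet; the retry count and delay are made up for the example, not taken from the checked-in script:

# Fail fast with a clear message if curl is missing, rather than leaving behind
# empty download folders that cause confusing errors later in the build.
if ! command -v curl >/dev/null; then
  echo "Please install curl and re-run this script." >&2
  exit 1
fi

# curl's built-in retries didn't catch the intermittent '56' errors, so loop manually.
max_attempts=5  # Illustrative values, not what the real script uses.
attempt=1
until curl -Ls --fail "${url}" -o "${tempfile}"; do
  if [[ "${attempt}" -ge "${max_attempts}" ]]; then
    echo "Download of ${url} failed after ${max_attempts} attempts." >&2
    exit 1
  fi
  attempt=$((attempt + 1))
  sleep 2
done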

What I found most interesting about dealing with these issues is how few of them were easily predictable. Pete Blacker’s logic fix was one that could have been found by inspection (not having a final else or default on a switch statement is a classic lintable error), but most of the other problems were only visible once the script became heavily used across a variety of systems. For example, I put a lot of time into mitigating the occasional ‘56’ errors from curl because they showed up intermittently (maybe 10% of the time or less) for one particular archive URL. They bypassed curl’s retry logic (presumably because they weren’t surfaced at the HTTP error code level?), and since they were hard to reproduce consistently I had to make several attempts at different approaches and test them in production to see which techniques worked.

A less vexing issue was the md5sum tool not being present by default on MacOS, but this was also one that I wouldn’t have caught without external testing, because my Mac environment did have it installed.

This script came to mind as I was thinking back over the year for a few reasons. One of them was that I spent a non-trivial amount of time writing and debugging it, despite its small size and the apparent simplicity of the problem it tackled. Even in apparently glamorous fields like machine learning, 90% of the work is nuts-and-bolts integration like this. If anything, you should be doing more of it as you become more senior, since it requires a subtle understanding of the whole system and its requirements, but doesn’t look impressive from the outside. Save the easier-to-explain projects for more junior engineers; they need them for their promotion packets.

The reason this kind of work is so hard is precisely because of all the funky requirements and edge cases that only become apparent when code is used in production. As a young engineer my first instinct when looking at a snarl of complex code for something that looked simple on the surface was to imagine the original authors were idiots. I still remember scoffing at the Diablo PC programmers as I was helping port the codebase to the Playstation because they used inline assembler to do a simple signed to unsigned cast. My lead, Gary Liddon, very gently reminded me that they had managed to ship a chart-topping game and I hadn’t, so maybe I had something to learn from their approach?

Over the last two decades of engineering I’ve come to appreciate the wisdom of Chesterton’s Fence, the idea that it’s worth researching the history of a solution I don’t understand before I replace it with something new. I’m not claiming that the shell script I’m highlighting is a coding masterpiece; in fact I have a strong urge to rewrite it because I know so many of its flaws. What holds me back is that I know how valuable the months of testing in production have been, and how any rewrite would probably be a step backwards because it would miss the requirements that aren’t obvious. There’s a lot I could capture in unit tests (and one of the script’s flaws is that it doesn’t have any), but it’s not always possible or feasible to specify everything a component needs to do at that level.

One of the hardest things to learn in engineering is good judgment, because it’s always subjective and usually relies on pattern matching to previous situations. Making a choice about how to approach maintaining legacy software is no different. In its worst form Chesterton’s Fence is an argument for never changing anything, and I’d hate to see progress on any project get stymied by excessive caution. I do try to encourage the engineers I work with (and remind myself) that we have a natural bias towards writing new code rather than reusing existing components, though. If even a minor shell script for downloading packages has requirements that are only discoverable through seasoning in production, what issues might you unintentionally introduce if you change a more complex system?