Deep Learning is Eating Software

Photo by John Watson

When I had a drink with Andrej Karpathy a couple of weeks ago, we got to talking about where we thought machine learning was going over the next few years. Andrej threw out the phrase “Software 2.0”, and I was instantly jealous because it captured the process I see happening every day across hundreds of projects. I held my tongue until he got his blog post out there, but now I want to expand my thoughts on this too.

The pattern is that there’s an existing software project doing data processing using explicit programming logic, and the team charged with maintaining it find they can replace it with a deep-learning-based solution. I can only point to examples within Alphabet that we’ve made public, like upgrading search ranking, data center energy usage, language translation, and solving Go, but these aren’t rare exceptions internally. What I see is that almost any data processing system with non-trivial logic can be improved significantly by applying modern machine learning.

This might sound less than dramatic when put in those terms, but it’s a radical change in how we build software. Instead of writing and maintaining intricate, layered tangles of logic, the developer has to become a teacher, a curator of training data and an analyst of results. This is very, very different than the programming I was taught in school, but what gets me most excited is that it should be far more accessible than traditional coding, once the tooling catches up.

The essence of the process is providing a lot of examples of inputs, and what you expect for the outputs. This doesn’t require the same technical skills as traditional programming, but it does need a deep knowledge of the problem domain. That means motivated users of the software will be able to play much more of a direct role in building it than has ever been possible. In essence, the users are writing their own user stories and feeding them into the machinery to build what they want.

Andrej focuses on areas like audio and speech recognition in his post, but I’m actually arguing that there will be an impact across many more domains. The classic “Machine Learning: The High-Interest Credit Card of Technical Debt” identifies a very common pattern where machine learning systems become embedded in deep stacks of software. What I’m seeing is that the problem is increasingly solved by replacing the whole stack with a deep learning model! Taking the analogy to breaking point, this is like consolidating all your debts into a single loan with lower payments. A single model is far easier to improve than a set of deeply interconnected modules, and the maintenance becomes far easier. For many large systems there’s no one person who can claim to understand what they’re actually doing anyway, so there’s no real loss in debuggability or control.

I know this will all sound like more deep learning hype, and if I wasn’t in the position of seeing the process happening every day I’d find it hard to swallow too, but this is real. Bill Gates is supposed to have said “Most people overestimate what they can do in one year and underestimate what they can do in ten years”, and this is how I feel about the replacement of traditional software with deep learning. There will be a long ramp-up as knowledge diffuses through the developer community, but in ten years I predict most software jobs won’t involve programming. As Andrej memorably puts it, “[deep learning] is better than you”!

How do CNNs Deal with Position Differences?

An engineer who’s learning about using convolutional neural networks for image classification just asked me an interesting question: how does a model know how to recognize objects in different positions in an image? Since this actually requires quite a lot of explanation, I decided to write up my notes here in case they help some other people too.

Here are two example images showing the problem that my friend was referring to:

[Figure: CNN Position 0]

If you’re trying to recognize all images with the sun shape in them, how do you make sure that the model works even if the sun can be at any position in the image? It’s an interesting problem because there are really three stages of enlightenment in how you perceive it:

  • If you haven’t tried to program computers, it looks simple to solve because our eyes and brain have no problem dealing with the differences in positioning.
  • If you have tried to solve similar problems with traditional programming, your heart will probably sink because you’ll know both how hard dealing with input differences will be, and how tough it can be to explain to your clients why it’s so tricky.
  • As a certified Deep Learning Guru, you’ll sagely stroke your beard and smile, safe in the knowledge that your networks will take such trivial issues in their stride.

My friend is at the third stage of enlightenment, but is smart enough to realize that there are few accessible explanations of why CNNs cope so well. I don’t claim to have any novel insights myself, but over the last few years of working with image models I have picked up some ideas from experience, and heard folklore passed down through the academic family tree, so I want to share what I know. I would welcome links to good papers on this, since I’m basing a lot of this on hand-wavey engineering intuition, so please do help me improve the explanation!

The starting point for understanding this problem is that networks aren’t naturally immune to positioning issues. I first ran across this when I took networks trained on the ImageNet collection of photos, and ran them on phones. The history of ImageNet itself is fascinating. Originally, Google Image Search was used to find candidate images from the public web by searching for each class name, and then researchers went through the candidates to weed out any that were incorrect. My friend Tom White has been having fun digging through the resulting data for anomalies, and found some fascinating oddities like a large number of female models showing up in the garbage truck category! You should also check out Andrej Karpathy’s account of trying to label ImageNet pictures by hand to understand more about its characteristics.

The point for our purposes is that all the images in the training set are taken by people and published on websites that rank well in web searches. That means they tend to be more professional than a random snapshot, and in particular they usually have the subject of the image well-framed, near the center, taken from a horizontal angle, and taking up a lot of the picture. By contrast, somebody pointing a phone’s live camera at an object to try out a classifier is more likely to be at an odd angle, maybe from above, and may only have part of the object in frame. This meant that models trained on ImageNet had much worse perceived performance when running on phones than the published accuracy statistics would suggest, because the training data was so different from what they were given by users. You can still see this for yourself if you install the TensorFlow Classify application on Android. It isn’t bad enough to make the model useless on mobile phones, since there’s still usually some framing by users, but it’s a much more serious problem on robots and similar devices. Since their camera positioning is completely arbitrary, ImageNet-trained models will often struggle seriously. I usually recommend that developers of those applications gather their own training sets captured on similar devices, since there are also often other differences like fisheye lenses.

Even so, within ImageNet there is still a lot of variance in positioning, so how do networks cope so well? Part of the secret is that training often includes adding artificial offsets to the inputs, so that the network has to learn to cope with these differences.

[Figure: CNN Position 1]

Before each image is fed into the network, it can be randomly cropped. Because all inputs are squashed to a standard size (often around 200×200 or 300×300), this has the effect of randomizing the positioning and scale of objects within each picture, as well as potentially cutting off sections of them. The network is still punished or rewarded for its answers, so to get good performance it has to be able to guess correctly despite these differences. This explains why networks learn to cope with positioning changes, but not how.
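As a concrete illustration, here’s a rough sketch of what a random crop step might look like, written as plain Python with NumPy. The function name, crop fractions, and 224×224 output size are all just illustrative choices here, not the settings of any particular training pipeline.

import numpy as np

def random_crop_and_resize(image, out_size=224, min_fraction=0.6):
    # image is an HxW or HxWxC NumPy array; the crop fraction range and the
    # output size are illustrative values, not any pipeline's real settings.
    height, width = image.shape[:2]
    shorter = min(height, width)
    # Pick a crop size somewhere between min_fraction and 100% of the
    # shorter side, then a random top-left corner that keeps it in bounds.
    crop = int(shorter * np.random.uniform(min_fraction, 1.0))
    top = np.random.randint(0, height - crop + 1)
    left = np.random.randint(0, width - crop + 1)
    patch = image[top:top + crop, left:left + crop]
    # Nearest-neighbor resize back to a standard input size, just to keep the
    # example dependency-free; real pipelines use better resampling.
    indices = np.arange(out_size) * crop // out_size
    return patch[indices][:, indices]

That covers why networks are forced to learn some tolerance for position changes; the how is less obvious.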

To delve into that, I have to dip into a bit of folklore and analogy. I don’t have research to back up what I’m going to offer as an explanation, but in my experiments and discussions with other practitioners, it seems pretty well accepted as a working theory.

Ever since the seminal AlexNet, CNNs have been organized as consecutive layers feeding data through to a final classification operation. We think of the initial layers as edge detectors, looking for very basic pixel patterns, and then each subsequent layer takes those as inputs and guesses higher and higher level concepts as you go deeper. You can see this most easily if you view the filters for the first layer of a typical network:

[Figure: CaffeNet first-layer learned filters]

Image by Evan Shelhamer, from CaffeNet

What this shows are the small patterns that each filter is looking for. Some of them are edges in different orientations, others are colors or corners. Unfortunately we can’t visualize later layers nearly as simply, though Jason Yosinski and others have some great resources if you do want to explore that topic more.

Here’s a diagram to try to explain the concepts involved:

[Figure: CNN Position 2]

What it’s trying to show is that the first layer is looking for very simple pixel patterns in the image, like horizontal edges, corners, or patches of solid color. These are similar to the filters shown in the CaffeNet illustration just above. As these are run across the input image, they output a heat map highlighting where each pattern matches.
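To make the heat map idea concrete, here’s a tiny NumPy sketch that slides a single hypothetical first-layer filter (a horizontal edge detector) across a grayscale image and records how strongly it matches at each position. The filter values and the toy image are made up purely for illustration.

import numpy as np

def filter_heatmap(image, kernel):
    # Valid cross-correlation of one 2D filter with a grayscale image. Bright
    # spots in the result mark where the filter's pattern occurs.
    image_height, image_width = image.shape
    kernel_height, kernel_width = kernel.shape
    heatmap = np.zeros((image_height - kernel_height + 1,
                        image_width - kernel_width + 1), dtype=np.float32)
    for y in range(heatmap.shape[0]):
        for x in range(heatmap.shape[1]):
            window = image[y:y + kernel_height, x:x + kernel_width]
            heatmap[y, x] = np.sum(window * kernel)
    return heatmap

# A hypothetical horizontal-edge filter: bright above, dark below.
horizontal_edge = np.array([[ 1.0,  1.0,  1.0],
                            [ 0.0,  0.0,  0.0],
                            [-1.0, -1.0, -1.0]], dtype=np.float32)

# Toy image with a bright top half and dark bottom half, so the heat map
# peaks on the rows closest to the boundary.
image = np.zeros((8, 8), dtype=np.float32)
image[:4, :] = 1.0
print(filter_heatmap(image, horizontal_edge))

Real networks learn their filter values rather than having them hand-coded like this, and they run many filters in parallel, but the sliding-window matching is the same idea.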

The tricky bit to understand is what happens in the second layer. The heatmap for each simple filter in the first layer is put into a separate channel in the activation layer, so the input to the second layer typically has over a hundred channels, unlike the three or four in a typical image. What the second layer is looking for is more complex patterns in combinations of these heatmaps. In the diagram we’re trying to recognize one petal of the sun. We know that it has a sharp corner at one end, that nearby there will be a vertical line, and that the center will be filled with yellow. Each of these individual characteristics is represented by one channel in the input activation layer, and the second layer’s filter for “petal facing left” looks for parts of the image where all three occur together. In areas of the image where only one or two are present, nothing is output, but where all three are there the output of the second layer will show high activation.

Just like with the first layer, there are many filters in the second layer, and you can think of each one as representing a higher-level concept like “petal facing up”, “petal facing right”, and others. This is harder to visualize, but results in an activation layer with many channels, each representing one of those concepts.

As you go deeper into the network, the concepts get higher and higher level. For example, the third or fourth layer here might activate for yellow circles surrounded by petals, by combining the relevant input channels. From that representation it’s fairly easy to write a simple classifier that spots whenever a sun is present. Of course real-world classifiers don’t represent concepts nearly as cleanly as I’ve laid out above, since they learn how to break down the problem themselves rather than being supplied with human-friendly components, but the same basic ideas hold.
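To show what I mean by a simple classifier, here’s a hypothetical sketch where the network has already boiled the image down to a short vector of high-level channel activations, and spotting a sun is just a weighted sum and a threshold. The weights and numbers are invented for illustration; in a real network they would be learned.

import numpy as np

def sun_present(final_activations, weights, bias, threshold=0.5):
    # final_activations is a short vector of high-level channel activations,
    # for example one entry for a hypothetical "yellow circle surrounded by
    # petals" channel. The weights and bias would normally be learned.
    logit = np.dot(final_activations, weights) + bias
    probability = 1.0 / (1.0 + np.exp(-logit))
    return probability > threshold

# Invented example where the second channel is the sun-like one.
activations = np.array([0.1, 0.9, 0.2])
print(sun_present(activations, weights=np.array([0.0, 4.0, 0.0]), bias=-2.0))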

This doesn’t explain how the network deals with position differences though. To understand that, you need to know about another common design trait of CNNs for image classification. As you go deeper into a network, the number of channels will typically increase, but the size of the image will shrink. This shrinking is done using pooling layers, traditionally with average pooling but more commonly using maximum pooling these days. Either way, the effect is pretty similar.

[Figure: Max pooling]

Here you can see that we take an image and shrink it to half its size. For each output pixel, we look at a 2×2 input patch and choose the maximum value, hence the name maximum pooling. For average pooling, we take the mean of the four values instead.
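Here’s a minimal NumPy sketch of that 2×2 pooling step. It assumes the height and width of the activation map are even, purely to keep the example short.

import numpy as np

def pool_2x2(activations, mode="max"):
    # Shrink an HxW activation map to half its size with 2x2 pooling.
    # Assumes H and W are even, purely to keep the example short.
    h, w = activations.shape
    blocks = activations.reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

example = np.array([[1, 3, 2, 0],
                    [4, 2, 1, 1],
                    [0, 1, 5, 6],
                    [2, 2, 7, 8]], dtype=np.float32)
print(pool_2x2(example, "max"))   # picks the max of each 2x2 block: 4, 2, 2, 8
print(pool_2x2(example, "mean"))  # averages each block: 2.5, 1.0, 1.25, 6.5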

This sort of pooling is applied repeatedly as values travel through the network. This means that by the end, the image size may have shrunk from 300×300 to 13×13. This shrinkage also means that the number of position variations that are possible has shrunk a lot. In terms of the example above, there are only 13 possible horizontal rows for a sun image to appear in, and only 13 vertical columns. Any smaller position differences are hidden because the activations will be merged into the same cell thanks to max pooling. This makes the problem of dealing with positional differences much more manageable for the final classifier, since it only has to deal with a much simpler representation than the original image.

This is my explanation for how image classifiers typically handle position changes, but what about similar problems like offsets in audio? I’ve been intrigued by the recent rise of “dilated” or “atrous” convolutions that offer an alternative to pooling. Just like max pooling, these produce a smaller output image, but they do it within the context of the convolution itself. Rather than sampling adjacent input pixels, they look at samples separated by a stride, which can potentially be quite large. This gives them the ability to pull non-local information into a manageable form quite quickly, and it’s part of the magic of DeepMind’s WaveNet paper, letting them tackle a time-based problem using convolutions rather than recurrent neural networks.
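Here’s a rough NumPy sketch of a one-dimensional dilated convolution, since audio is where this shows up most. The two-tap kernel and the dilation rates are arbitrary choices for illustration, and real implementations like WaveNet add padding, many channels, and gating on top of this.

import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    # 1D convolution whose taps are spaced `dilation` samples apart. With
    # dilation=1 this is an ordinary convolution; larger dilations let a
    # small kernel cover a much wider span of the input.
    taps = len(kernel)
    span = (taps - 1) * dilation  # how far back the filter reaches
    output = np.zeros(len(signal) - span, dtype=np.float32)
    for t in range(len(output)):
        # Sample the input every `dilation` steps instead of contiguously.
        window = signal[t:t + span + 1:dilation]
        output[t] = np.dot(window, kernel)
    return output

signal = np.arange(16, dtype=np.float32)
kernel = np.array([0.5, 0.5], dtype=np.float32)  # simple two-tap average
print(dilated_conv1d(signal, kernel, dilation=1))  # averages adjacent samples
print(dilated_conv1d(signal, kernel, dilation=4))  # averages samples four apart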

I’m excited by this because RNNs are a pain to accelerate. If you’re dealing with a batch size of one, as is typical with real-time applications, then most of the compute is matrix-times-vector multiplications, the equivalent of fully-connected layers. Since every weight is only used once, the calculations are memory bound, rather than compute bound as is typically the case with convolutions. Hence I have my fingers crossed that this approach becomes more common in other domains!

Anyway, thanks for making it this far. I hope the explanation is helpful, and I look forward to hearing ideas on improving it in the comments or on twitter.

The Joy of an Indian Paradox

When I was growing up in England I never tasted garlic in my cooking, let alone any spice. Then I moved away to Manchester and found myself in a world of Indian food I’d never imagined! On a Saturday night I’d walk along the Curry Mile in Rusholme and find our group of students being implored to enter the dozens of restaurants along the strip. The sheer joy of being able to devour succulent chicken on freshly baked naan has never left me, but I’ve also learned over the years how much more the subcontinent has to offer. Even as I’ve lived in suburbs like Simi Valley, I’ve always been able to find an Indian restaurant that taught me something new about the cuisine.

When I arrived in San Francisco, I have to confess I was a bit disappointed. Down near San Jose there were some amazing Indian experiences, but nothing I tried locally really hit the spot. That’s why I was so excited when Indian Paradox opened in my Divisadero neighborhood a couple of years ago. The owner Kavitha has a unique vision of pairing South Indian street food with the perfect wines in a combination I’ve never heard of anywhere else. She’s able to conjure up delicacies like Dabeli potato burgers and Kanda Batata Poha flattened rice, and pair them with delicious Zinfandels and Mosels to create something I’ve never been able to experience anywhere else.

I’ve been a frequent enough visitor to hear a little of Kavitha’s story, and her journey from Chennai to San Francisco, via Alabama. She’s driven by her love of the food, and when that’s combined with deep knowledge of wine, it gives an experience I don’t think you could find anywhere else in the world. I’ve never found great pairings with Indian food before. In Manchester the best I could hope for was a clean Kingfisher lager that wouldn’t clash with the spice, but somehow the right wines feel like the ingredient I’ve been missing in my Indian meals all these years.

Anyway, it’s a small local business that I love, so I wanted to share a little of my own enthusiasm with the world. If you’re ever in San Francisco and love food, I highly encourage you to make it along to Indian Paradox, and say hi from me!

Cross-compiling TensorFlow for the Raspberry Pi

Photo by oatsy40

I love the Raspberry Pi because it’s such a great platform for software to interact with the physical world. TensorFlow makes it possible to turn messy, chaotic sensor data from cameras and microphones into useful information, so running models on the Pi has enabled some fascinating applications, from predicting train times and sorting trash to helping robots see and even avoiding traffic tickets!

It’s never been easy to get TensorFlow installed on a Pi though. I had created a makefile script that let you build the C++ part from scratch, but it took several hours to complete and didn’t support Python. Sam Abrahams, an external contributor, did an amazing job maintaining a Python pip wheel for major releases, but building it required you to add swap space on a USB device for your Pi, and took even longer to compile than the makefile approach. Snips managed to get TensorFlow cross-compiling for Rust, but it wasn’t clear how to apply this to other languages.

Plenty of people on the team are Pi enthusiasts, and happily Eugene Brevdo dived in to investigate how we could improve the situation. We knew we wanted to have something that could be run as part of TensorFlow’s Jenkins continuous integration system, which meant building a completely automatic solution that would run with no user intervention. Since having a Pi plugged into a machine to run something like the makefile build would be hard to maintain, we did try using a hosted server from Mythic Beasts. Eugene got the makefile build going after a few hiccups, but the Python version required more RAM than was available, and we couldn’t plug in a USB drive remotely!

Cross compiling, building on an x86 Linux machine but targeting the Pi, looked a lot more maintainable, but also more complex. Thankfully we had the Snips example to give us some pointers, a kindly stranger had provided a solution to a crash that blocked me last time I tried it, and Eugene managed to get an initial version working.

I was able to take his work, abstract it into a Docker container for full reproducibility, and now we have nightly builds running as part of our main Jenkins project. If you just want to try it out for Python 2.7, run:

sudo apt-get install libblas-dev liblapack-dev python-dev \
  libatlas-base-dev gfortran python-setuptools
sudo pip2 install \
  http://ci.tensorflow.org/view/Nightly/job/nightly-pi/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.4.0-cp27-none-any.whl

This can take quite a while to complete, largely because it looks like the SciPy compilation is extremely slow. Once it’s done, you’ll be able to run TensorFlow in Python 2. If you get an error about the .whl file not being found at that URL, the version number may have changed. To find the correct name, go to  http://ci.tensorflow.org/view/Nightly/job/nightly-pi/lastSuccessfulBuild/artifact/output-artifacts/ and you should see the new version listed.

For Python 3.4 support, you’ll need to use a different wheel and pip instead of pip2, like this:

sudo apt-get install libblas-dev liblapack-dev python-dev \
  libatlas-base-dev gfortran python-setuptools
sudo pip install \
  http://ci.tensorflow.org/view/Nightly/job/nightly-pi-python3/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.4.0-cp34-none-any.whl

If you’re running Python 3.5, you can use the same wheel but with a slight change to the file name, since that encodes the version. You will see a couple of warnings every time you import tensorflow, but it should work correctly.

sudo apt-get install libblas-dev liblapack-dev python-dev \
  libatlas-base-dev gfortran python-setuptools
curl -O http://ci.tensorflow.org/view/Nightly/job/nightly-pi-python3/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.4.0-cp34-none-any.whl
mv tensorflow-1.4.0-cp34-none-any.whl tensorflow-1.4.0-cp35-none-any.whl
sudo pip install tensorflow-1.4.0-cp35-none-any.whl

If you have a Pi Zero or One that you want to use TensorFlow on, you’ll need to use an alternative wheel that doesn’t include NEON instructions. This is a lot slower than the one above that’s optimized for the Pi Two and above, so I don’t recommend you use it on newer devices. Here are the commands for Python 2.7:

sudo apt-get install libblas-dev liblapack-dev python-dev \
  libatlas-base-dev gfortran python-setuptools
sudo pip2 install \
  http://ci.tensorflow.org/view/Nightly/job/nightly-pi-zero/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.4.0rc1-cp27-none-any.whl

Here is the Python 3.4 version for the Pi Zero:

sudo apt-get install libblas-dev liblapack-dev python-dev \
  libatlas-base-dev gfortran python-setuptools
sudo pip install \
  http://ci.tensorflow.org/view/Nightly/job/nightly-pi-zero-python3/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.4.0-cp34-none-any.whl

And here are the Python 3.5 instructions:

sudo apt-get install libblas-dev liblapack-dev python-dev \
  libatlas-base-dev gfortran python-setuptools
curl -O http://ci.tensorflow.org/view/Nightly/job/nightly-pi-zero-python3/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.4.0-cp34-none-any.whl
mv tensorflow-1.4.0-cp34-none-any.whl tensorflow-1.4.0-cp35-none-any.whl
sudo pip install tensorflow-1.4.0-cp35-none-any.whl

I’ve found the scipy compilation on Pi Zeros/Ones is so slow (many hours) that it isn’t feasible to wait for it to complete. Instead I’ve found myself pressing Control-C to cancel when it’s in the middle of a scipy-related compile step, and then re-running the install with the ‘--no-deps’ flag to skip building dependencies. This is extremely hacky, but since scipy is only needed for testing purposes you should still end up with a workable copy of TensorFlow, provided all the other dependencies completed.

If you want to build your own copy of the wheels, you can run this line from within the TensorFlow source root on a Linux machine with Docker installed to build for the Pi Two or Three with Python 2.7:

tensorflow/tools/ci_build/ci_build.sh PI tensorflow/tools/ci_build/pi/build_raspberry_pi.sh

For Python 3.4:

CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.4" tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 tensorflow/tools/ci_build/pi/build_raspberry_pi.sh

For Python 2.7 on the Pi Zero:

tensorflow/tools/ci_build/ci_build.sh PI tensorflow/tools/ci_build/pi/build_raspberry_pi.sh PI_ONE

For Python 3.4 on the Pi Zero:

CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.4" tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 tensorflow/tools/ci_build/pi/build_raspberry_pi.sh PI_ONE

This is all still experimental, so please do file bugs with feedback if these don’t work for you. I’m hoping we will be able to provide official stable Pi binaries for each major release in the future, like we do for Android and iOS, so knowing how well things are working is important to me. I’m also always excited to hear about cool new applications you find for TensorFlow on the Pi, so do let me know what you build too!

A quick hack to align single-word audio recordings

As I’ve been training on the initial results of the speech gathering app, one of the challenges has been aligning the recordings. There can be a delay between somebody hitting record and saying a word, or they can say it very quickly and leave a large gap at the end of the audio file. To improve the results of the training, I wanted to find a way to standardize the start of a word in my input files, since that would also let me shorten the window of audio I’m looking at, and so reduce the overall compute time.

I looked into advanced speech alignment tools like Sphinx, but they had some pretty gnarly dependencies which I was hoping to avoid in a beginning tutorial. They also had a lot of assumptions built in that didn’t transfer well to single word commands, most didn’t have many prebuilt models, and in general they weren’t easy to integrate.

Looking at visualizations of the waveforms from the recordings using the great Fission app, it usually appeared pretty obvious which section had the word, and which parts were background.

[Figure: waveform of a recording, with the spoken word highlighted]

In this example, the word is in the highlighted portion, and the only other peaks come from a noisy click near the end. I was hoping to find an existing tool that would recognize this kind of pattern and help me remove the background, leaving only the part I wanted. I looked at sox’s silence effect and ffmpeg’s silenceremove filter, but I couldn’t get either to work well:

– Sox clipped initial sections of the spoken word, since there was a delay before it recognized ‘non-silence’.

– There was an option to avoid this with ffmpeg, but reliably detecting silence meant normalizing all my clips to a standard volume level, which wasn’t something I wanted to do to speech samples.

I also couldn’t specify that I wanted a particular length of clip. In my case, I knew I wanted a second-long result, because that’s what my models take in, and all the words should fit in that length. Most of the tools out there seemed designed to remove gaps in recorded music, but intuitively it felt like my problem was more like ‘give me the second-long section with the most relevant audio in it’.

As I thought about this, I realized that the speech should be the loudest sustained part of the recording, so if I could slide a contiguous window through the audio data and pick the section that was loudest in total, I might get good results.

To visualize what I mean, imagine a simplified waveform of a two-second long clip:

[Figure: simplified waveform of a two-second clip]

To my untrained eye, it’s clear that the middle section has the most going on. To turn that into a useful definition, I estimated the volume at each point in the file using the absolute value of the PCM sample (volume = abs(value)), and then walked through the clip looking at the total of those volumes over a one-second range. By picking the point where that total is highest:

[Figure: the one-second window with the highest total volume]

You can clip down to a short section with the loudest audio in it:

[Figure: the clipped one-second section]

I’m sure this particular wheel has been invented many times before, but I couldn’t find it in my searches, so I wanted to leave a trail of breadcrumbs for anyone else stuck with a similar problem. Hopefully people with more experience in this domain will also leave comments offering other suggestions!

The code itself is very straightforward, and I’ve put it up at https://github.com/petewarden/extract_loudest_section. The command line interface has only been designed for my particular use case, with one second hardcoded as the desired window length, only folders of .wavs supported, and no build file for anything other than OS X. It should be easy to port to your own system though, since it doesn’t have any dependencies outside of POSIX and the C/C++ standard libraries.

The only real point of interest is that it doesn’t recalculate the whole sum at every sample. Instead it keeps a running total, subtracting the volume of the sample that leaves the window as it moves forward in time and adding in the volume of the new one, which keeps the latency very low.

// Prime the running total with the volume of the first window
// (desired_samples is the number of samples in one second of audio).
float current_volume_sum = 0.0f;
for (int64_t i = 0; i < desired_samples; ++i) {
  const float input_value = input[i];
  current_volume_sum += fabsf(input_value);
}

// Slide the window forward one sample at a time, updating the running
// total rather than recalculating it, and remember where it peaks.
int64_t loudest_end_index = desired_samples;
float loudest_volume = current_volume_sum;
for (int64_t i = desired_samples; i < input_size; ++i) {
  // Remove the sample that just left the window...
  const float trailing_value = input[i - desired_samples];
  current_volume_sum -= fabsf(trailing_value);
  // ...and add in the one that just entered it.
  const float leading_value = input[i];
  current_volume_sum += fabsf(leading_value);
  if (current_volume_sum > loudest_volume) {
    loudest_volume = current_volume_sum;
    loudest_end_index = i;
  }
}

What I’ve learned about neural network quantization

Photo by badjonni

It’s been a while since I last wrote about using eight bit for inference with deep learning, and the good news is that there has been a lot of progress, and we know a lot more than we did even a year ago. There are still a lot of unanswered questions too, which is why I’m waiting for a plane to take me to MobiSys, where I’ll be helping Nic Lane from UCL run a workshop for the research community to investigate some of them.

As a foundation for that, I’ll be giving a talk on what I know now, and what my hunches are. A lot of it is empirical, and we don’t have nearly enough rigorous experiments, let alone published papers, but if you take all this as provisional I hope it might still be useful. I’m also very happy to acknowledge my deep debt to my Google colleagues and others like Song Han who are the driving forces behind much of this work! Here are my notes on the areas I’ll be covering tomorrow.

Hardware implementations

Now that the original TPU paper has been published, we can use it as a successful example of using eight bit for inference across a wide variety of models within Google. There’s also the collaboration between the Qualcomm and TensorFlow teams that enables models to run up to seven times faster on the HVX DSP than on the CPU, thanks to the use of eight bit. This means we now have more evidence that this is a good approach to use on the hardware side.

Training with forward passes

I don’t have any published papers to hand, and we haven’t documented it well within TensorFlow, but we do have support for “fake quantization” operators. If you include these in your graphs at the points where quantization is expected to occur (for example after convolutions), then in the forward pass the float values will be rounded to the specified number of levels (typically 256) to simulate the effects of quantization. In the backward pass, this rounding won’t be performed, so gradients will be calculated using full float values. This has the effect of forcing the graph to adapt to the lower precision it will encounter during inference, and in practice we’ve seen this improve the accuracy of the quantized graph dramatically, sometimes to a level indistinguishable from float. It also gives precalculated min/max ranges for the 32-bit to 8-bit downscaling that needs to happen after many operations. This saves a step on the CPU, but for hardware implementations it’s even more important, since a dynamically-calculated range may be impossible to efficiently implement.
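As an illustration of the forward-pass math (this is just the quantize-then-dequantize arithmetic, not TensorFlow’s actual operator), here’s a small NumPy sketch for a given min/max range and 256 levels.

import numpy as np

def fake_quantize(values, range_min, range_max, levels=256):
    # Simulate quantization in the forward pass: clip to the range, round to
    # one of `levels` evenly spaced steps, then convert back to float. During
    # training the backward pass ignores this rounding and uses full float
    # gradients, which is what lets the graph adapt to the lower precision.
    scale = (range_max - range_min) / (levels - 1)
    clipped = np.clip(values, range_min, range_max)
    quantized = np.round((clipped - range_min) / scale)
    return quantized * scale + range_min

activations = np.array([-1.2, -0.4, 0.0, 0.37, 2.5])
print(fake_quantize(activations, range_min=-1.0, range_max=1.0))

Notice that with this particular range an input of exactly 0.0 comes back as roughly 0.004, which is exactly the zero-representation issue I dig into below.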

By the way, if you do want fixed ranges but can’t retrain, there are some options for running example data through a pretrained network to bake them in instead.

Exact zeroes are important

The current TensorFlow way of figuring out ranges just looks at the min/max of the float values and assigns those to 0 and 255. This means that real zero is almost always not exactly representable, and the closest encoded value may represent something like 0.046464, or some other arbitrary distance from exact zero. For most numbers this doesn’t matter, because the float values are assumed to occur in a ‘random’ enough way that the error on the representation of any individual value is also uniformly random. The idea is that as long as the errors generally cancel each other out, they’ll just appear as the kind of random noise that the network is trained to cope with and so not destroy the overall accuracy by introducing a bias.

The problem is that the real value of zero shows up a lot more often than you’d expect in neural network calculations. Convolutions are padded with zeros when their filters overlap the edges of the image, and the ReLU activation function clamps any negative numbers to zero. This means that any error in the zero representation contributes disproportionately to overall results.

The solution to this is to ensure that real values of zero are represented as exactly as possible in the quantized encoding. The way to do this is to nudge the overall min/max values so that zero is exact. We’re not (yet) doing this in TensorFlow, but hope to have it in soon. For much more information, Benoit Jacob has some excellent documentation in gemmlowp, and is the source of most of the information above.
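To make the nudging idea concrete, here’s a small sketch of one way to adjust a range so that real zero lands exactly on an integer code. gemmlowp’s actual logic differs in the details, so treat this as an illustration of the principle rather than the library’s implementation.

import numpy as np

def nudge_range(range_min, range_max, levels=256):
    # Adjust (range_min, range_max) so that 0.0 lands exactly on one of the
    # integer codes 0..levels-1. A sketch of the idea only.
    scale = (range_max - range_min) / (levels - 1)
    # Find the integer code closest to where exact zero would fall.
    zero_point = int(round(-range_min / scale))
    zero_point = min(max(zero_point, 0), levels - 1)
    # Shift the range so that code `zero_point` decodes to exactly 0.0.
    nudged_min = -zero_point * scale
    nudged_max = nudged_min + (levels - 1) * scale
    return nudged_min, nudged_max, zero_point

# With a raw range of [-0.9, 1.1] exact zero isn't representable; after
# nudging, code 115 decodes to exactly 0.0 and the range barely moves.
print(nudge_range(-0.9, 1.1))  # roughly (-0.902, 1.098, 115)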

Asymmetric ranges are inconvenient, but may be necessary

Constraining the min/max ranges so that the minimum is always the negative of the maximum is very convenient for a lot of purposes because it avoids having to apply an offset to the operands to the matrix multiply. Unfortunately the evidence for whether this allows for enough precision is mixed, with some models showing unacceptable loss of overall accuracy. This is still an open question, and an area where we need more experiments.

Excluding -128 can be useful

One practical issue that has come up in various contexts is that signed eight-bit values run from -128 to +127. This is inconvenient because there’s one more value on the negative side than the positive, so it requires careful handling if we want to use symmetric ranges and ensure zero is exactly representable as encoded zero. Unrelatedly, it’s also been helpful in the ARM NEON CPU implementation to avoid -128 for the weights to allow a faster code path. There’s not much principle behind it yet, but there is some evidence that avoiding -128 in general may be helpful.
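As an illustration of what that constraint looks like in practice (not any particular kernel’s code), here’s a sketch of symmetric weight quantization that deliberately skips -128, so zero encodes exactly as code zero and the range stays balanced around it.

import numpy as np

def quantize_symmetric(weights, num_bits=8):
    # Symmetric signed quantization that deliberately skips -2^(num_bits-1).
    # Codes run from -(2^(num_bits-1) - 1) to +(2^(num_bits-1) - 1), so for
    # eight bits that's -127..127: zero encodes exactly as 0 and the range
    # is balanced around it.
    max_code = 2 ** (num_bits - 1) - 1          # 127 for eight bits
    scale = np.max(np.abs(weights)) / max_code  # one scale for the tensor
    codes = np.clip(np.round(weights / scale), -max_code, max_code)
    return codes.astype(np.int8), scale

weights = np.array([-0.52, -0.1, 0.0, 0.3, 0.49], dtype=np.float32)
codes, scale = quantize_symmetric(weights)
print(codes, scale)  # codes are [-127, -24, 0, 73, 120]; -128 never appears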

Lower bit depths are promising, but unproven

There have been some fantastic papers around four-bit, two-bit, or even one-bit precision for neural networks. Unfortunately, so far they’ve all had practical drawbacks that have prevented us from taking advantage of them. Song Han’s four-bit weights require a lookup table, which makes them hard to implement efficiently at runtime, though I’m intrigued to know if a simple function to handle the nonlinear distribution might work as well and be easier to optimize. We haven’t been able to achieve the accuracy we need on the models we care about using lower bit depths, even with four-bit linear encodings. The number of one-bit ops required also seems to scale in a way that negates the advantage of their lower precision. I don’t have any papers or documented experiments to share on this though, and I’m hopeful that these issues can be overcome in the future, so I’ll be keeping a close eye on the literature.

Models are important

A lot of what I’m discussing above are fairly low-level optimizations, but as we know from software engineering, the biggest gains are often to be found higher up the stack. Switching to a more efficient sorting algorithm will probably do more for traditional code than rewriting a less-suited one in assembler. In the same spirit, altering the model architectures so that there’s less work to do is usually a much bigger win than tweaking the bit depth. That’s why I was very pleased that we could release the Mobilenet family of models. These substantially reduce the amount of computation needed, and also work well with quantization, thanks to hard work by Andrew Howard, Benoit Jacob, Dmitry Kalenichenko, and the rest of the Mobile Vision team.

As we keep pushing on quantization, this sort of co-design between researchers and implementers is crucial to get the best results. I think there’s a whole new field beginning to emerge, which I’m not sure whether to call ML Engineering or ML Systems, looking at the whole lifecycle of a deep learning solution, all the way from initial research through to deployment in production. It’s only with that sort of integrated view that we’re going to be able to solve some of the outstanding problems we’re still facing.

Can you help me gather open speech data?

Photo by The Alien Experience

I miss having a dog, and I’d love to have a robot substitute! My friend Lukas built a $100 Raspberry Pi robot using TensorFlow to wander the house and recognize objects, and with the person detection model it can even follow me around. I want to be able to talk to my robot though, and at least have it understand simple words. To do that, I need to write a simple speech recognition example for TensorFlow.

As I looked into it, one of the biggest barriers was the lack of suitable open data sets. I need something with thousands of labelled utterances of a small set of words, from a lot of different speakers. TIDIGITS is a pretty good start, but it’s a bit small, a bit too clean, and more importantly you have to pay to download it, so it’s not great for an open source tutorial.  I like https://github.com/Jakobovski/free-spoken-digit-dataset, but it’s still small and only includes digits. LibriSpeech is large enough, but isn’t broken down into individual words, just sentences.

To solve this, I need your help! I’ve put together a website at https://open-speech-commands.appspot.com/ (now at https://aiyprojects.withgoogle.com/open_speech_recording) that asks you to speak about 100 words into the microphone, records the results, and then lets you submit the clips. I’m then hoping to release an open source data set out of these contributions, along with a TensorFlow example of a simple spoken word recognizer. The website itself is a little Flask app running on GCE, and the source code is up on github. I know it doesn’t work on iOS unfortunately, but it should work on Android devices, and any desktop machine with a microphone.

[Screenshot of the recording site]

I’m hoping to get as large a variety of accents and devices as possible, since that will help the recognizer work for as many people as possible, so please do take five minutes to record your contributions if you get a chance, and share with anyone else who might be able to help!