Introducing Moonshine, the new state of the art for speech to text

Can you imagine using a keyboard where a key press took two seconds to show up on screen? That’s the typical latency for most voice interfaces, so it’s no wonder they’ve failed to catch on for most people. Today we’re open sourcing Moonshine, a new speech to text model that returns results faster and more efficiently than the current state of the art, OpenAI’s Whisper, while matching or exceeding its accuracy. The paper has the full details, but the key improvements are an architecture that offers an overall 1.7x speed boost over Whisper, and a flexibly-sized input window. This variable-length input matters because Whisper always works with 30 second chunks of audio, so even if you only have a few seconds of speech you have to zero-pad the input and process far more data than you need. Together, these two improvements make us five times faster than Whisper on ten second audio clips!
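To make the padding overhead concrete, here’s a rough sketch of what that fixed window costs. The 16 kHz sample rate and 30 second window are Whisper’s; the snippet is only an illustration of the arithmetic, not Moonshine’s actual preprocessing code:

```python
import numpy as np

SAMPLE_RATE = 16_000       # sample rate Whisper's frontend expects
WHISPER_WINDOW_S = 30      # Whisper's encoder always sees a fixed 30 s window

def pad_to_fixed_window(audio: np.ndarray) -> np.ndarray:
    """Zero-pad a clip out to the fixed 30 s window Whisper expects."""
    target_len = WHISPER_WINDOW_S * SAMPLE_RATE
    return np.pad(audio, (0, max(0, target_len - len(audio))))

ten_second_clip = np.zeros(10 * SAMPLE_RATE, dtype=np.float32)
padded = pad_to_fixed_window(ten_second_clip)

# A fixed-window encoder processes 3x the samples for this clip; a
# variable-length window lets the work shrink with the audio instead.
print(len(padded) / len(ten_second_clip))  # -> 3.0
```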

To understand what that means in practice, you can check out our Torre translator. The speed of Moonshine means we can offer almost instant translations as people are talking, making for conversations that feel much more natural than those with existing solutions.

Even better, the low resource demands of Moonshine allow us to run everything locally on the device, without any network connection, safeguarding privacy and letting us run anywhere in the world, instantly.

We founded Useful to help machines understand us better, and we’re proud to share this new step forward in speech to text, since voice interfaces are a vital part of that mission. Moonshine doesn’t just help us with products like Torre; its unique design makes it possible to fit full automatic speech recognition onto true embedded hardware. We’ve found the biggest obstacle to running ASR on microcontrollers and DSPs hasn’t been processing power, since accelerators help with that, but RAM limits. Even the smallest Whisper model requires at least 30MB of RAM, since modern transformers create large dynamic activation layers which can’t be stored in flash or other read-only memory. Because Moonshine’s memory requirements scale with the size of the input window, we are on target to transcribe full sentences a few seconds long in 8MB of RAM or less.
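As a back-of-envelope illustration of why that scaling matters, here’s a sketch of how encoder activation memory grows with the input window. The frame rate, layer count, hidden size, and buffer count below are made-up placeholders, not Moonshine’s real dimensions; the linear relationship between window length and RAM is the only point:

```python
def activation_bytes(seconds, frames_per_second=50, hidden_dim=288,
                     n_layers=6, live_buffers=3, bytes_per_value=4):
    """Rough upper bound on encoder activation memory.

    All parameters are illustrative placeholders: we assume each layer keeps
    a few (frames x hidden_dim) float32 buffers alive at once, so the total
    grows linearly with the length of the audio window.
    """
    frames = seconds * frames_per_second
    return frames * hidden_dim * n_layers * live_buffers * bytes_per_value

for seconds in (3, 10, 30):
    print(f"{seconds:>2}s window -> ~{activation_bytes(seconds) / 1e6:.1f} MB of activations")
```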

I can’t wait to see what people are able to build with these new models, especially on resource-constrained platforms like the Raspberry Pi, where running full speech to text has been challenging. Please do get in touch if you’ve built something neat; we’d love to hear from you!

Update – I talk a bit more about Moonshine on YouTube at youtu.be/sZVTisKqJtA.

AI PCs aren’t very good at AI

I’ve long been a fan of Qualcomm’s NPUs, and I even collaborated with them to get experimental support for the underlying HVX DSP into TensorFlow back in 2017 (traces remain here). That meant I was very excited when I heard they were bringing those same accelerators to Windows tablets, offering up to 45 trillion ops per second. As soon as the Microsoft Surface Pro version running on Arm was released, we bought a bunch and prepared to use them as the main platform for our instant translation app, since it requires a lot of computing power to run all the transformer models that power it.

Unfortunately I struggled to get anywhere near the advertised performance using the NPU. In fact, in my experience it was usually significantly slower than the CPU. To try to get to the bottom of these issues, I’ve open sourced a benchmark where I try to get the best possible performance on a foundational AI operation, multiplying two large matrices, and show that the NPU is slower than the CPU path. I only see 573 billion operations per second, less than 1.3% of the 45 trillion operations per second listed in the specs (and about a quarter of the 2.16 teraops the Nvidia RTX 4080 in my gaming laptop achieves on the same benchmark).
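For context on how figures like that are derived, here’s a minimal sketch of the throughput arithmetic: a matrix multiply performs 2 x M x N x K operations, so dividing by wall-clock time gives operations per second. This toy version only exercises the CPU through numpy, and the matrix sizes are hypothetical rather than the ones the real benchmark uses, but the accounting is the same whichever device runs the math:

```python
import time
import numpy as np

# Hypothetical sizes for illustration; the real benchmark may use different shapes.
M = N = K = 1536
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

_ = a @ b                      # warm up caches and BLAS threads
iters = 50
start = time.perf_counter()
for _ in range(iters):
    _ = a @ b
elapsed = time.perf_counter() - start

ops = 2 * M * N * K * iters    # one multiply + one add per output element per K step
print(f"{ops / elapsed / 1e9:.0f} billion ops/sec")
```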

I’m used to not getting great utilization out of AI acceleration hardware; reaching even 10% of the theoretical maximum throughput is often considered a good result. But I’m disappointed at the 1.3% we’re seeing here. It’s hard to tell where the problem lies, but I’m hoping it’s in the software stack somewhere, since I’ve seen much better performance with similar chips on Android. It could even be an issue with how I’m calling the code, though I’ve tried to follow the documentation as closely as possible. I’m guessing the Onnx runtime, drivers, and on-chip code haven’t had enough work done on them yet, which is good news, because those should all be fixable with software updates. I also miss the ability to compile and run my own operations on the DSP, since that would provide an escape hatch from these issues, but that’s apparently not allowed on Windows.

Hopefully we will get some help solving whatever issues are preventing us from achieving the performance we’d expect. If you have ideas, please feel free to fork the code and give it a try yourself; I’d love to hear from you. I’m still hopeful that the hardware can deliver, but right now it’s very disappointing.

Introducing Torre, a new way to translate

I’m excited to announce Torre, a new product that translates instantly between Spanish and English. A lot of native English speakers I talk to don’t understand why a better approach to translation is needed, since there have been phone apps around for years. The best way I’ve found to explain it is: “Can you imagine watching a foreign language movie using Google Translate?”

I’m an immigrant to the US who was lucky enough to already speak the dominant language, and so I feel like I’ve experienced the whole process on easy mode. When I talk to the children of immigrants from other parts of the world, language brokering for their parents and relatives is a huge part of their lives. Kids end up being thrust into situations like medical appointments, PTA meetings, and legal consultations, often from a young age, and are exposed to aspects of adult life we shouldn’t expect children to deal with. Sometimes professional human translators are theoretically available, but the difficulty of scheduling them, and the awkwardness of alternative phone services, mean that family members are still the most common option.

We’re taking the latest advances in AI language models and using them to offer a fast and fluent experience, aiming to make a live conversation as easy as watching a movie with subtitles. A lot of the situations that need translation also require privacy, so our tablets run with no internet connection at all, air-gapped so there’s no risk of your data leaving the device.

Initially we’re looking for lawyers, doctors, and educators who want to give Torre a try, since those are some of the roles where we think we can be most helpful. Drop me an email if you’d like to know more. I’d love to hear from you even if you don’t fit those categories, since we’re still learning about all the places Torre could be useful.

To show where we’re at with the product, here’s me and my colleague Jackie doing a live demo in a single take!