AI PCs aren’t very good at AI

I’ve long been a fan of Qualcomm’s NPUs, and I even collaborated with them to get experimental support for the underlying HVX DSP into TensorFlow back in 2017 (traces remain here). That meant I was very excited when I heard they were bringing those same accelerators to Windows tablets, offering up to 45 trillion ops per second. As soon as the Microsoft Surface Pro version running on Arm was released, we bought a bunch and prepared to use them as the main platform for our instant translation app, since it takes a lot of compute to run the transformer models behind it.

Unfortunately I struggled to get anywhere near the advertised performance out of the NPU. In fact, in my experience it was usually significantly slower than the CPU. To get to the bottom of these issues, I’ve open sourced a benchmark that tries to achieve the best possible performance on a foundational AI operation, multiplying two large matrices, and shows that the NPU path is slower than the CPU path. I only see 573 billion operations per second, less than 1.3% of the 45 trillion operations per second listed in the specs (and roughly a quarter of the 2.16 teraops per second the Nvidia RTX 4080 in my gaming laptop manages on the same benchmark).
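To give a feel for the shape of the measurement, here’s a minimal Python sketch of timing a single large matrix multiply through ONNX Runtime on the CPU and on the NPU via the QNN execution provider. This is not the benchmark code itself; the matrix size, the float32 data type, and the provider options are illustrative assumptions.

```python
import time
import numpy as np
import onnx
from onnx import TensorProto, helper
import onnxruntime as ort

M = K = N = 2048  # illustrative size, not the benchmark's exact setup

# Build a one-node ONNX graph: C = A @ B
A_info = helper.make_tensor_value_info("A", TensorProto.FLOAT, [M, K])
B_info = helper.make_tensor_value_info("B", TensorProto.FLOAT, [K, N])
C_info = helper.make_tensor_value_info("C", TensorProto.FLOAT, [M, N])
graph = helper.make_graph([helper.make_node("MatMul", ["A", "B"], ["C"])],
                          "matmul_bench", [A_info, B_info], [C_info])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])

def gops_per_sec(providers, runs=10):
    sess = ort.InferenceSession(model.SerializeToString(), providers=providers)
    a = np.random.rand(M, K).astype(np.float32)
    b = np.random.rand(K, N).astype(np.float32)
    sess.run(None, {"A": a, "B": b})          # warm-up / graph compilation
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {"A": a, "B": b})
    seconds = (time.perf_counter() - start) / runs
    return 2 * M * K * N / seconds / 1e9      # one multiply + one add per term

print("CPU:", gops_per_sec(["CPUExecutionProvider"]), "GOPS")
# Note: the HTP backend is happiest with quantized or fp16 models, so a
# float32 graph like this one may partly fall back to the CPU provider.
print("NPU:", gops_per_sec([("QNNExecutionProvider",
                             {"backend_path": "QnnHtp.dll"}),
                            "CPUExecutionProvider"]), "GOPS")
```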

I’m used to not getting great utilization out of AI acceleration hardware; reaching 10% of the theoretical maximum throughput is often considered a good result, but I’m disappointed by the 1.3% we’re seeing here. It’s hard to tell where the problem lies, but I’m hoping it’s somewhere in the software stack, since I’ve seen much better performance from similar chips on Android. It could even be an issue with how I’m calling the code, though I’ve tried to follow the documentation as closely as possible. My guess is that the ONNX Runtime, drivers, and on-chip code haven’t had enough work done on them yet, which is good news, because those should all be fixable with software updates. I also miss the ability to compile and run my own operations on the DSP, since that would provide an escape hatch around these issues, but that’s apparently not allowed on Windows.
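For reference, those utilization figures are just the measured throughput divided by the quoted numbers from earlier in the post:

```python
measured_npu = 573e9    # ops/s measured on the Snapdragon NPU
peak_npu     = 45e12    # advertised NPU peak (45 TOPS)
measured_gpu = 2.16e12  # ops/s from the same benchmark on the RTX 4080

print(f"NPU utilization: {measured_npu / peak_npu:.1%}")              # about 1.3%
print(f"RTX 4080 vs NPU: {measured_gpu / measured_npu:.1f}x faster")  # about 3.8x
```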

Hopefully we will get some help solving whatever issues are preventing us from achieving the performance we’d expect. If you have ideas, please feel free to fork the code and give it a try yourself; I’d love to hear from you. I’m still hopeful that the hardware can deliver, but right now it’s very disappointing.

One response

  1. How much memory bandwidth have you got?

    You can do (probably something more or less like) 45 TOPS on data immediately available to the NPU. If your matrices aren’t in whatever cache the NPU has access to, you’re going to have to go out to DDR to get them.

    “45 TOPS” is a bullshit marketing number for a machine with one or two channels to RAM. You’re not going to be able to keep that NPU fed for most problems big enough to need it in the first place.
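To put rough numbers behind that bandwidth argument, here’s a back-of-envelope roofline sketch. The memory bandwidth and element size are assumptions picked purely for illustration, not measurements of the Surface Pro:

```python
# Roofline-style ceiling for a square N x N matmul, assuming the NPU streams
# both inputs and the output through DRAM exactly once (the best case).
# ASSUMED figures for illustration only:
peak_ops  = 45e12   # advertised NPU peak, ops/s
bandwidth = 100e9   # assumed effective DRAM bandwidth, bytes/s
elem_size = 4       # float32 elements; int8 would quadruple the intensity

for n in (512, 1024, 2048, 4096):
    ops       = 2 * n ** 3                  # one multiply + one add per term
    traffic   = 3 * n ** 2 * elem_size      # read A and B once, write C once
    intensity = ops / traffic               # ops per byte of DRAM traffic
    ceiling   = min(peak_ops, intensity * bandwidth)
    print(f"N={n:5d}: ceiling {ceiling / 1e12:5.1f} TOPS")
```

Even under that ideal single-pass assumption, float32 matrices need to be around 4096 x 4096 before the 45 TOPS figure is reachable at this bandwidth, and any extra DRAM traffic from imperfect tiling drags the ceiling down further.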
