All AI Benchmarks are Wrong, but some are Useful


When I was new to Google Brain, I got involved in a long and heated discussion about evaluation numbers for some models we were using. As we walked out of the room, the most senior researcher told me, “Look, the only metrics that matter are app store ratings. Everything else is just an approximation.”

The Word Lens team, who were acquired around the same time Jetpac was, soon gave me a vivid example of this. Google Translate already had a visual translation feature for signs and menus, and the evaluation scores on test datasets were higher than Word Lens’s model achieved. What surprised the Google product managers was that consumers still preferred the Word Lens app over Google Translate for this use case, despite the lower metrics. It turned out the key difference was latency. With Google Translate you snapped a picture, it was uploaded to the server, and a result was returned in a second or two. Word Lens ran at multiple frames per second. This meant that users got instant on-screen feedback about the results, and would jiggle the camera angle until it locked on to a good translation. Google Translate had a higher chance of providing the right translation for a single still image, but because Word Lens was interactive, users ended up with better results overall. Smart product design allowed them to beat Google’s best models, despite apparently falling short on metrics.

I was thinking of this again today as I prepared a data sheet for a potential customer. They wanted to know the BLEU score for our on-device translation solutions. Calculating this caused me almost physical pain because while it remains the most common metric for evaluating machine translation, it doesn’t correlate well with human evaluations of the quality of the results. BLEU is a purely textual measure, and it compares the actual result of the translation word by word against one or more expected translations prepared as ground truth by fluent speakers of the language. There are a lot of problems with this approach. For example, think of a simple French phrase like “Le lac est très beau en automne”. One translation could be “The lake is very beautiful in the autumn”. Another could be “The lake is very pretty in the fall”. “In the fall, the lake’s very pretty” would also be a fair translation that captures the meaning, and might read better in some contexts. You can probably imagine many more variations, and as the sentences get more complex, the possibilities increase rapidly. Unless the ground truth in the dataset includes all of them, any results that are textually different from the listed sentences will be given a low accuracy score, even if they convey the meaning effectively. This means that the overall BLEU score doesn’t give you much information about how good a model is, and using it to compare different models against each other isn’t a reliable way to tell which one users will be happy with.
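To make the problem concrete, here’s a toy sentence-level BLEU calculation in JavaScript. It’s a minimal sketch rather than a stand-in for standard tooling like sacreBLEU (which adds smoothing, careful tokenization, and corpus-level aggregation), but it shows how a perfectly reasonable paraphrase collapses towards zero when the dataset only contains a single reference translation.

```js
// Toy sentence-level BLEU: geometric mean of clipped n-gram precisions,
// multiplied by a brevity penalty. Illustrative only.
function ngramCounts(tokens, n) {
  const counts = new Map();
  for (let i = 0; i + n <= tokens.length; i++) {
    const gram = tokens.slice(i, i + n).join(" ");
    counts.set(gram, (counts.get(gram) || 0) + 1);
  }
  return counts;
}

// Modified precision: each candidate n-gram is credited at most as many
// times as it appears in the most generous single reference.
function modifiedPrecision(candidate, references, n) {
  const candCounts = ngramCounts(candidate, n);
  const refCounts = references.map((ref) => ngramCounts(ref, n));
  let matched = 0;
  let total = 0;
  for (const [gram, count] of candCounts) {
    const maxInRefs = Math.max(...refCounts.map((rc) => rc.get(gram) || 0));
    matched += Math.min(count, maxInRefs);
    total += count;
  }
  return total === 0 ? 0 : matched / total;
}

function bleu(candidateText, referenceTexts, maxN = 4) {
  const candidate = candidateText.toLowerCase().split(/\s+/);
  const references = referenceTexts.map((r) => r.toLowerCase().split(/\s+/));
  let logPrecisionSum = 0;
  for (let n = 1; n <= maxN; n++) {
    const p = modifiedPrecision(candidate, references, n);
    if (p === 0) return 0; // one missing n-gram order zeroes the whole score
    logPrecisionSum += Math.log(p) / maxN;
  }
  // Brevity penalty: punish candidates shorter than the closest reference.
  const refLength = references
    .map((r) => r.length)
    .reduce((best, len) =>
      Math.abs(len - candidate.length) < Math.abs(best - candidate.length) ? len : best
    );
  const brevityPenalty =
    candidate.length >= refLength ? 1 : Math.exp(1 - refLength / candidate.length);
  return brevityPenalty * Math.exp(logPrecisionSum);
}

const references = ["The lake is very beautiful in the autumn"];
console.log(bleu("The lake is very beautiful in the autumn", references)); // 1.0
console.log(bleu("The lake is very pretty in the fall", references));      // roughly 0.4
console.log(bleu("In the fall, the lake's very pretty", references));      // 0 (no trigram overlap)
```

All three candidates convey the same meaning, but only the one that matches the reference word for word gets full credit.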

So why does BLEU still dominate the machine translation field? Model creators need a number that’s straightforward to calculate to optimize towards. If you’re running experiments comparing changes to datasets, optimization techniques, and architectures, you need to be able to quickly tell which seem to be improving the results, and it’s impractical to evaluate all of these by A/B testing them with actual users. The only way to iterate quickly and at scale is with metrics you can run in an automated way. While BLEU isn’t great for comparing different models, relative changes do at least tend to correlate with improvements or declines for a single model. If an experiment shows that the BLEU score has dropped significantly, there’s a good chance that users will be less happy with that version of the model than with the original. That makes it a helpful directional signal.

This is why people who are actively working on training models are obsessed with benchmarks and metrics. They sound boring to outsiders, and they’re inherently poor approximations of the properties you actually need in your product, but without them it’s impossible to make progress. As George Box said – “All models are wrong, but some are useful”. You can see this clearly with modern LLMs. In general I’m pretty skeptical about the advantages OpenAI and Anthropic gain from their scale, but they have millions of people using their products every day and have the data to understand which metrics correlate with customer satisfaction. There are lots of external efforts to benchmark LLMs, but it’s not clear what those benchmarks tell us about how well the models actually work, or which ones are best.

This is important because a lot of big decisions get made based on benchmarks. Research papers need to show they beat the state of the art on commonly accepted metrics to be published. Companies get investment funding from their benchmark results. The output and content of the LLMs we use in our daily lives are driven by which metrics are used during their training process. What the numbers capture and what they miss has a direct and growing impact on our world, as LLMs are adopted in more and more applications.

That’s a big reason why Natalie and I started the AI Benchmark Club meetup in SF. There are a lot of AI events in the Bay Area, but if you’re actually training models from scratch, it can be hard to find other people facing similar challenges amongst all the business, marketing, and sales discussions that often dominate. The nice thing about benchmarks is that they sound unimportant to everyone except those of us who rely on them to build new models. This works as a great filter to ensure we have a lot of actual researchers and engineers, with talks and discussions on the practical challenges of our job. As Picasso said – “When art critics get together they talk about content, style, trend and meaning, but when painters get together they talk about where can you get the best turpentine”. I think benchmarks are turpentine for ML researchers, and if you agree then come join us at our next meetup!

Why does a Local AI Voice Agent Running on a Super-Cheap SoC Matter?

Most recent news about AI seems to involve staggering amounts of money. OpenAI and Nvidia sign a $100b data center contract. Meta offers researchers $100m salaries. VCs invested almost $200b in AI startups in the first half of 2025.

Frankly, I think we’re in a massive bubble that dwarfs the dot-com boom, and we’ll look back on these as crazy decisions. One of the reasons I believe this is because I’ve seen how much is possible running AI locally, with no internet connection, on low-cost hardware. The video above is one of my favourite recent examples. It comes from a commercial contract we received to help add a voice assistant to appliances. The idea is that when a consumer runs into a problem with their dishwasher, they can press a help button and talk to get answers to common questions.

What I’m most proud of here is that this is cutting-edge AI actually helping out with a common issue that many of us run into in our daily lives. This isn’t speculative, it’s real and running, and it doesn’t pose a lot of the ethical dilemmas other AI applications face. Here’s why I think this matters:

  • The consumer doesn’t have to do anything beyond pressing a button to use it. There’s no phone app to download, no new account to create, and no WiFi to set up. The solution works as soon as they plug the appliance in. This is important because less than half of all smart appliances ever get connected to the internet.
  • It’s using Moonshine and an LLM to do a much better job of understanding natural speech than traditional voice assistants. The questions I asked in the demo were off-the-cuff, I deliberately used vague and informal language, and it still understood me.
  • It addresses a genuine problem that manufacturers are already paying money to solve. They are currently spending a lot on call centers and truck rolls to help consumers. This solution has the potential to reduce those costs, and increase consumer satisfaction, by offering quick answers in an easy way.
  • Running locally means that audio recordings never have to go to the cloud, increasing privacy.
  • Local also means fast. The response times in the video are real, this is running on actual hardware.
  • This doesn’t require a GPU or expensive hardware. It runs on a Synaptics chip that has just launched, and will be available in bulk for low-single-digit dollars. This means it can be added to mass-market equipment like appliances, and even toys. Since it’s also able to run all the regular appliance control functions, it can replace similarly-priced existing SoCs in those products without raising the price.
  • More functionality, like voice-driven controls, can easily be added incrementally through software changes. This can be a gateway to much richer voice interactions, all running locally and privately.

All these properties give local AI a much better chance to change our daily lives in the long term, compared to a chatbot that you access through a text box on a web page. AI belongs out in the world, not in a data center! If you agree, I’d love to hear from you.

How to Try Chrome’s Hidden AI Model


There’s an LLM hiding in Chrome. Buried in the browser’s basement, behind a door with a “Beware of Leopard” sign.

But I’ll show you how to find it. In a couple minutes, you’ll have a private, free chatbot running on your machine.

Instructions
We’re going to enable some developer flags in desktop Chrome so you can get full access to the AI model. We have to do this because the functionality is only being slowly rolled out by Google, and by turning on these developer options we can skip to the front of the line. There’s also a screencast version of these instructions if you’d like to follow along on YouTube.

You’ll need access to Chrome’s internal debugging pages to try out the model, so enter chrome://chrome-urls/ into the URL bar, scroll down, and click on “Enable internal debugging pages”.

Next type or copy and paste chrome://flags/#prompt-api-for-gemini-nano-multimodal-input into the URL bar.

Click on the “Default” drop-down menu, choose “Enabled”, and then relaunch Chrome.

If you’re familiar with Chrome’s DevTools console, you can copy and paste “await LanguageModel.availability();” to trigger the next step, but I’ve also created this page to make it easier for non-developers to do it by just clicking a button.
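If you’d like to see what that snippet is doing, here’s a slightly expanded version for the DevTools console (Cmd-Option-J on a Mac, Ctrl-Shift-J on Windows, then paste into the Console tab). The status strings and the monitor callback follow the public Prompt API explainer, and the exact values can vary between Chrome versions, so treat this as an illustrative sketch rather than a stable reference.

```js
// Ask Chrome whether the on-device model is usable. Recent builds return
// strings like "available", "downloadable", "downloading", or "unavailable";
// older builds use different names, so just log whatever comes back.
const status = await LanguageModel.availability();
console.log("Gemini Nano status:", status);

// Creating a session is what actually triggers the model download if it
// isn't on disk yet. The monitor callback is optional progress logging.
if (status !== "unavailable") {
  await LanguageModel.create({
    monitor(m) {
      m.addEventListener("downloadprogress", (e) => {
        console.log("Download progress:", e.loaded);
      });
    },
  });
}
```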

Next, type or copy and paste the URL “chrome://on-device-internals/”. On that page, click on “Load Default” and you should see a message confirming that the model has been downloaded.

Now you have access to the Gemini Nano LLM running locally in Chrome! You can enter text in the input box, and it will respond just like a cloud-based chatbot.
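If you’re a developer and would rather drive the same model from code than from the debug page, the Prompt API also works from the DevTools console. This is a minimal sketch based on the public explainer at github.com/webmachinelearning/prompt-api; method names and streaming behaviour may shift between Chrome versions.

```js
// Create a session against the local Gemini Nano model.
const session = await LanguageModel.create();

// One-shot prompt: resolves once the full response has been generated.
const answer = await session.prompt("Explain in two sentences why on-device AI matters.");
console.log(answer);

// Streaming prompt: chunks arrive as they're generated, which is how chat
// UIs show text appearing progressively.
for await (const chunk of session.promptStreaming("Write a haiku about local AI.")) {
  console.log(chunk);
}

// Sessions hold conversation state; free the resources when you're done.
session.destroy();
```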

To verify this is truly happening locally, you can turn off the WiFi and enter new prompts. You can even use it to transcribe audio, or analyze images.

Why does this matter?

It’s free: These models work with the PC you have and require no subscriptions. Your usage is only limited by the speed of the model.

It’s 100% privacy-safe: None of your questions or answers leave your PC. Go ahead, turn off your WiFi and start prompting – everything works perfectly.

It works offline: The first time I used a local model to help with a coding task while flying on an airplane without WiFi, it felt like magic. There’s something crazy about the amount of knowledge these models condense into a handful of gigabytes.

It’s educational: This is the main reason you should bother with local LLMs right now. Just trying out this model demystifies the field, and should be an antidote to the constant hype the AI industry fosters. By getting your hands just slightly dirty, you’ll start to understand the real-world trajectory of these things.

It’s the future: Local models are only getting better and faster, while cloud-based chatbots like Claude and ChatGPT plateau. The market is inevitably going to shift to free models like this that are integrated into platforms and operating systems.