Online Gesture Sensor Demo using WASM

If you’ve heard me on any podcasts recently, you might remember I’ve been talking about a Gesture Sensor as the follow-up to our first Person Sensor module. One frustrating aspect of building hardware solutions is that it’s very tough to share prototypes with people, since you usually have to physically send them a device. To work around that problem, we’ve been experimenting with compiling the same C++ code we use on embedded systems to WASM, a web-friendly intermediate representation that runs in all modern browsers. By hooking up the webcam as an input in place of the embedded camera module, and displaying the output dynamically on a web page, we can provide a decent approximation of how the final device will work. There are obviously some differences: a webcam produces higher-quality images than an embedded camera module, and the latency will vary, but it’s been a great tool for prototyping. I also hope it will help spark makers’ and manufacturers’ imaginations, so we’ve released it publicly at gesture.usefulsensors.com.
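For anyone curious about the plumbing, here’s a minimal sketch of how this kind of setup can be wired together. To be clear, none of this is the actual demo code: the detect_gesture() entry point, the stubbed-out model call, and the grayscale input format are all assumptions for illustration.

```cpp
// Hypothetical WASM entry point: the page's JavaScript copies a grayscale
// webcam frame into WASM memory and calls detect_gesture(), so the same
// C++ inference code can run unmodified in the browser and on the device.
#include <cstdint>

#ifdef __EMSCRIPTEN__
#include <emscripten/emscripten.h>
#define WASM_EXPORT extern "C" EMSCRIPTEN_KEEPALIVE
#else
#define WASM_EXPORT extern "C"
#endif

// Stand-in for the shared embedded model; the real project would link in
// the same inference code that ships on the hardware.
static int run_gesture_model(const uint8_t* gray, int width, int height) {
  (void)gray; (void)width; (void)height;
  return 0;  // 0 = no gesture detected.
}

// Returns a gesture id (e.g. 0 = none, 1 = palm-forward, 2 = finger-to-lips)
// that the page can use to update its UI each frame.
WASM_EXPORT int detect_gesture(const uint8_t* gray, int width, int height) {
  return run_gesture_model(gray, width, height);
}
```

The browser side then just draws each video frame to a canvas, copies the pixels into the module’s heap, and calls the export; building with something like em++ demo.cc -o demo.js makes the function reachable from JavaScript, since EMSCRIPTEN_KEEPALIVE stops it being stripped out.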

On that page you’ll find a quick tutorial, and then you’ll have the opportunity to practice the four gestures that are supported initially. This is not the final version of the models or the interface logic; you’ll see false positives that would be problematic in production, for example, but it should give you an idea of what we’re building. My goal is to replace common uses of a TV remote control with simple, intuitive gestures like palm-forward for pause, or finger to the lips for mute. I’d love to hear from you if you know of manufacturers who would like to integrate something like this, and we hope to have a hardware version available soon so you can try it in your own projects. If you are at CES this year, come visit me at LVCC IoT Pavilion Booth #10729, where my colleagues and I will be showing off some of our devices together with Teksun.

Short Links

Years ago I used to write regular “Five Short Links” posts, but I gave up as my Twitter account became a better place to share updates, notes, and things I found interesting from around the internet. Now that Twitter is Nazi-positive, I’m giving up on it as a platform, so I’m going to try going back to occasional summary posts here instead.

Person Sensor back in stock on SparkFun. Sorry for all the delays in getting our new sensors to everyone who wanted them, but we now have a new batch available at SparkFun, and we hope to stay ahead of demand in the future. I’ve also been expanding the Hackster project guides with new examples like face-following robot cars and auto-pausing TV remote controls.
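To give a flavor of what those guides cover, here’s a rough sketch of the core steering logic behind a face-following robot car. The set_motor_speeds() function is a hypothetical stand-in for whatever motor driver your robot uses, and I’m assuming the 0-to-255 bounding box coordinates the Person Sensor reports.

```cpp
// Sketch of face-following steering: turn toward the horizontal center of
// the detected face box. Coordinates are assumed to run 0-255 (the Person
// Sensor reports box edges as single bytes); set_motor_speeds() is a
// hypothetical stand-in for your robot's actual motor driver.
#include <cstdint>
#include <cstdio>

// Hypothetical motor interface: values in [-1.0, 1.0] per wheel.
static void set_motor_speeds(float left, float right) {
  std::printf("motors: left=%.2f right=%.2f\n", left, right);
}

static void follow_face(uint8_t box_left, uint8_t box_right) {
  // How far the face center is from the frame center, in [-1.0, 1.0].
  const float center = (box_left + box_right) / 2.0f;
  const float error = (center - 127.5f) / 127.5f;
  const float base = 0.5f;  // Cruising speed.
  const float gain = 0.4f;  // Proportional steering gain.
  set_motor_speeds(base + gain * error, base - gain * error);
}

int main() {
  follow_face(40, 120);   // Face on the left: robot steers left.
  follow_face(140, 220);  // Face on the right: robot steers right.
}
```

The auto-pausing remote uses the same inputs in an even simpler way: if no face has been reported for a few seconds, send a pause keypress.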

Blecon. It can be a little hard to explain what Blecon does, but my best attempt is that it allows BLE sensors to connect to the cloud using people’s phones as relays, instead of requiring a fixed gateway to be installed. The idea is that in buildings where staff regularly walk past rooms with sensors installed, an app on their phones can automatically pick up and relay the recorded data. This becomes especially interesting in places like hotels, where management could be alerted to plumbing problems early, without having to invest in extra infrastructure. I like this because it gets us closer to the idea of “peel and stick” sensors, which I think will be crucial to widespread deployment.

Peekaboo. I’ve long been a fan of CMU’s work on IoT security and privacy labels, so it was great to see this exploration of a system that gives users more control over their own data.

32-bit RISC-V MCU for $0.10. It’s not as cheap as the Padauk three-cent MCU, but the fact that it’s 32-bit, with respectable amounts of flash, SRAM, and I/O, makes it a very interesting part. I bet it would be capable of running many of the Hackster projects, for example, and since it supports I2C it should be able to talk to a Person Sensor, as sketched below. With processors this low-cost, we’ll see a lot more hardware being replaced with software.
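To make that concrete, here’s what talking to a Person Sensor over I2C could look like. The 0x62 peripheral address and the results layout follow the Person Sensor developer guide as I remember it (worth double-checking against the published docs), and read_i2c_bytes() is a stub standing in for whatever I2C driver the MCU’s SDK provides.

```cpp
// Sketch of polling the Person Sensor (I2C peripheral address 0x62) and
// printing any detected faces. The struct layout below is my recollection
// of the published developer guide; verify it before relying on it.
#include <cstdint>
#include <cstdio>

constexpr uint8_t kPersonSensorAddress = 0x62;
constexpr int kMaxFaces = 4;

#pragma pack(push, 1)
struct PersonSensorFace {
  uint8_t box_confidence;  // 0-255 detection confidence.
  uint8_t box_left;        // Box edges, scaled to 0-255.
  uint8_t box_top;
  uint8_t box_right;
  uint8_t box_bottom;
  int8_t id_confidence;    // Recognition confidence, if enabled.
  int8_t id;               // Numerical ID for a recognized face.
  uint8_t is_facing;       // Non-zero when looking at the camera.
};

struct PersonSensorResults {
  uint8_t reserved[2];
  uint16_t data_size;
  uint8_t num_faces;
  PersonSensorFace faces[kMaxFaces];
  uint16_t checksum;
};
#pragma pack(pop)

// Hypothetical HAL hook: replace with your platform's I2C read. This stub
// always fails so the sketch compiles standalone.
static bool read_i2c_bytes(uint8_t address, uint8_t* buffer, size_t length) {
  (void)address; (void)buffer; (void)length;
  return false;
}

static void poll_person_sensor() {
  PersonSensorResults results;
  if (!read_i2c_bytes(kPersonSensorAddress,
                      reinterpret_cast<uint8_t*>(&results),
                      sizeof(results))) {
    return;  // Sensor not responding; try again next cycle.
  }
  for (int i = 0; i < results.num_faces && i < kMaxFaces; ++i) {
    const PersonSensorFace& face = results.faces[i];
    std::printf("face %d: conf=%d box=(%d,%d,%d,%d)\n", i,
                face.box_confidence, face.box_left, face.box_top,
                face.box_right, face.box_bottom);
  }
}
```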

Hand Pose using TensorFlow JS. I love this online demo from MediaPipe, showing how well it’s now possible to track hands with deep learning approaches. Give the page permission to access your camera and then hold your hands up; you should see rather accurate and detailed hand tracking!