Portable AI

AI, LLMs and generative models are now known and used by a majority of the population, technical or not. But they’re mostly used from your desktop or your mobile phone. I’m now seeing a growing number of portable AI offerings: new form factors that use or fit within the AI ecosystem.

Commercial offerings

One of the first I got to know is the AI Pin from Humane. I stumbled upon it through this TED talk: Imran Chaudhri: The disappearing computer — and a world where you can take AI everywhere | TED Talk, and immediately went to the web site and signed up to get notified of any release. Although it includes AI in its name, that’s not the focal point for me. The AI Pin is a wearable device that offers new, less obtrusive ways to interact with your digital world. You don’t need to take a device out of your pocket and stare at a screen. You can tap, talk and gesture, and it replies via audio or can project a basic UI on your hand. It’s one of the few devices that ships now; however, I’m a bit disappointed by the price point, the fact that it’s US-only, and that it requires a pricey subscription.

Another alternative I knew about (because I already followed what Rewind was doing; more on this in a future post) is the Rewind Pendant. This is not an interactive AI assistant but an always-on sensor (a microphone) used to populate your personal knowledge base, as a complement to the other sources Rewind already uses. You then use AI on your phone or desktop to interact with that knowledge base. It’s not available yet though, so let’s wait and re-evaluate when it ships.

Then there is the rabbit r1, which was all the rage at CES 2024. Quickly browsing through the web site, I’m not sure I get the concept. Most of what it does could be done with an app on your phone, and the form factor is very similar to a phone’s, so it still requires you to take it out of your pocket, tap a button and look at a screen. Watching their keynote provides some more information. They’re touting their Large Action Model, which can learn a UI and understand how to take action just like a human would, as the big differentiator: it bridges the gap between understanding what you say (what LLMs do) and actually making things happen. At 199 USD without any subscription, their pricing is attractive. It’s not shipping yet but should be coming out really soon.
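To make that gap concrete: today, getting software to act in a UI means a developer scripting every step by hand. Here is a toy sketch of such a hand-scripted action using browser automation in Python (the playwright package; the target page and link name are arbitrary placeholders, not anything rabbit has shown). The pitch for the Large Action Model is that it would derive steps like these from a plain-language request instead of having them written in advance.

```python
# A hand-scripted UI action via browser automation (illustrative only).
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # A request like "open the site and follow the info link" decomposes
    # into concrete steps: navigate, locate a control, act on it.
    page.goto("https://example.com")
    page.get_by_role("link", name="More information").click()
    print(page.title())  # confirm the click navigated somewhere
    browser.close()
```

Every step here is hard-coded by a developer; the r1’s claim is that its model learns those steps from the UI itself.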

In a different category, following in the footsteps of Google Glass or Meta’s Ray-Ban smart glasses, is the Brilliant Frame. What’s great about that project is that it’s very open and is really a platform to tinker with. But that also means that at this stage (it’s still in pre-order anyway), there are no really useful, end-user-oriented applications. I was tempted to pre-order one and play around with it but preferred to wait. I’ll definitely keep an eye on what they’re doing and what use cases developers come up with.

DIY approach

Then I thought it could be cool to try to build such a system myself. And if I had this idea, somebody else surely already had it too. Searching for an open-source alternative to the above-mentioned products quickly led to an existing project: ADeus.

I looked at their repo, started building hardware to test what they’re doing, took a look at the Discord channel and found quite a few other projects (open or not) in this space: Owl, Friend, O1, Tab, and I’m sure I’m missing quite a few.
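The common core of these projects is simple enough to sketch: continuously capture audio, transcribe it, and append it to a searchable transcript that an LLM can later query. Here is a minimal, illustrative Python loop along those lines, assuming the sounddevice and openai-whisper packages; the chunk length, model size and plain-text storage are my own simplifications, not how any of these projects actually work.

```python
# Minimal capture -> transcribe -> store loop (illustrative only).
# Requires: pip install sounddevice openai-whisper
import datetime

import numpy as np
import sounddevice as sd
import whisper

SAMPLE_RATE = 16_000   # Whisper models expect 16 kHz mono audio
CHUNK_SECONDS = 30     # record and transcribe in 30-second chunks

model = whisper.load_model("base")  # small model, runs on CPU


def capture_chunk() -> np.ndarray:
    """Record one chunk from the default microphone as 1-D float32,
    the array format whisper's transcribe() accepts directly."""
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()  # block until the recording finishes
    return audio.flatten()


def main() -> None:
    # Append timestamped transcripts to a plain-text file, standing in
    # for the personal knowledge base an LLM would later query.
    with open("transcripts.txt", "a", encoding="utf-8") as log:
        while True:
            result = model.transcribe(capture_chunk(), fp16=False)
            text = result["text"].strip()
            if text:
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                log.write(f"{stamp}\t{text}\n")
                log.flush()


if __name__ == "__main__":
    main()
```

The interesting engineering is everything this loop skips: battery-friendly hardware, speaker separation, privacy, and the retrieval layer on top, which is exactly the kind of challenge the video below goes into.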

In this very interesting video (Owl: Open Source Wearable AI - OpenCV Live 130), Bart Trzynadlowski, one of the co-founders of Owl, mentions quite a few of them and talks about the opportunities and the challenges for those projects.

Conclusion

I think there is indeed a lot of potential to further develop such projects and find real-life use cases that can have a profound impact on our everyday lives. I’ll definitely look at those projects in more detail, hopefully try them myself and report here, and I’ll most probably have a go at developing my own project. Stay tuned…