- Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
- Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
- AOSP is a logical choice for mobile hardware as it provides essential functionality without the need for Google Play.
I’m confused by this revelation. What did everybody think the box was?
Magic
In all reality, it is a ChatGPTitty "fine"tune on some datasets they cobbled together for VQA and Android app UI driving. They did the initial test finetune, then apparently the CEO or whatever was drooling over it and said “lEt’S mAkE aN iOt DeViCe GuYs!!1!” after their paltry attempt to racketeer an NFT metaverse game.
Neither this nor the Humane does any AI computation on device. It would be a stretch to say there’s even a possibility the speech recognition is client-side: they are always-connected devices that are even more useless without Internet than they already are with it.
Make no mistake: these money-hungry fucks are only selling you food cans labelled as magic beans. You have been warned, and if you expected anything more from them, you only have your own dumbass to blame for trusting Silicon Valley.
If the Humane could recognise speech on-device, and didn’t require its own data plan, I’d be reasonably interested, since I don’t really like using my phone for structuring my day.
I’d like a wearable that I can brain dump to, quickly check things without needing to unlock my phone, and keep on top of my schedule. Sadly, it looks like I’ll need to go the DIY route with an esp32 board and an e-ink display, and drop any kind of STT + TTS plans.
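For what it’s worth, fully offline STT is already feasible on modest hardware, just not on an esp32. A minimal sketch, assuming the open-source openai-whisper package and a hypothetical local recording; a Pi-class board can manage the tiny model:

```python
# Minimal offline speech-to-text sketch, assuming the openai-whisper
# package (pip install openai-whisper) and ffmpeg on PATH. Once the
# model weights are cached locally, nothing here touches the network.
import whisper

model = whisper.load_model("tiny")           # ~39M params, runs on CPU
result = model.transcribe("brain_dump.wav")  # hypothetical local recording
print(result["text"])
```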
I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that’s worse than a $150 phone in most every way.
The Rabbit OS is running server side.
I would expect bespoke software and an OS in a $200 device to be way less impressive than what a multi-billion-dollar company develops.
Without thinking too hard about it, I would have expected some more custom hardware, with some on-device AI acceleration happening. For someone to go and purchase the device, it should have been more than just an Android app.
The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.
No small startup is going to revolutionize this space unless some kind of new physics is discovered.
I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general-purpose chip by several orders of magnitude, at least for inference. And I don’t even think it would be difficult to convert a NN design into a chip, or that it would need to be made on a bleeding-edge node to get that much more performance. The trade-off is that it could only run a single NN (or any NN that single one can be adjusted to behave identically to; e.g., to remove a node you could just zero its weights so that it never fires).
So I’d say it’s more accurate to put it as “the easiest/cheapest way to do an AI device is to use a standard SoC”, but the best way would be to design a custom chip for it.
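To make the node-removal trick concrete, here’s a minimal NumPy sketch with hypothetical weights (nothing from any real chip): zeroing a hidden unit’s incoming and outgoing weights makes a fixed-topology network behave exactly like the smaller network with that unit deleted, which is why a hard-wired inference chip isn’t stuck with exactly one architecture:

```python
# Sketch: "removing" a node from a fixed-topology MLP by zeroing weights.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 3 inputs -> 4 hidden units
b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4))   # 4 hidden units -> 2 outputs
b2 = rng.normal(size=2)

# "Remove" hidden unit 2: zero its incoming weights and bias so it always
# outputs relu(0) = 0, and zero its outgoing weights so that 0 is ignored.
W1_cut, b1_cut, W2_cut = W1.copy(), b1.copy(), W2.copy()
W1_cut[2, :] = 0.0
b1_cut[2] = 0.0
W2_cut[:, 2] = 0.0

# Reference: the genuinely smaller network with unit 2 deleted outright.
keep = [0, 1, 3]
W1_small, b1_small, W2_small = W1[keep, :], b1[keep], W2[:, keep]

x = rng.normal(size=3)
assert np.allclose(
    mlp(x, W1_cut, b1_cut, W2_cut, b2),
    mlp(x, W1_small, b1_small, W2_small, b2),
)  # identical outputs despite the fixed topology
```

The same argument extends unit by unit, so a baked-in topology is really an upper bound on the family of smaller networks the chip can realize.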
Custom hardware and software I guess?
Running the Spotify app and dozens of others on a custom software stack?
Most of the processing is done server side though.
Same. As soon as I saw the list of apps they support, it was clear to me that they’re running Android. That’s the only way to provide that feature.
It could have been a local AI with some special AI chip not found in ordinary Android phones, but since it all runs in the cloud, privacy is a real problem.