It’s becoming increasingly apparent that one of the reasons why tech companies are so enthusiastic about shoving AI into every product and service is that they fundamentally do not understand…
I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.
I don’t see how the possibility that it’s connected to some profile-building software system is a reason not to care whether a language model is local-only. The way things are worded here makes it sound like this is just an intrinsic part of how LLMs work, but it just isn’t. The model still just does text prediction; any “memory” features are bolted on.
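To make that distinction concrete, here is a minimal, entirely hypothetical sketch (the `predict` stub and `MemoryLayer` class are invented for illustration, not any real product’s API): the model itself is a stateless text predictor, and any profile building lives in a separate layer that could be removed without touching the model.

```python
# Hypothetical sketch: the model is a pure function of its input,
# while "memory" is a bolted-on layer that stores and injects context.

def predict(prompt: str) -> str:
    """Stand-in for a local LLM: stateless, keeps nothing between calls."""
    return f"(continuation of: {prompt!r})"

class MemoryLayer:
    """Bolted-on profile building, entirely separate from the model."""

    def __init__(self) -> None:
        # This store, not the model, is where the PII risk lives.
        self.history: list[str] = []

    def ask(self, prompt: str) -> str:
        context = " ".join(self.history)          # inject accumulated profile
        reply = predict(context + " " + prompt)   # model still just predicts text
        self.history.append(prompt)               # aggregation happens out here
        return reply
```

Deleting `MemoryLayer` leaves `predict` working exactly as before, which is the point: the profile building is a product decision layered on top, not something inherent to the model.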
Because these are often sold with profile-building features, for example Recall. Recall is sold as “local only” yet ships with profile-building features, so it is still centralized PII and a single point of failure. As the quote says, and as I said.
Even with Recall, a hypothetical non-local equivalent would be significantly worse. Whether Microsoft actually has your data or not obviously matters. Most conceivable software that uses local AI wouldn’t need any kind of profile building anyway; take Firefox’s translation feature, for instance.
The thing that’s frustrating to me here is the lack of acknowledgement that the main privacy problem with AI services is sending all queries to some company’s server where they can do whatever they want with them.
the point is that making it local-only is not significantly better. it does not solve a major problem.
So you don’t think collection of user data is a meaningful privacy problem here? How does that work?
it is, and that is still happening.
Software that is designed not to send your data over the internet doesn’t collect your data. That’s what local-only means. If it does send your data over the internet, then it isn’t local-only. How is it still happening?
it does: it locally aggregates data about what you do on your computer across days and weeks.