It’s becoming increasingly apparent that one of the reasons why tech companies are so enthusiastic about shoving AI into every product and service is that they fundamentally do not understand…
The use of local AI does not imply doing that, especially not the centralizing part. Even if some software does collect and store info locally (which is not inherent to the technology, and anything with autosave already qualifies here), that is nowhere near as bad, privacy-wise, as filtering everything through a remote server, especially if there is some guarantee the software won't just randomly start exfiltrating it, such as being open source.
I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.
I don't see how the possibility that it's connected to some software system for profile building is a reason not to care whether a language model is local-only. The way things are worded here makes it sound like this is an intrinsic part of how LLMs work, but it isn't. The model itself just does text prediction; any "memory" features are bolted on.
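To make that concrete, here is a minimal sketch; run_model is a hypothetical stand-in for whatever local inference call you use (a llama.cpp binding, say). The model call itself is stateless, and any "memory" is ordinary application state the app chooses to store and paste back into the prompt.

    # Minimal sketch: the model call is stateless; "memory" is a separate,
    # application-level store that gets pasted back into the prompt.

    def run_model(prompt: str) -> str:
        """Hypothetical stand-in for any local inference call.
        It sees only the text it is handed and keeps nothing between calls."""
        return f"[model output for {len(prompt)} chars of prompt]"

    memory: list[str] = []  # the "bolted-on" part: plain application state

    def chat(user_msg: str) -> str:
        # The only "memory" the model ever has is what we choose to prepend.
        prompt = "\n".join(memory + [user_msg])
        reply = run_model(prompt)
        memory.append(user_msg)   # any profile building happens here, in app
        memory.append(reply)      # code, not inside the model weights
        return reply

    print(chat("hello"))
    print(chat("what did I just say?"))  # works only because *we* stored it

The point of the sketch: delete the memory list and the model behaves identically on every call. Whatever profile building exists lives entirely in the surrounding application code.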
Because these are often sold with profile-building features, for example Recall. Recall is sold as "local only" with profile-building features. So it continues to be centralized PII that is a point of failure. As the quote says, and as I said.
Building and centralizing PII is indeed a privacy point of failure. What's not to understand?
The use of local AI *does not imply doing that, especially not the centralizing part*. Even if some software does collect and store info locally (which is not inherent to the technology, and anything with autosave already qualifies here), that is nowhere near as bad, privacy-wise, as filtering everything through a remote server, especially if there is some guarantee the software won't just randomly start exfiltrating it, such as being open source.
emphasis mine from the text you quoted…