• chicken@lemmy.dbzer0.com · 15 hours ago

    I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.

    I don’t know if I’m understanding this argument right, but the idea that integrating locally run AI is inherently privacy-destroying in the same way as live-service AI doesn’t make a lot of sense to me.

    • lime!@feddit.nu · 9 hours ago

      Think of Apple’s on-device image-scanning AI that flagged people as perverts after they had taken photos of sand dunes.

    • Umbrias@beehaw.org · 6 hours ago

      Building and centralizing PII is indeed a privacy point of failure. What’s not to understand?

      • chicken@lemmy.dbzer0.com · 3 hours ago (edited)

        The use of local AI does not imply doing that, especially not the centralizing part. Even if some software does collect and store info locally (which is not inherent to the technology, and anything with autosave already qualifies), that is nowhere near as bad privacy-wise as filtering everything through a remote server, especially if there is some guarantee, like being open source, that it won’t just randomly start exfiltrating the data.

        • Umbrias@beehaw.org · 1 hour ago

          I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.

          emphasis mine from the text you quoted…

          • chicken@lemmy.dbzer0.com · 21 minutes ago (edited)

            I don’t see how the possibility that it’s connected to some software system for profile building is a reason not to care whether a language model is local-only. The way things are worded here makes it sound like this is just an intrinsic part of how LLMs work, but it isn’t. The model still just does text prediction; any “memory” features are bolted on.
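            To make the “bolted on” point concrete, here is a minimal sketch (with a hypothetical `generate` stand-in for a local model, since no particular LLM runtime is named in the thread): the model itself is a stateless function from prompt text to continuation text, and any “memory” or “profile” is state that the surrounding app keeps and re-sends on every call.

```python
# Hypothetical stand-in for a local LLM: a pure function from prompt
# text to continuation text. It keeps no state between calls.
def generate(prompt: str) -> str:
    # A real model would do text prediction here; this stub just
    # reports how much context it was handed.
    return f"[reply to {len(prompt)} chars of context]"

# Any "memory" is bolted on by the application layer: it stores past
# turns itself and concatenates them into the next prompt.
history: list[str] = []

def chat(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    reply = generate("\n".join(history))  # model only sees what we pass in
    history.append(f"Assistant: {reply}")
    return reply

chat("hello")
chat("what did I just say?")

# The "profile" lives entirely in app-held state; clearing it erases
# everything the model could appear to "know" about the user.
history.clear()
```

            Whether such app-held state is collected, centralized, or exfiltrated is a design choice of the software around the model, not a property of local inference itself, which is the distinction the comment above is drawing.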