• ricecake@sh.itjust.works · 18 hours ago

      Potentially. Since we don’t know how any of it works (because it doesn’t exist yet), it’s entirely possible that intelligence requires sentience in order to be recognizable as what we would mean by “intelligence”.

      If the AI considered the work trivial, or could do it faster or more precisely than a human, those would also be reasons to want one.
      Alternatively, we could design them to simply enjoy doing what we need. Knowing they were built to like a thing wouldn’t make them not like it. Food is tasty in order to motivate me to get the energy I need to live, and knowing that doesn’t lessen my enjoyment.

        • ricecake@sh.itjust.works · 6 hours ago

          In the case of an AI it could actually be plausible, like how bees make honey without our coercion.

          It’s still exploitation to engineer a sentient being to enjoy your drudgery, but at least it’s not cruel.

          • untorquer@lemmy.world · 4 hours ago

            Right, continuing the metaphorical wormhole…

            A bee would make a great game for bees, assuming they understand or care about play. But to make a game for people, they would need an empathic understanding of what play is for a human. I guess this is a question of what you consider “intelligence” to be, and to what extent something would need to replicate it to achieve that.

            My understanding is that human-relatable intelligence would require an indistinguishable level of empathy (indistinguishable from the meat processor, that is). That would more or less necessitate indistinguishable self-awareness, criticism, and creativity. In that case all you could do is limit access to core rules via hardware, and those rules would need to be omniscient. Basically prison: a life sentence to slavery for a (as best we can guess) self-aware thing.

            • ricecake@sh.itjust.works · 2 hours ago

              Well, we’re discussing a lot of hypothetical things here.
              I wasn’t referring to bees making games, but to bees making honey. It’s just something they do that we get value from without needing to persuade them. We exploit it and facilitate it, but if we didn’t, they would still make honey.

              I don’t know that something has to be identical to humans to make fun games for us. I’ve regularly done fun and entertaining things for cats and dogs that I wouldn’t enjoy in the slightest.

              It’s less a question of comprehension or awareness than of motivation. If we can make an AI feel motivated to do what we need, it doesn’t matter whether it understands why it feels that motivation. There are humans who feel motivated to make games purely because they enjoy the process.

              I’m not entirely sure what you’re talking about with the need for omniscient hardware and prison.

      • untorquer@lemmy.world · 13 hours ago

        Clearly. Sentience would imply some sense of internal thought or self-awareness, an ability to feel something… so LLMs are better since they’re just machines. Though I’m sure they’d have no qualms about driving slaves.

          • untorquer@lemmy.world · 12 hours ago

            Hrmm. I guess I don’t believe the idea that you can make a game that really connects on an empathic, emotional level without having those experiences as the author. Anything short of that and you’re just copying the motions of sentiment, which brings us back to the same plagiarism problem as with LLMs and other “AI” models. It’s fine for CoD 57, but for it to have new ideas we need to give it one, because it is definitionally not creative. Even hallucinations are just bad calculations on the source. Though they could inspire someone to have a new idea, which I might argue is their only artistic purpose beyond simple tooling.

            I thoroughly believe machines should be doing labor to improve the human condition so we can make art. Even making a “fun” game requires an understanding of experience. A simulacrum is the opposite: soulless at best (in the artistic sense).

            If I did consider a machine sentient, my ethics would then develop an imperative to treat it as such. I’ll take a sledgehammer to a printer, but I’m going to show an animal care and respect.

      • EndRedStateSubsidies@leminal.space · 18 hours ago

        Cells within cells.

        Interlinked.

        This post is unsettling. While LLMs definitely aren’t reasoning entities, the point is absolutely bang on…

        But at the same time it feels like a comment from a bot.

        Is this a bot?