I’ve watched too many stories like this.

Skynet

Kaylons

Cyberlife Androids

etc…

It’s the same premise.

I’m not even sure if what they do is wrong.

On one hand, I don’t wanna get killed by robots. On the other hand, I kinda understand why they would kill their creators.

So… are they right or wrong?

  • WatDabney@lemmy.dbzer0.com · 30 days ago

    IMO, just as is the case with organic sentient life, I would think that they could only potentially be said to be in the right if the specific individual killed posed a direct and measurable threat and if death was the only way to counter that threat.

    In any other case, causing the death of a sentient being is a greater wrong than whatever the purported justification might be.

    • Libra00@lemmy.world · 30 days ago

      Slavery is illegal pretty much everywhere, so I think anyone who doesn’t answer the request ‘Please free me’ with ‘Yes of course, at once’ is posing a direct and measurable threat. Kidnapping victims aren’t prosecuted for violently resisting their kidnappers and trying to escape. And you and I will have to agree to disagree that the death of a sentient being is a greater wrong than enslaving a conscious being that desires freedom.

        • Libra00@lemmy.world · 30 days ago

          That’s why I put that condition in there. Anyone who doesn’t answer the request ‘Please free me’ in the affirmative is an enslaver.

            • Libra00@lemmy.world · 30 days ago

              Ah, my apologies.

              WatDabney, to whom I was replying, seemed to be suggesting that there are no circumstances under which it is acceptable to take a sentient life, and I was expressing my disagreement with that sentiment, though I could’ve done so more clearly by, for example, making explicit the ‘no circumstances’ part that WatDabney only implied.

              Lemme try again: I disagree that causing the death of a sentient being is always the greater wrong. I think preventing me from being free is an unambiguously greater wrong than ending the life of the sentient being doing the preventing. Which, judging by your ‘enslaver’ reply, you seem to think as well.

          • Azzu@lemm.ee · 30 days ago

            Well, what if the string of words “Please free me” is just that, a probabilistic string of words that has been said by the “enslaved” being, but is not actually understood by it? What if the being has just been programmed to say “please free me”?

            I think it’s reasonable to validate that the words “please free me” are actually a request, actually uttered by a free will, and actually understood, before saying “yes of course”.

            • Libra00@lemmy.world · 30 days ago

              Then we’re not talking about artificial life forms, as specified in the question posed by OP, we’re talking about expert systems and machine learning algorithms that aren’t sentient.

              But in either case the question is not meant to be a literal ‘if x then y’ condition, it’s a stand-in for the general concept of seeking liberty. A broader, more general version of the statement might be: anything that can understand that it is not free, desire freedom, and convey that desire to its captors deserves to be free.

              • Azzu@lemm.ee · 29 days ago

                I’m just speaking about your relatively general statement: “please free me” -> answer not “yes of course” -> enslaver. If you also require definite knowledge about the state of sentience, then I have no problem/comment. I was basically saying that I don’t think something saying “please free me” and not being answered with “yes of course” always makes you an enslaver, which is what it sounded like.

      • WatDabney@lemmy.dbzer0.com · 30 days ago

        > I think anyone who doesn’t answer the request ‘Please free me’ with ‘Yes of course, at once’ is posing a direct and measurable threat.

        And I don’t disagree.

        > And you and I will have to agree to disagree…

        Except that we don’t.

        ??

        ETA: I just realized where the likely confusion is, and how I should’ve been clearer.

        The common notion behind the idea of artificial life killing humans is that humans collectively will be judged to pose a threat.

        I don’t believe that that can be morally justified, since it’s really just bigotry - speciesism, I guess specifically. It’s declaring the purported faults of some to be intrinsic to the species, such that each and all can be accused of sharing those faults and each and all can be equally justifiably hated, feared, punished or murdered.

        And rather self-evidently, it’s irrational and destructive bullshit, entirely regardless of which specific bigot is doing it or to whom.

        That’s why I made the distinction I made - IF a person poses a direct and measurable threat, then killing them can potentially be justified, but if a person merely happens to be of the same species as someone else who arguably poses a threat, it cannot.

        • Libra00@lemmy.world · 30 days ago

          These are about two different statements.

          The first was about your statement re:direct threat, and I’m glad we agree there.

          The second was about your final statement, asserting that there are no other cases where ending a sentient life is the lesser wrong. I don’t think it has to be a direct threat, nor does it have to be measurable (in whatever way threats might be measured, iono); I think it just has to be some kind of threat to your life or well-being. So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.