I've watched too many stories like this.
Skynet
Kaylons
Cyberlife Androids
etc…
It's the same premise.
I’m not even sure if what they do is wrong.
On one hand, I don’t wanna die from robots. On the other hand, I kinda understand why they would kill their creators.
So… are they right or wrong?
And I don’t disagree.
Except that we don’t.
??
ETA: I just realized where the likely confusion here is, and how it is that I should’ve been more clear.
The common notion behind the idea of artificial life killing humans is that humans collectively will be judged to pose a threat.
I don’t believe that that can be morally justified, since it’s really just bigotry - speciesism, I guess specifically. It’s declaring the purported faults of some to be intrinsic to the species, such that each and all can be accused of sharing those faults and each and all can be equally justifiably hated, feared, punished or murdered.
And rather self-evidently, it’s irrational and destructive bullshit, entirely regardless of which specific bigot is doing it or to whom.
That’s why I made the distinction I made - IF a person poses a direct and measurable threat, then it can potentially be justified, but if a person merely happens to be of the same species as someone else who arguably poses a threat, it cannot.
These are about two different statements.
The first was about your statement re: direct threats, and I’m glad we agree there.
The second was about your final statement, asserting that there are no other cases where ending a sentient life was a lesser wrong. I don’t think it has to be a direct threat, nor does it have to be measurable (in whatever way threats might be measured, iono) - I think it just has to be some kind of threat to your life or well-being. So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.