The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important.
If AIs have moral standing, it could follow that they have a right to life. The android Data offers a useful test case: he demonstrates that he is self-aware in that he can monitor whether, for example, he is optimally charged or whether there is internal damage to his robotic arm.
However, Data most likely lacks phenomenal consciousness: the subjective, felt quality of experience. If Data doesn't feel pain, then at least one of the grounds for granting a creature moral standing is not satisfied.
But Data might satisfy the other condition: suffering might not require phenomenal consciousness the way pain essentially does. In the end, Data is not dismantled, because his human judges cannot agree on the significance of consciousness for moral standing.
Human beings don’t lose their claim to moral standing just because they act against the interests of another person.
By the same token, we can't automatically conclude that an AI lacks moral standing just because it acts against the interests of humanity or of another AI.
If moral standing is given in virtue of the capacity to suffer nonphenomenally, then Skynet and Data both get it, even though only Data wants to help human beings.
How humanity chooses to answer the question of moral standing for nonbiological creatures will have significant implications for how we deal with future AIs – whether kind and helpful, like Data, or set on destruction, like Skynet.