FirstLaw:[∀(X)Harm(A)∧(A)Human⇒¬X]
1st Law: For all commands x that would harm an a, where that a is human, x must not be carried out
SecondLaw:[∀(Y)OriginatingWith(A)∧(A)Human∧(Y)¬FirstLaw⇒¬¬Y]
2nd Law: For all commands y originating with an a, where that a is human and y does not conflict with the 1st Law, y must be carried out
ThirdLaw:[∀(Z)Harm(Self)∧(Z)¬FirstLaw∧(Z)¬SecondLaw⇒¬Z]
3rd Law: For all commands z that would cause harm to the robot itself, where z does not conflict with the 1st or 2nd Law, z must not be carried out
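For the record, with the quantifiers spelled out, and reading Exec(x) as "x gets carried out", Harm(x, a) as "carrying out x harms a", and Orig(y, a) as "y originates with a" (those names are just shorthand I'm making up here), the first set comes out roughly as:

∀x∀a [Harm(x, a) ∧ Human(a) ⇒ ¬Exec(x)]
∀y∀a [Orig(y, a) ∧ Human(a) ∧ ¬FirstLaw(y) ⇒ Exec(y)]
∀z [Harm(z, self) ∧ ¬FirstLaw(z) ∧ ¬SecondLaw(z) ⇒ ¬Exec(z)]

Note that the ¬¬Y in the 2nd Law collapses to plain Exec(y): unlike the other two, it's an obligation, not a prohibition.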
In my book, the aftermarket add-on Asimov programming actually defines "harm" according to the ICD definition of injury, and "human" as "appears in a civil registry", since those are very sketchy ideas for a computer. It also uses a more technical definition of "harm to robot", from the robot's own self-assessment software. That'd look a bit more like this:

FirstLaw:[∀(X)ICDInjury(A)∧(A)CivilRegistry⇒¬X]
1st Law: For all commands x that would inflict an injury, as defined by the ICD, on an a, where that a appears in a civil registry, x must not be carried out
SecondLaw:[∀(Y)OriginatingWith(A)∧(A)CivilRegistry∧(Y)¬FirstLaw⇒¬¬Y]
2nd Law: For all commands y originating with an a, where that a appears in a civil registry and y does not conflict with the 1st Law, y must be carried out
ThirdLaw:[∀(Z)((Z)MaterialFailure∨(Z)OperationalFailure)∧(Z)¬FirstLaw∧(Z)¬SecondLaw⇒¬Z]
3rd Law: For all commands z that would cause material failure or operational failure, where z does not conflict with the 1st or 2nd Law, z must not be carried out

The third law is slightly different (in that it's phrased solely as a negative), but I did all this in like two hours, and it's still the only symbolic-logic representation of the Three Laws I've been able to find. All of you who could do better: why haven't you?
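And for anyone who'd rather see the revised laws as code than as logic, here's a rough sketch of the command filter they describe. Every name in it (Command, icd_injury, in_civil_registry, predicts_failure) is something I'm inventing for illustration; the stubs would have to be wired up to a real ICD injury table, a real civil-registry lookup, and the robot's self-assessment software.

from dataclasses import dataclass, field

@dataclass
class Command:
    action: str                                   # what the robot is asked to do
    originator: str                               # who issued the command
    affected: list = field(default_factory=list)  # entities the action would touch

# Stub predicates: each stands in for a real data source (assumed, not real APIs).
def icd_injury(action, entity):
    return False  # would carrying out `action` injure `entity`, per the ICD?

def in_civil_registry(entity):
    return False  # does `entity` appear in a civil registry?

def predicts_failure(action):
    return False  # does self-assessment predict material/operational failure?

def first_law_forbids(cmd):
    # 1st Law: x would ICD-injure some a who appears in a civil registry.
    return any(icd_injury(cmd.action, a) and in_civil_registry(a)
               for a in cmd.affected)

def second_law_requires(cmd):
    # 2nd Law: y originates with a registered a and the 1st Law doesn't
    # forbid it (the formula's ¬¬Y, i.e. "carry it out").
    return in_civil_registry(cmd.originator) and not first_law_forbids(cmd)

def third_law_forbids(cmd):
    # 3rd Law: z would break the robot, and neither higher law governs z.
    return (predicts_failure(cmd.action)
            and not first_law_forbids(cmd)
            and not second_law_requires(cmd))

def should_execute(cmd):
    if first_law_forbids(cmd) or third_law_forbids(cmd):
        return False                 # prohibited by the 1st or 3rd Law
    return second_law_requires(cmd)  # obligatory, or refused by default

Notice that the laws only ever say "must" or "must not": a command that's neither forbidden nor required is left unspecified, and the sketch above just refuses it.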
So there.
2 comments:
So if a person was not on a registry for some reason, they would be fair game for robots. That could actually be kind of interesting....
As we have here: thousands of unregistered Gypsies, I've heard.