2010/05/05

Has an Android Buddha-Nature, or Not?


Anyway, you can find videos on YouTube of Hatsune Miku, the Vocaloid, singing the Amatsu Norito and the Shariraimon—respectively, Shinto's equivalent of the Pater Noster or the Shema Yisrael, and the Mahayana Verse of Homage to the Buddha's Relics. It's kinda odd, because Miku...isn't a person; she's an entirely simulated character. But at the same time it's damn cool. If I ever manage to make my books into shows, you can rest assured that both will feature in the soundtrack.

But that—a Vocaloid participating, even fictitiously, in the spiritual life of a nation—set me off on a number of delightful SF-writer tangents. Another Vocaloid song, Kokoro Kiseki (usually performed by one or both of the Kagamine twins, Len and Rin), is basically everything that's good about Astro Boy and none of what isn't. The song's conclusion also deals with...questions. Morphology, longevity...incept dates.

A Blade Runner quote for your delectation; expect more to follow.

As I sort of touched on in "What Is a Man?", there aren't really any new questions raised about "what it means to be human" if you could make an AI. Oh, sure, there might be questions about "what is a mind?"—assuming you found a software work-around to get past the Lucas-Penrose argument (the work-around used in my book was deprecated in Deuteronomy 18:10-11)—but that question would either have to be solved before you could make one, or be answered when you made one. Not that real philosophy doesn't already know: mind, or intellect, is how a rational being interacts with concepts, just as the senses are how a sentient being interacts with stimuli. Interestingly, it would actually be possible, in a sense, for an AI to be rational but not sentient. Yes, it'd be weird to have an AI whose only data source is keyboards and computer networks—one with no haptic, auditory, or visual perception—but you could do it.

But there is still a question: "What is the spiritual nature of an AI?" Given that an AI is a rational being, it would stand to reason that it has the same rights as a person...which would make those Asimov laws monstrously immoral. But at the same time, other rational beings have a right to protect themselves, so it would be entirely moral to build in some kind of kill-switch for AIs. Counterintuitive, but there it is: an AI has all the rights of a human, including the right not to be killed without good reason, but the rights of other people still make it permissible to render an AI easily killable, even though they don't make the Three Laws okay. We let cops carry guns; we don't lobotomize the citizenry.

But what's an AI's role in society? It's a bad idea to make the things without first figuring that out. Though a company could just get involved for the publicity—"GiantEvilSoftCo: Other computer scientists talk about AI, but ours actually did something about it"—they're more likely to want to sell them. Or, well, hire them out, if we're going to pretend slavery will never come back. And then the question becomes, "What the hell do you want one for?" Certainly you won't actually want univerzální roboti, Čapek's universal robots; not only do you not need AIs for menial tasks, but not automating at all can be beneficial, at least in the "idle hands are the Devil's plaything" aspect of social policy.

Basically, you'd want AIs for jobs where you want to minimize the possibility of human error but still have control by an actually rational being rather than an automated system. You'd probably get a lot of AIs in administrative positions, especially in the military and large-scale industry, where lots of operations have to be coordinated by someone with an actual understanding. If physicists are right about the role of the observer in quantum mechanics, you might also use AIs for watching experiments. You might actually see them as something like super-secretaries, acting as administrative assistants to every executive or officer in a body, simultaneously. The only difficulty there is that secretaries can't be hacked; then again, AIs can't be tortured or blackmailed, so it probably balances out. You also probably can't have an affair with your AI (let's all keep exceptions to that rule to ourselves, hmm?). You might use them as spies or the like, depending on how expensive they are to make.

All that is, of course, assuming the people who set AI policy understand them, which won't be the case. In my book, for instance, AI isn't widely understood, and the process for making them is a closely kept secret. AIs can be bought and sold (they're not legally people, because the company that makes them hasn't told anyone they are), and, while they're mostly used for the roles I mentioned, people also sometimes put them in sex-'droid bodies (most sex-'droids only have video-game "AI", of course, but some people prefer more depth—though they seldom ask the AI's permission). Squickety doo-dah, squickety-day!

Finally, ahem (and yes, I know, replicants are bioroids, not really AIs):
I've...seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams...glitter in the dark...near the Tannhäuser Gate. All those moments will be lost in time, like...tears in rain. Time to die.
