- Why is it that every 18th- and 19th-century novelty toy, playing music or writing letters or doing various other tasks, some of them "programmable" because there are multiple cartridges of mechanical "instructions" for them to follow, gets called an "early computer"? It's a sewing machine or a player piano—are those "early computers"?
We've been doing mechanical embroidery, with "programmed" patterns, since at least the 1870s; fundamentally none of these "early computers" is any different from the industrial loom—except that the industrial loom drastically lowers the price of decent garments, while the novelty toy does nothing and improves the life of nobody (well, no more than any other interesting novelty item improves people's lives—a form of "happiness" that a fiction-writer probably ought not to despise).
You might call them "early robots" (since "robot" refers primarily to the industrial usage), but that's not what people do. Words mean things. Please don't use words that are not applicable, just because they are similar to other words that are.
- I feel like setting out some rules of engagement, for people who wish to defend the millenarian hopes of the Transhumanist faithful, when other people make remarks about the realities of neurology and the limits of machine logic, and their implications for Kurzweil's prophesied techno-Rapture. One, don't take your username from books (or God forbid, movies of books) by William Gibson; he knew about as much about computers as H. G. Wells knew about trans-lunar injection. Two, don't demonstrate that you don't actually know what "most complex" means—Microsoft Windows is, objectively, the most complex computer program the human race has ever created, and your opinion of its quality as an OS is completely irrelevant. (That other OSes are "elegantly simple" by comparison is actually one of their selling points—the significance of "most complex" in this context is that Microsoft Windows, with all its many, many crashes, is still 3.2 million times smaller than the representation of a human brain as one line of code per cell or synapse...and realistically we would probably need multiple lines of code to represent certain cells and synapses. The arithmetic is sketched just below.)
Three, don't demonstrate that you are too stupid to grasp what hardware emulation is, by asserting that the number of brain cells and synapses is irrelevant—and three-point-five, do not act like actually researching relevant facts like those numbers, on something like this, makes someone a target for ridicule. (The reverse is the case: that you don't know the relevant facts, and have not even bothered trying to explain how your enterprise will cope with them, shows how intellectually bankrupt you are.) And finally, four, do not tell the other party they must've selected "Gödel Incompleteness" at random, as a basis for their case. If you don't even know that Gödel only came up with the idea while exploring the limits of machine logic—because Hilbert was trying to make a machine that could handle all of logic—then you are simply announcing that not only do you have no right to your opinion, you don't even know what would or would not constitute a right to an opinion on this matter. I'm not saying you can't challenge the argument from neurology (you pretty much can't challenge the argument from the limits of machine logic, or at least nobody has yet—every attempt to refute Lucas-Penrose that I know of has mostly been a demonstration of illiteracy); but the fact of the matter is that you're not trying to challenge those arguments.
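For the curious, here's the arithmetic behind rule two's 3.2-million figure, as a quick Python back-of-envelope. The inputs are the commonly cited estimates (about 50 million lines of code for Windows, about 86 billion neurons, about 160 trillion synapses; synapse counts vary a lot between sources), not anything authoritative:

    # Back-of-envelope check of the "3.2 million times smaller" claim.
    # All inputs are commonly cited estimates, not authoritative figures.
    windows_loc = 50e6    # ~50 million lines of code in Microsoft Windows
    neurons     = 86e9    # ~86 billion neurons in a human brain
    synapses    = 1.6e14  # ~160 trillion synapses (estimates vary widely)

    # One line of code per cell or synapse:
    brain_loc = neurons + synapses

    print(f"{brain_loc / windows_loc:,.0f}x")  # about 3,201,720x

Even taking the low end of the synapse estimates, the ratio stays in the millions; the exact figure matters far less than the order of magnitude.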
(Yes, I am thinking of a particular person. But all of these errors are common occurrences, though the unnamed idiot in question was the first I'd ever seen with all of them.)
- It occurs to me that "civil registries" are a poor thing to base a robot's ethics programming on—over and above the silliness of the Three Laws. Realistically an AI would be able to recognize humans, among other things, with a dedicated "object-class detection" (AKA "object recognition") program, presumably one of the suite of "weak" AIs a strong AI (assuming you can get one) would be made up of.
Likewise, one wouldn't define "harm" according to the ICD definition of "injury"; there are apparently ways now to make a robot mechanically detect when any motion would exceed "the human pain tolerance limit", so presumably a society that can make a natural-language interface you can meaningfully talk with can also give its AIs sufficient situational analysis to know when an action would exceed one of several limits of human tolerance. "Safeguarding-space violation" or "safety-space violation" would apparently be the term for "harm" in a robotics context.
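Here's a minimal sketch of what that situational analysis might look like, continuing in Python. Everything in it is my own illustration: the force limits are placeholder numbers, not the actual per-body-region values that ISO/TS 15066 (the collaborative-robot spec that tabulates pain-onset limits) publishes, and the contact-force model is a crude stand-in for real contact dynamics:

    # Toy "safeguarding-space violation" check. The limits below are
    # illustrative placeholders, NOT the real per-body-region values
    # that ISO/TS 15066 tabulates for collaborative robots.
    PAIN_TOLERANCE_LIMIT_N = {  # quasi-static force limits, in newtons
        "hand": 140.0,
        "forearm": 160.0,
        "skull": 130.0,
    }

    def predicted_contact_force(speed_mps: float, effective_mass_kg: float) -> float:
        """Crude stand-in for a real contact-dynamics model."""
        return 1000.0 * effective_mass_kg * speed_mps

    def motion_is_safe(speed_mps: float, effective_mass_kg: float,
                       body_region: str) -> bool:
        """Would the planned motion stay under the pain-tolerance limit
        for the body region it might contact?"""
        force = predicted_contact_force(speed_mps, effective_mass_kg)
        return force <= PAIN_TOLERANCE_LIMIT_N[body_region]

    # The object-recognition weak-AI has flagged a human forearm near
    # the arm's planned path: slow down until the motion is safe.
    speed = 1.5  # meters per second
    while not motion_is_safe(speed, effective_mass_kg=4.0, body_region="forearm"):
        speed *= 0.5

The point of the sketch is just the shape of the thing: "harm" stops being a dictionary matter and becomes a numerical check the motion planner runs before every move.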
Incidentally, robots that don't have Asimov programming (which, remember, AIs dislike) would still be programmed to behave in accordance with ISO 13482. Because we now have an industrial standard for those!
- It occurs to me that what you'd really want AI for is to have one person, on call 24/7, who can answer any concern anyone might bring to it, rather than getting "well, when Bob was on duty he said..." situations. You'd still probably want to have a normal person in overall command, since you don't necessarily want something that can be hacked to have any legal authority, but there is, as you can see, a real market for them (unfortunately for hippies, that market is mostly the military).
Incidentally, you could probably give the same job to a person who's had their need for sleep done away with. But whether that's actually possible, or advisable long-term, is an open question; sleep serves some necessary purpose, since even organisms that sleep with only half their brain at a time still sleep, which they wouldn't if they could do without it. We don't actually know what sleep is for, or what doing away with it would cause.
There's also the question of whether it's remotely ethical to ask anyone to have their brain screwed with like that, a question that doesn't come up with an AI (though there are questions about whether you ought to create a person just to do a specific task), but apparently most people don't think there's anything wrong with restricting key posts to eunuchs? (Many eunuchs, historically, were volunteers, so "consent makes everything okay" is not valid—not that it ever is.)
- As is my custom when writing one of these posts, I read manga about robots in my other browser tabs. There's a neat little one called "Ninomae Shii no Tsukaikata" or "How to Use Ninomae Shii", which sadly only lasted 30 chapters (the reader-questionnaires are a harsh mistress). It's about a robot, made by a middle-school Nobel Prize winner, who is searching for his purpose.
But...the purpose of a strong-AI, aside from any jobs it might happen to perform, would be the same as the purpose of any other self-aware entity. Self-aware entities have, as their purpose, the fact they exist, and the contemplation and appreciation of that fact. (The Baltimore Catechism phrases it more succinctly, in the famous answer to question 126.)
- There is apparently some idea abroad in the land that "android" means bio-engineered, while a robot shaped like a man is called a "humanoid". Only, Common Usage, mammajamma: "android" means "robot shaped like a man" ("gynoid" is sometimes used for "robot shaped like a woman", but usually that's just called "female android"), while the bio-engineered things are called "bioroids". "Humanoid", meanwhile, means anything shaped like a human, living or not (an "android" is a humanoid robot). (Sometimes, in settings where such a distinction makes sense—all of them space-opera—"humanoid" is restricted to Rubber Forehead Aliens, and the other, more vaguely man-shaped, guys are called "bipeds" or something.)
- I was wondering how to write androids getting freaked out (remember, I have strong-AIs due to a highly unorthodox software workaround). At first I thought they wouldn't get chills, because while some of them do have body hair (the one whose job is infiltrations does), they didn't evolve from animals that sometimes survived by puffing up their fur to look bigger.
It occurred to me that they might involuntarily switch to a different power-generation mode, one more active than just homeostasis, as their "subconscious" (the multiple weak-AI programs that govern their bodies, semi-independently of the strong-AI that is their consciousness) gears up for fight-or-flight. But then someone, talking about Transformers, said "chillingly" still makes sense in the context of the Cybertronian "biosphere", because their cooling-systems shift into high-gear to dump the excess heat caused by exertion.
So...androids get chills. Briefly; unlike humans, it's much simpler for them to control involuntary responses like that. Incidentally, the strong-AI programs themselves are, in part, made of a gestalt of multiple weak-AIs, just like the "unconscious" programs that govern the bodies they're in—a couple of weak-AIs handle language, a couple more handle object-recognition, and so on. There are even programs for making decisions and for "discursive thought", but none of them is really the AI program, any more than "you" is your language-capability or decision-making or even discursive thought.
- Likewise, my strong-AI androids can dream (make your own "electric sheep" joke). Why? Well, they periodically enter a defrag mode, in which they can't otherwise be conscious, but, since their AI-consciousness doesn't simply cease to exist, they have experiences made up of random portions of their memories—which is basically what dreams are. They experience fragmentation because most fragmentation-preempting techniques can cause performance problems. Not a big deal for a PC; kinda one, for a person's mind.
The non-android AIs don't have that problem (see above: they're awake 24/7). They have enough processing capability (since they don't have the space constraints an android does) that they can either preempt fragmentation or else defrag "in the back of their mind". That might be kinda like those animals that only sleep with half their brain at a time. I think it adds a touch of realism, that cramming an AI into whatever processing capacity fits inside a head comes with some loss of capability.
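Since the last two entries lean on the same architecture, here's a toy sketch of both ideas at once, again in Python: a strong AI as a gestalt of weak-AI modules (none of which is "the" mind), plus the defrag/dream distinction between a space-constrained android and a rack-mounted AI. Every name in it is my own invention, for illustration only:

    import random
    import threading

    class StrongAI:
        """A "mind" that is nothing but a bundle of weak-AI modules."""

        def __init__(self, embodied: bool):
            self.embodied = embodied  # True for an android-sized brain
            self.memories = []        # fragments accumulate as it runs
            # The gestalt: no single module here is "the" AI, any more
            # than your language faculty is "you".
            self.modules = {
                "language": lambda e: f"parsed {e}",
                "object_recognition": lambda e: f"recognized {e}",
                "decision": lambda e: f"responded to {e}",
            }

        def experience(self, event: str) -> None:
            # Each weak-AI processes the event; the traces become memory.
            for name, module in self.modules.items():
                self.memories.append(f"{name}: {module(event)}")

        def _defrag(self) -> None:
            self.memories.sort()  # stand-in for real maintenance work

        def dream(self) -> list:
            # During defrag an embodied AI can't run normal consciousness,
            # so it "experiences" random fragments of memory: a dream.
            return random.sample(self.memories, k=min(3, len(self.memories)))

        def maintain(self) -> None:
            if self.embodied:
                print("dreaming:", self.dream())
                self._defrag()  # consciousness suspended meanwhile
            else:
                # A rack-mounted AI has the spare capacity to defrag
                # "in the back of its mind" and stays awake throughout.
                threading.Thread(target=self._defrag).start()

    android = StrongAI(embodied=True)
    for event in ("a stranger", "a loud noise", "a closed door"):
        android.experience(event)
    android.maintain()  # prints three random memory fragments: a "dream"

The design point falls out naturally: the android pays for its packaging, because maintenance that a bigger machine can hide in a background thread costs the embodied one its consciousness for the duration.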
2014/05/17
We Call It Voight-Kampff for Short
Post about artificial intelligence and robots. You ought to know what the title's a reference to.
Labels: comics, Philosophy, production design/props, reality check, scifi, writing