2013/06/10

Thinking About Thinking...Machines

Thoughts on robots and AI (well, one of 'em's really more about automation).

This is post 479, which is a prime. It is also a safe prime (it is equal to 2p + 1, where p is also a prime), the sum of nine consecutive primes (37 + 41 + 43 + 47 + 53 + 59 + 61 + 67 + 71), a Chen prime (because if you add 2 to it the sum is either prime or the product of two primes, in this case 481 = 13 × 37), and a self number (a number that cannot be written as any other number plus the sum of that number's digits; e.g. 32 is not a self number because 32 = 25 + 2 + 5).
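
If you feel like double-checking that sort of trivia yourself, here's a quick Python sketch, nothing fancier than trial division (the comments name the same properties listed above):

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def is_semiprime(n):
        # product of exactly two primes (not necessarily distinct)
        for p in range(2, n):
            if n % p == 0:
                return is_prime(p) and is_prime(n // p)
        return False

    def digit_sum(n):
        return sum(int(c) for c in str(n))

    n = 479
    print(is_prime(n))                                      # prime
    print(is_prime((n - 1) // 2))                           # safe prime: 479 = 2*239 + 1
    print(sum([37, 41, 43, 47, 53, 59, 61, 67, 71]) == n)   # nine consecutive primes
    print(is_prime(n + 2) or is_semiprime(n + 2))           # Chen prime: 481 = 13 * 37
    print(all(k + digit_sum(k) != n for k in range(1, n)))  # self number
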
  • During one of the chick-flick terminal-patient scenes in Halo 4, Cortana says this:
    I can give you over forty thousand reasons why I know that sun isn't real. I know it because the emitter's Rayleigh effect is disproportionate to its suggested size. I know because its stellar cycle is more symmetrical than that of an actual star. But for all that, I'll never know if it looks real... if it feels real...
    Only...the way that you know it doesn't look real is that its Rayleigh effect is disproportionate and its stellar cycle is too symmetrical. If a human were sensitive enough to tell you it didn't look real, that would be what he was talking about; he just probably couldn't define it that precisely. If Cortana had a body with temperature sensors, the difference produced by that disproportionate Rayleigh effect (it would have an effect on the heat of the "sunlight") would be perceptible to those sensors; a human who reported that the sensation didn't "feel real" would be reporting the same thing, and, again, just couldn't define it like that.

    This is key for the portrayal of AI (always assuming we suspend our disbelief about how they're logically impossible): nothing about being made of meat is special. An android might have more awareness of how he senses things than you do, although the program that is his mind may be so massive that it has to run on separate hardware from those sensors, just like yours does (that's what the autonomic nervous system is, the other hardware). Fundamentally, though, he's still sensing the same things, and his experience of sensing them is comparable to yours. To the extent, that is, that any other person's experience of sense-perception is comparable to yours—just as you can't describe color to a blind man, you have no way of knowing that anything another sighted person sees is anything like what you see, except in relation to physically quantifiable things. You can diagnose that a person can't distinguish the red wavelength from the green one; you can never be certain that his experience when he sees "blue" is anything like yours.

    Or to wrap it in buzzwords, there fundamentally would be no difference of "qualia" for a true AI, any more than there is for an alien or for another human being. There would only be the quantitative differences in function and sensitivity between the sensory apparatus, which exist also between humans.
  • What tropers call a Logic Bomb, seen e.g. here, is silly. Most people acknowledge that. What most don't seem to do is try to figure out what one actually would do to an AI. Most likely the AI would deal with it the way computers deal with divide-by-zero errors. Actually, a contradiction in terms is basically the logical equivalent of a not-a-number error, if you think about it.

    It occurs to me, true AI would be an absolute bitch to come up with exception-handling for. One problem, for example, is that the human mind seems to prefer resumption over termination, which is apparently a bad idea with computer programs. Also our exception-handling for major things appears to take the form of mental illness (for minor things, well, that's probably what "humor" is).
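
    Just to make the analogy concrete, here's roughly what those two failure modes (hard termination vs. quietly carrying on with a poisoned value) look like in ordinary code. A minimal Python sketch; the mapping onto AI is my analogy, obviously, not a claim about how any real one would be built:

        import math

        # Integer division by zero is a hard error: the computation terminates
        # unless somebody wrote a handler for it.
        try:
            1 // 0
        except ZeroDivisionError:
            result = None  # "termination": abandon this line of reasoning

        # Floating-point math instead hands back a quiet non-answer that propagates.
        nan = float("inf") - float("inf")   # an indeterminate form yields NaN
        print(math.isnan(nan))              # True
        print(math.isnan(nan + 42))         # True: anything built on it is also NaN
        print(nan == nan)                   # False: NaN isn't even equal to itself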
  • I realized, my workaround for getting "strong" AI is not just the handwave to have what I wanted in the story (even though "integrate your handwaves into the plot" is pretty much half of what science fiction is). It's also an important element of a theme that several other elements of my book also explore. Namely, science is not religion. If you thought science and religion were in conflict (rather than being engaged in a deadly game of "sea lion and squirrel"), or if you thought science now could answer questions once answered by religion, then you are guilty of what analytic philosophers call a category error.

    To put it one way, and use Soviet slogans to mean their opposite—always a delight—"The Earth is blue, but there is no God out there." Because space is not Heaven. 1600 years ago, St. Augustine mocked the kind of God you people think science can substitute for, and so did the Neoplatonists he learned philosophy from, 200 years before that. There's a reason every SF attempt to hijack the religious impulse is basically just a retread of Cosmism, which is just a retread of Gnosticism (as is everything in Transhumanism that's not directly copied from the Cosmists). Fundamentally they're thinking like the Manicheans (also Mormons and Scientologists), to whom "spirit" is actually ultra-rarefied matter, and who could seriously talk about spiritual "places"—and mean it literally, not just as an analogy for condition-of-being.

    Also, would you people please get a different set of ideas? Gnosticism is the Team Rocket of worldviews. Except Team Rocket is occasionally cool.
  • The "demographic winter" currently looming over the developed world is actually less of a problem than it seems...provided the developed country in question has low immigration, anyway. Then, instead of "watch your society transform into a colony of a different one, before your very eyes", it's more a matter of "certain hiccups as the old people die off". For one thing, most of the countries where declining birthrates don't go with high immigration have relatively small welfare states—in Japan and South Korea, old people get much more of their incomes from private savings and private pensions, so there won't be nearly as much of a social-security crisis.

    And (in case you were wondering why I bring it up here), the other thing is, any personnel gaps created by the demographic shifts could be covered by automation. Especially because Japan is currently massively overstaffed. You ever wonder why they still do so much business on paper, or why most of their gas stations aren't self-service? (Wait, let me back up: Japan is actually weirdly low-tech in a lot of ways. They do lots of business and record-keeping on paper, often with forms filled out by hand, and the majority of their gas-stations are still full-service.) It turns out it was all in order to create lots of make-work jobs for their populace, which might have something to do with why their unemployment is 4.1% (ours is 7.5%). Thus, a lot of those jobs? Not needed; the people they were keeping out of trouble will be put to better use elsewhere.

    Also, while it's probably true that, as that article says, the Japanese go a bit too far in classifying any programmable industrial machinery as "robots", the fact is that that's not much broader than the definition used by the rest of the world. I've actually worked with a programmable robot arm (my high school had a cool tech program, though I didn't take much of it), and the thing was honestly not that much different from the laser-etcher at the other end of the classroom.
  • I am fascinated by something that may well be the dumbest thing in all of Star Trek. Yes, guess what, that hole had a bottom. Also, it's not what you're thinking. Know what the dumbest thing is? They let Data play poker. That is, they let a being that not only doesn't show emotions, but hasn't got them, play a bluffing game. Not only does he have no tells, but he can probably hear their pulses change and see their body temperatures fluctuate when they bluff.

    I mean, seriously, people. You've got a better chance of beating Data at chess than at poker, and the human-vs.-computer chess ship has pretty much already sailed by our time. There would probably be a sign hanging over poker tables in any setting with AI, "Senses must be set to human norms and emotional programming must be present". Failing that, you don't get to play poker. Same goes for cyborgs (no, that's not discrimination; it's a game, and we handicap in games—anyone who doesn't understand that probably also thinks it'd be awesome for adults to play tackle football against peewee players).

    Of course, you're not going to be playing poker anyway, as I said here. You're going to be playing MonHan.
  • I always get weirded out when AIs/androids in things, that clearly have self-awareness, are asserted to not have souls. Now, admittedly, "soul" as commonly used is restricted to rational souls (animals, plants, and inanimate objects have "souls" as philosophers understand the concept, but not the same kind), but the AIs in question are rational. "Soul" just means "that which makes it what it is"; that which makes a rational being what it is, is, ipso facto, a rational soul.

    There's also, often, a lot of grandstanding about "oh, do we have the right to create intelligent beings with souls?", but, uh, apart from the fact that you can do that without getting out of bed (if you know what I mean, nudge nudge wink wink), there's the fact that you have a multi-million-dollar fertility industry mass-producing the aforementioned beings. While, by the way, destroying a very hefty proportion of the embryos it creates. And let's not even get into you people's strange idea that reproductive cloning isn't as bad as being cloned for spare parts. No, sorry; there's not a single argument against the creation of AI that doesn't go double for half the things we're already doing.

    As I've said before, when speculative fiction starts saying stupid things about ethics, it generally means that an intelligent discussion might step on something the writers are in favor of.
  • It's debatable whether or not the "neural net" method of trying to do artificial intelligence is a good idea—is it necessary to mimic the function of a human brain rather than making an ad hoc program from the ground up on more conventional hardware? Certainly it seems just as sensible, and probably simpler, to create a program designed to do what people's minds do rather than to mimic the physical means by which they do it. But let's leave that to one side. In any case, any computer you could upload your mind to would need to work like your original hardware.

    Assuming that neural nets are the way to go, the human brain contains 100 billion neurons, about 3 trillion glial cells, and about 125 trillion synapses. The neurons are more important than the glial cells, and the synapses are probably more important than the neurons, but we can't say that any of them are trivial. So if you assign a single bit in a computer to one of those three things, you wind up with 128.1 trillion bits, or 14.5633 terabytes (the whole Library of Congress is usually quoted as being 20 terabytes). Only...brains aren't binary; what neurotransmitter carries a "signal" appears to be at least as important as what the "signal" is. Thus one perhaps ought to treat each cell not as a bit...but as a single line of code. Microsoft Windows, the single most complex computer program mankind has ever produced, is 40 million lines of code; the code needed for those cells and synapses would be about 3.2 million times larger.
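
    Here's that arithmetic spelled out, if you want to check it (a quick Python sketch; the 14.5633 figure assumes binary terabytes):

        neurons  = 100e9     # 100 billion
        glia     = 3e12      # 3 trillion
        synapses = 125e12    # 125 trillion

        bits = neurons + glia + synapses   # one bit per cell or synapse
        tib  = bits / 8 / 2**40            # bytes, then binary terabytes
        print(bits)                        # 128.1 trillion
        print(round(tib, 4))               # 14.5633

        windows_loc = 40e6                 # lines of code in Windows, per the figure above
        print(bits / windows_loc)          # 3202500.0, i.e. ~3.2 million Windowses' worth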

    Oh, it gets worse. Any time you've got code, you get coding errors. The industry average is between 10 and 50 defects per 1,000 lines of code (I averaged the several averages I found and got 38.75); the best we've ever done (1 in 10,000—at NASA's Software Assurance Technology Center at Goddard Space Flight Center) was in highly specialized aerospace applications, not horrendously fuzzy things like natural-language processing, and under incredibly controlled conditions.
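
    And the bug arithmetic, under the same (admittedly cartoonish) one-line-per-cell assumption from the sketch above:

        lines = 128.1e12                # one "line" per cell or synapse, as above

        industry_rate = 38.75 / 1000    # defects per line, the averaged industry figure
        nasa_rate     = 1 / 10000       # defects per line, NASA SATC's best case

        print(lines * industry_rate)    # ~4.96 trillion bugs
        print(lines * nasa_rate)        # ~12.8 billion bugs, even at the best rate ever achieved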

    That mind-uploading that they said was just around the corner? Leaving the unbelievably bad metaphysics to one side, and even if we had any way to code for anything remotely like "3 million times as complex as Windows", you'd still have to code each mind individually. The troubleshooting for each one would cost as much as the entire Space Shuttle program, and we'd still be basically giving you a stroke as an unavoidable part of the process. Similarly, with AI proper, every one of 'em gets Fetal Alcohol Syndrome simply as a door-prize. Even without the logical impossibility, AI (and mind-uploading) is still on par with faster-than-light travel—as in, we can just barely conceive of ways to do it at all, and so far none that are feasible to our, or any conceivable future, civilization.
  • How to work around this in science fiction? Well, you could have the same suspension of disbelief that gets you AI in the first place also cover for the fact that it's not feasible even if it weren't impossible. But personally I find that inelegant. In my book, the first AI basically did cost as much as the Space Shuttle program, although my AIs aren't built on a neural net model (the alternative, to approximate human levels of cognition, would still need to be that complex). But when it succeeded (by cheating), the AI had heaps of processing power and lots of spare time—and the company that made him wanted to make more. So they used him to code more of them, and double-check their coding.

    It's bizarre to me, by the bye, that people think that an AI copied from another one, or from a human (I mean the "neural-clone" type, like Cortana, not an uploaded mind), would in any sense "be" the same person. Hell, the question's actually whether an uploaded mind can really be regarded as the same person, or whether you just neural-cloned someone and wiped their brain at the same time. See also the metaphysical issues with Star Trek's transporters (it really is a boon that neither one is possible). But I think the whole "copied from the other mind, therefore is the same person" comes, oddly enough, from our lacking a belief in reincarnation. Watch an anime some time, you doofuses! (If you make an error that watching InuYasha would've prevented, it's time to take a break.) Even if one assumes reincarnation (for which one must either assume Platonic dualism or Buddhist anatman), the whole of the identity is not the same; Kagome isn't Kikyô, and Syaoran isn't Watanuki.

    Then again, the fact that there's more than one person in humanity at all is weird—Averroes really ought to be right, and we all share only one mind and soul. That Aquinas nevertheless said he was wrong, because of the observed phenomenon of human individuality, is why Aquinas' civilization invented science and Averroes' didn't. See also Buridan's response to Peripatetic objections, when he said the celestial bodies were made of the same substances as the Earth and were themselves "worlds" (Aristotle says the Earth is the only world, and the celestial bodies are a radically different substance). Namely? "God can make as many worlds as he likes."
