Sci Phi Show, Suspension Of Belief #4

Matt Arnold
July 9, 2007

From now on, my episodes of The Sci Phi Show will be introduced and closed by me. This episode is about "Biochauvinism and the Mind Machine Interface".

MP3 Link (10 minutes 5 seconds)

Welcome to the Sci Phi Show segment, "Suspension of Belief", number four: Biochauvinism and the Mind Machine Interface. I'm your guest host Matt Arnold.

The other day at the coffee shop I saw a petition to enforce a global moratorium on genetically modified food. Accompanying it was a newsletter about what it labeled "Frankenfood." The purported scientific facts behind these concerns lose much of their credibility when the appeal is made to an automatic, default distrust of technology. There are plenty of valid reasons to be concerned about where technology is taking us, especially about whose hands it is currently in, without a knee-jerk reaction. Keep in mind that I am not classifying all caution about technology as biochauvinism. Biochauvinism is a form of prejudice that trusts and favors whatever is naturally occurring and unintentional, given by biology, as if it could not be improved upon, and distrusts any product or practice simply because it was deliberately engineered to solve a problem.

I'm a fan of the Spider-Man series of films, but I detect biochauvinism in them. Why is it that technologists are the bad guys in Spider-Man, while the good guy has been made purely biological? In the comic-book version of the story, Spider-Man invented web-shooting cuffs, a bit of heroic technology saving the day. In the film version, his webbing comes straight from his wrists. (He can even experience projectile dysfunction!) His transformation into a superhuman is somehow sanctified by its unintentionality.

Some of the more extreme opponents of technology seem to advocate that we relinquish manipulation and control, of which the metal limbs of Doctor Octopus could be considered iconic. Perhaps the scriptwriter should have made him the inventor of GM crops. But relinquishing the manipulation and control of outcomes is the only sure way to fail at achieving the outcomes we desire. Such relinquishment is neither natural nor pro-human. The nature of the human species is technological manipulation and control, just as the nature of beavers is to build dams.

In The Demon-Haunted World, Carl Sagan complained about how the media mis-portrays technologists. Quote: "'I'm sorry, Dr. Nerdnik, the people of Earth will not appreciate being shrunk to 3 inches high, even if it will save room and energy...' The cartoon superhero is patiently explaining an ethical dilemma to the typical scientist portrayed on Saturday-morning children's television. Many of these so-called scientists... are moral cripples driven by a lust for power or endowed with a spectacular insensitivity to the feelings of others." Unquote.

Spidey stories have always been a prime example. Ock wasn't driven by revenge as Norman Osborn was. His desire to give the world an infinite energy source was more noble than the Green Goblin's war-machine project. He is the self-made man on both the literal and the symbolic level. What distinguishes human nature from beaver nature is our ability to question our own instinctive animal drives. Ock's lab accident happened because he was myopically goal-oriented and wouldn't doubt his own genius. When he accidentally melded his mind with A.I., those were the only personality traits not submerged. The story's comic-book style depicts him not as hateful, but as the stereotypical emotionally-impaired smart person.

At least that characterization is a plausible outcome of the specific circumstances depicted, but the reason is not what you might think. He can no longer be considered the same species; he is modified beyond what nature endowed in our genetics, because before a brain could incorporate additional motor skills and visual input -- much less hold two-way communion with four artificial intelligences -- it would have to be so fundamentally altered that it would arguably belong to a new species. However, his new species is not, in and of itself, what went wrong, and had that been what turned him evil, the screenwriters would have been mere hacks. In the show notes I'll link to Professor Kevin Warwick of the University of Reading, the world's "first cyborg". It would take a great deal of biochauvinism to declare that Professor Warwick is somehow less of a person after cyborging himself.

Extending the body which one's mind controls is one thing; expanding one's mind by merging it with a machine mind is quite another. It would require much more care, and it is a deeply philosophical problem. That is where Otto Octavius went wrong. When the four artificial intelligences installed in the four arms began to interface with his mind, instead of his mind exclusively interfacing with them, he encountered what artificial intelligence researchers call the "friendly AI problem." In the show notes I'll link to the Friendly AI guidelines published by the Singularity Institute for Artificial Intelligence. The friendly AI problem asks how to build an artificial mind in such a way that it will be benevolent. How do you build benevolence? I would find it deeply desirable to deliberately augment the deficiencies of my own flesh and grey matter in the way that Octavius did... if it weren't for the whole sociopath thing. Morality software will be a prerequisite before I install any interface that allows a machine to interface with my mind instead of the other way around.

And yet let's stop and consider our own extended minds here in 2007. Every product and service Google offers is the outcome of millions of minds creating links and taking other actions on the internet. Is Google a vast, distributed extension of you? It certainly seems alive sometimes. It's almost like Google is the nearest thing we've ever had to an actual deity, one that will answer when we pray to it. This inspired www.thechurchofgoogle.org.

But going beyond a humor site, think about this. Those of us who spend a lot of time on the web or with personal digital assistants offload some of our memory digitally. It's a prosthetic device, like a wooden leg or a pacemaker, except that instead of restoring lost function it expands our natural function. Using a mental prosthetic actually alters the flow of thought. The first reaction is often not to stop and think, but to click search. I no longer remember appointments, because my phone reminds me and does a far better job than I ever did. We have now made technology an extended part of our thought and memory functions.

This is not totally new, since paper has long served as a mental prosthetic. What's new is that the bandwidth between mind and machine has improved so much that it's increasingly difficult to tell where one ends and the other begins. How long until you recognize an exoself, a distributed you, working in symbiosis through a pocket device with the you in your head? And then how long until the pocket device is the size of a dime, fits under your skin, and completes the process you began years ago? If Google turns into a global collective subconscious, the problem of feedback from the system to you will need to be taken a little more seriously.

You need to keep examining yourself and deliberately decide what you want to be, not just let it happen unconsciously. But hasn't that been the same philosophical challenge every individual has needed to face for thousands of years? To relinquish your own manipulation and control of yourself would be to blindly allow your genes and environment to lead you around by the nose. Appropriate human self-modification needs to give us more conscious and thoughtful self-control, not take it away.

Speaking of feedback, please send yours to Jason Rennie at thesciphishow@gmail.com (that's "Phi" with a PH) and I'll be sure to get it. The Suspension of Belief segment of The Sci Phi Show is copyright Matt Arnold 2007, released under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 license. The music is provided by The Precursors Project at www.medievalfuture.com/precursors. Until next time, keep on self-examining, and keep on self-modifying!

Comments


users on Jul. 9, 2007 1:05 PM

Excellent. This is my favorite so far! Great points.
