
No Left Turns

In favor of stupid computers

I like this statement by Douglas R. Hofstadter (the rest of his interview is here):

"Am I disappointed by the amount of progress in cognitive science and artificial intelligence in the past 30 years or so? Not at all. To the contrary, I would have been extremely upset if we had come anywhere close to reaching A.I. — it would have made me fear that our minds and souls were not deep. Reaching the goal of A.I. in just a few decades would have made me dramatically lose respect for humanity, and I certainly don’t want (and never wanted) that to happen.

I am a deep admirer of humanity at its finest and deepest and most powerful — of great people such as Helen Keller, Albert Einstein, Ella Fitzgerald, Albert Schweitzer, Frederic Chopin, Raoul Wallenberg, Fats Waller, and on and on. I find endless depth in such people ... and I would hate to think that all that beauty and profundity and goodness could be captured — even approximated in any way at all! — in the horribly rigid computational devices of our era.

Do I still believe it will happen someday? I can’t say for sure, but I suppose it will eventually, yes. I wouldn’t want to be around then, though. Such a world would be too alien for me. I prefer living in a world where computers are still very, very stupid. And I get a huge kick out of laughing at the hilariously unpredictable inflexibility of the computer models of mental processes that my doctoral students and I codesign. It helps remind me of the immense subtlety and elusiveness of the human mind."

Discussions - 10 Comments

In the context of the post Julie has below, about the quantities of data that "cyberspace" holds on each of us, I am really grateful that computers do not have artificial intelligence. Although, like most science fiction, I am presuming that artificial intelligence would be malevolent. It might not be. It might conveniently forget what I would rather it did not remember. Human intelligence can be forgiving; maybe the A.I. kind can be, too?

Of course, Julie personifies "cyberspace" in her piece, as if cyberspace does those things to us. It doesn't. People do such things, not computers, which are just complex machines. Couldn't Hofstadter's students design their computers to be benevolent? If we ever achieve A.I., we will probably be stuck with computers that are merely as humane as their own creators. Reading that article, the idea of a computer creator who admires humanity at its finest is a comfort, but when I consider most of humankind, and that other computer creators might be of that sort, A.I. is a scary thought.

Let us assume that Artificial Intelligence implies self-awareness for computers. Self-awareness is, apart from posing the danger that science fiction authors are fascinated with, problematic for us. I explored the topic extensively last year writing a paper on Philip K. Dick's Do Androids Dream of Electric Sheep? and its film counterpart, Blade Runner. I also pulled material from I, Robot, which deals predominantly with human compassion and our inability to pass it on to a rigid, complex machine that operates solely on logic.


The issue I am more concerned about is not what Artificial Intelligence will be like--whether it will turn against us or continue to serve us. I am more concerned with the questions it will raise among us, primarily: what does it mean to be human? This is why I've come to greatly enjoy the new edition of the series Battlestar Galactica, having watched the first few seasons over the course of the last year with some friends and watching the final season on television right now. The series is about Artificial Intelligence, the Cylons, becoming self-aware and rebelling against the human race. The humans and robots fight a war, reach an armistice, and split up from each other, not talking to one another for decades. Then, suddenly, the Cylons reappear and launch an attack that nearly wipes out the human race, dropping the population from tens of billions to fewer than 50,000 stragglers surviving in a ragtag space fleet. The difference in this war is that the artificial intelligence has managed to create several human models, making themselves completely human in physical form. The only difference is that when an AI's body dies, its "consciousness" is transferred to a new body waiting in storage.


So, the predominant question throughout the whole series is: what does it mean to be human? Technology advances exponentially, and the AI moved from being merely self-aware machines to figuring out how to completely mimic humans physically (and emotionally)--something not so "sci fi" given our recent advances in cloning and embryo research (they are even able to sexually reproduce, with children lacking the "immortality of consciousness" aspect of the robots). Humans in the series become torn, many seeing the AI as mere "toasters," machines that can be destroyed. But humans who interact with them see them as remarkably similar to themselves. The robots themselves, being intelligent and self-aware, find themselves wrestling with who they are, some of them voluntarily cutting themselves off from their "immortality" and ability to download into new bodies so that they can embrace their "humanity."


If we create artificial intelligence, if machines become self-aware, if they gain the ability to reason like human beings (and not like mathematically logical machines)--something I believe to be more and more possible--the danger will not be them "rebelling" against us or being unforgiving. I think the danger will be human beings losing a sense of the wonder that Hofstadter speaks of, and our race having an identity crisis like no other.

Kate, I think the danger is that "cyberspace" (personified as it has become) allows real persons to do malevolent things they might not otherwise consider doing if the object of their tortures were alive to them in something other than the ether. It has the potential--as Caldwell noted in his piece--to become gossip writ large. In places such as "myspace" it has already achieved this dubious distinction. There, it makes cowards bold in their cowardice. The danger with A.I. is likely to be similar to this.

I loved Blade Runner and the questions it raised about being human. However, we associate intelligence with our emotions, and we cannot know whether the two are really connected--whether a robotic being, an android, would "feel" murderous or defensive simply because it carries intelligence.

Viruses are machine-like in that they are not exactly organic. I read some story (and I apologize for my bad old memory) that suggested that we must think of aliens of any sort as if they were a viral infection. They must be eradicated for our health. It doesn't matter what they are, and we can't be sympathetic to something that will kill us--or, well, it will kill us.

R.O.B., you make me want to watch B.G., which tears at me, as I have a lot of work to do. You are not the first person who has told me about that series. Still, I wonder if self-awareness is the same as emotion. I feel a vested interest in the uniqueness of human beings.

Good and interesting post, R.O.B. Beware the University of Alberta... Actually, I would have been in the mood to tackle this post, but I have been having spectacular luck catching largemouth bass. Ever notice how leisure will completely change your thinking? (Which brings me back to thinking about thinking.) Which is why I play no-limit poker and read Hegel... My basic take on AI is that if a problem can be transposed to fit within certain parameters--basically, if there are absolute rules to the game--then in games with rules, or in situations that are mathematically proven, AI is superior. So AI dominates blackjack, checkers, chess, limit poker, and stock trading systems like CRT created by folks like Mark Richie...
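(To make that "absolute rules" point concrete, here is a minimal sketch--my own illustration, not anything from the comment--of why fixed-rule games fall to machines: a brute-force minimax search plays tic-tac-toe perfectly, and the same idea, scaled up with pruning and evaluation functions, is roughly what conquered checkers and chess.)

# A toy illustration: when a game has absolute, fixed rules and a small state
# space, a computer can play it perfectly by exhaustive search alone.
# Minimax over tic-tac-toe; 'X' maximizes the score, 'O' minimizes it.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for the side to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    if ' ' not in board:
        return 0, None                      # board full: draw

    best_score = -2 if player == 'X' else 2
    best_move = None
    for i, cell in enumerate(board):
        if cell != ' ':
            continue
        board[i] = player                   # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[i] = ' '                      # ...then undo it
        if (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, i
    return best_score, best_move

if __name__ == '__main__':
    # X to move, with X on squares 0, 2, 3 and O on squares 1, 4.
    board = list("XOXXO    ")
    print(minimax(board, 'X'))              # (1, 6): X wins by completing 0-3-6
    # Running the same search from the empty board shows perfect play is a draw.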

Julie, you are quite right. Cyberspace gives nasty people the means to express themselves in unkind and nasty ways. But I was also thinking that we might have just as much to fear from the well-meaning around us. I relate this to the post about the Japanese government keeping track of people's weight in the interest of keeping costs down in their national health system. It is for the good of the people, of course. God save us from the good and well-intentioned around us who feel they must hurt us for our own good and the good of society. If I think in those terms, then, yes, even benevolent A.I. could be simply awful.

Interesting that you were thinking that, Kate... that was the post I commented on second after this one... but I think I disagreed with you. I think the Japanese system is the best system if you are going to have something like socialized medicine. The Japanese have a sort of honor ethic that prevents welfare from being abused; it is dishonorable to believe that the government should come in and bail you out if you aren't at least making an effort. I call it prussianized (let's see AI figure out my meaning), but you could call it Japanese... As it turns out, I was in both systems, as an assembly line worker at Honda of America and in the U.S. Army, and neither suits my carefree disposition. By no means do I believe that either institution is perfect, but both demand discipline and share a similar corporate culture; both corporate climates require a certain level of commitment, a certain sense of honor and dishonor. Now call me crazy, but I am still not over comparing Obama to Hegel, or Hegel to the U.S. Army and Japanese honor/warrior culture. I honestly get the sense that Obama is in favor of a lot of welfare programs, in favor of a lot of room for government action, but he is also very much market oriented, if by market oriented you mean culture oriented (the market being just a mechanism for the satisfaction of cultural desires).

By the way, Obama explicitly says in The Audacity of Hope that Americans would not stomach government controls on diet, but in the overall context of the chapter he is talking about matching government up to culture, and discussing why liberals are wrong to downplay "values" and the culture war. The essence of the chapter is really that a virtuous person is the product of a complex interplay between dispositions, good laws, and good government.

In essence I take Obama to be someone who is market oriented insofar as he seems to adhere rather strongly to the proposition that there is no such thing as a free lunch: if citizens have rights, then they also have duties. Maybe Obama does buy into Chicago economics... that is saying too much, but the key proposition is that government and the people exist in a mutual relationship to virtue; Obama's audacious hope in the American people is also an audacious hope in American government.

"I still think you're wrong," he said, "but at least it seems like you've thought about it. Hell, you'd probably disapoint me if you agreed with me all the time."

"Thanks," I said. As he walked away, I was reminded of something Justice Louis Brandeis once said: that in a democracy, the most important office is the office of citizen."

I don't disagree that with socialized medicine the state has to keep costs down by making everyone stay healthy. I do think that, and for just the reason you mention. I just do not want to live with that well-intentioned, prussianized nanny state watching my waistline. I'd rather govern myself, thank you.

His "health care as moral obligation of government" is one reason I do not like Obama. He can have an audacious hope in government as the answer to every problem. I can have my audacious hope that the American people will virtuously decide to lean toward self-government. Health police, no matter how well-intentioned are more authoritarianism than I care to live with.

Here is something. My daughter-in-law takes my granddaughter to the county health department for well-baby clinics. There, she was told that Emily is obese. The child is not obese. (I'll send you a picture, if you like.) She is in the 95th percentile in height for her age. She is four years old and is as tall as the average seven-year-old. She is of average weight for an average seven-year-old, but she is only four, and so she is overweight for a four-year-old. She does not fit government parameters for the age/weight charts, so my daughter-in-law is supposed to put the child on a diet. I have told Emily to stop growing, but she does not listen.

By your plan on the other post, must she meet the federal government's specs for four-year-olds? I know you say the program is mandatory up through high school, but if she doesn't fit government guidelines, then what?

Maybe I lack compassion, but I do not see federal government health care as a moral obligation of government. There are some things government can do, some things it can't do well, and some things it ought not touch at all because it just mucks them up. I think the last is what is wrong with American health care.

If it comes to logging our weights, Kate, I am hopeful that this could be the tipping point where American women finally begin to see the value of liberty over security. Perhaps this attempted invasion of our privacy could be a boon to our liberties, as it may be the one thing that causes women finally to stop calling for more creeping authoritarianism from government in medical care. Why do I say so? Well, I have a story too:

One of my mother's good friends (we'll call her Carol) was going aboard a private airline with her husband. These were very small planes, and they had a strict weight limit. Before boarding, the airline required each passenger to report his or her weight to the stewardess. They had to do this in a very public way . . . I mean, everyone could hear what they said. So Carol grabbed her husband and whispered into his ear, "Bob . . . you have to add 30 pounds to your weight." "Why?" Bob asked (a little confused . . . but, after so many years of marriage, not at all astonished by his wife's request). "You add it because I'm going to lie, and I know all these other women are lying too," said Carol. So, like all the women on that airplane, Bob told a lie that day. No wonder he's so successful in his marriage.

Julie, I can see the headline on Drudge, "Airlines Force Marital Deceptions of Collective Honesty". The new amazing and awful airport screening machines ought to have weight estimators in them, or why not put a scale in the floor of those booths we walk through?

But I am sure you are right about that tipping point. My fifteen-year-old daughter is taking her first airplane trip, to Kenya. She is packing now and wants to take her hard-bound Harry Potter books to re-read on the journey. She carries books the way other kids take stuffed animals, for comfort. I discussed the weight issues with her: the airline has weight restrictions on passenger luggage, her suitcase can only weigh so much, and there are fines for excess. (By her response, you would think I had asked her to leave her brassieres behind.) Then I mentioned a plan I had read about that would charge passengers extra on their fare per excess pound. Or perhaps even charge BY the pound to figure your fare, though how that would work in the ticketing process I cannot imagine. My little girl is a pretty big woman. (You can use "pretty" in both senses of the word and it applies.) She was outraged and appalled. How can they even ask? I know she would lie. I absolutely know it. She did it yesterday at the travel medicine clinic where we got her shots for the trip, writing down an estimated weight 20-30 pounds less than the truth. They weighed her and knew the truth. I told her they would, but she could no more bring herself to honestly estimate her weight, even in that fairly private public way, than your mother's friend could.

If people lie on their income taxes, the penalties can be terrible, even in our "voluntary" tax system. A difference of opinion with the government in a tax matter can send you to jail. When the government gets involved in our health, where will that take us? Are government-mandated weight-loss penitentiaries too much?
