
Herbert Simon
Source: Omni, Jun94, Vol. 16 Issue 9, p70, 8p, 1 chart, 2bw
Stewart, Doug & Clark, Rob
January 1956: Ike was in his first term in the White House and electric typewriters were a luxury when Herbert Simon strolled into a mathematical-modeling class he was teaching at Pittsburgh's Carnegie Tech and announced he'd built a machine that could think. Simon, with two colleagues, had created what is now regarded as the first artificial-intelligence (AI) program. In finding proofs for logical theorems, it automated a task that hitherto only human logicians had been smart enough to perform. But to the future Nobel laureate, his program's most important proof was something far grander: proof the human brain wasn't so special after all.
Still teaching at what is now Carnegie-Mellon University, Simon is an academic jack-of-all-trades: computer and social scientist, cognitive psychologist, and philosopher. To Edward Feigenbaum, an AI pioneer at Stanford University, "Herb Simon is first and foremost a behavioral scientist. His genius lies in cutting through the immense complexities of human behavior to build elegantly simple models that work, that explain the data. He might well be the greatest behavioral scientist of the twentieth century."
Fast talking and combative at 77, Simon remains an unapologetic "left winger" in the AI world he helped found. Brusque to the point of arrogance, he insists that everything a brain does can be adequately explained in terms of information processing. A computer, he argues (and Simon argues a lot), could do these things just as well.
Herb Simon has always argued. His first publication, in grade school, was a letter to the editor in the Milwaukee Journal defending atheism. A civil libertarian and New Deal Democrat, he's been known to dampen conversations at dinner parties by asking guests whether they'd prefer having real children or disease-resistant AI programs that were otherwise identical. He doesn't take criticism well, he confesses, nor is he gracious in defeat: the sort of chess player who'll lose a game, then tell his opponent the next day he'd have won but for a single move.
Until the mid Fifties, Simon was an economist and political scientist. His 1978 Nobel Prize was in economics. He helped push conventional economics beyond neat (and accurate) supply-and-demand charts and toward the real-world complexity of psychology and behavioral science. His theory of "bounded rationality" subverted the classical view that organizations always make decisions that maximize profits and that, more broadly, individuals always pick the best choice among numerous alternatives. Instead, he observed, people are saddled with too much information and not enough brain power. As a result, whether setting prices or playing chess, they settle for the first choice that's "good enough." In Darwinian terms, it's survival of the fitter.
Despite Simon's nominal shift to AI and cognitive science 40 years ago, the central question underlying all of his research has never changed: How do people make decisions? His explorations of how people wade through a mass of information by making a series of split-second decisions, like a person playing Twenty Questions, led him logically to computers. What tool could better test his theories than programs that mimicked a human's search-and-select strategies?
Unlike many of his peers, Simon isn't interested in electronic superbrains. The human brain is obviously limited in how fast and how capably it can handle information. So Simon scrupulously builds into his artificial systems those same limitations. Computers for him are merely means to an end: understanding how a brain can think.
For his first interview with Omni's Doug Stewart, Simon wore a crisp blue Mao jacket, a souvenir of a trip to China. A self-confident man, he is voluble and unrepentant about his many past pronouncements. To anyone who would challenge his assertion that creativity can be automated, he points to his office walls which are dressed up with computer-made figure drawings. Although he evidently admires the drawings, he also finds them useful as exhibits A, B, and C when making his case to skeptical visitors.
Omni: So you believe computers think?
Simon: My computers think all the time. They've been thinking since 1955 when we wrote the first program to get a computer to solve a problem by searching selectively through a mass of possibilities, which we think is the basis for human thinking. This program, called the Logic Theorist, would discover proofs for a theorem. We picked theorems from Whitehead and Russell's foundational work in logic, Principia Mathematica, because it happened to be on my shelf. To prove a theorem, a human mathematician will start with axioms and use them to search for a proof. The Logic Theorist did quite a similar search, we think, to end up with a proof, when it was lucky. There were no guarantees it would find one, but there are none for the human logician either.
A year or two later, we embodied these ideas in the General Problem Solver, which wasn't limited to logic. Given a problem like, "How do I get to the airport?" it starts with, "What's the difference between where I want to be and where I am now? That's a difference in location, one of 20 miles. What tools do I know that reduce differences like that? You can ride a bike, take a helicopter or taxi. If I pick a taxi, how do I find one?" Again, GPS asks, "How do you get taxis? You telephone them." And so on.
Every time you set up a problem, it thinks of some method or tool already stored in memory that can remove the difference between where it is and where it wants to be. Each tool requires that certain conditions be met before that tool can be applied, so it then searches its memory for a tool for doing that. Eventually, it finds one it can apply: You call the taxi, it comes, you get in it, and the first thing you know you're delivered to the airport. Notice GPS doesn't try everything-not walking or a helicopter. It knows all sorts of things about walking or helicopters that help it decide they don't work in this situation.
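The loop Simon describes is classic means-ends analysis: find the difference between where you are and where you want to be, pick a tool that removes it, and satisfy that tool's own conditions first. A minimal sketch in Python, with operators and a taxi scenario invented purely for illustration (the real GPS was far more general):

```python
# A minimal means-ends-analysis sketch in the spirit of GPS.
# The operator names, preconditions, and effects are invented examples.

OPERATORS = [
    {"name": "call-taxi", "pre": {"have-phone"}, "adds": {"taxi-here"}},
    {"name": "ride-taxi", "pre": {"taxi-here"}, "adds": {"at-airport"}},
]

def plan(state, goals):
    """Return operator names that remove each difference between
    `state` and `goals`, satisfying preconditions recursively first."""
    state, steps = set(state), []
    for goal in goals:
        if goal in state:                     # no difference here
            continue
        # find a tool stored in "memory" that removes this difference
        op = next(o for o in OPERATORS if goal in o["adds"])
        steps += plan(state, op["pre"])       # subgoal: meet the tool's conditions
        state |= op["pre"] | op["adds"]       # assume the subplan succeeded
        steps.append(op["name"])
    return steps

print(plan({"have-phone"}, {"at-airport"}))  # ['call-taxi', 'ride-taxi']
```

Note how the airport goal is never attacked directly: the planner backward-chains through "ride-taxi" to the subgoal "taxi-here" and then to "call-taxi", just as in Simon's telling.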
Omni: Did you tell Bertrand Russell, Principia's surviving author, what you had done with Logic Theorist?
Simon: Yes, and he wrote back that if we'd told him this earlier, he and Whitehead could have saved ten years of their lives. He seemed amused and, I think, pleased.
Omni: Wouldn't most people feel demeaned that a computer-a primitive one by today's standards-could do what they'd devoted ten years of their lives to?
Simon: You know, sometimes I feel terribly demeaned that a horse can run so much faster than I can. But we've known for a long time that there are creatures bigger, stronger, and faster than we are.
Omni: But Principia Mathematica was a celebrated cerebral accomplishment, nothing like an animal's brawn!
Simon: It's true that thinking seems a peculiarly human capability, one we're proud of. Cats and dogs think, but they think little thoughts. Why should it be demeaning to us to try to understand how we do something? That's what we're really after. How's thinking done? The farther we go in understanding ourselves, the better off we are.
Still, people feel threatened whenever the uniqueness of the human species is challenged. These kinds of people made trouble for Copernicus and Galileo when they said the earth wasn't the center of the universe, for Darwin when he said maybe various species descended from a few ancestors. I don't know that anybody's been hurt by our not being in the center of the universe, although there are some who continue to lose sleep about Darwin. We'll get used to the fact that thinking is explainable in natural terms just like the rest of our abilities.
Omni: A program you worked on in the Seventies rediscovered Kepler's third law of planetary motion. How?
Simon: We called it BACON, in honor of Sir Francis, because it's inductive. Kepler in the seventeenth century knew the distances of the planets from the sun and their periods of revolution. He thought there ought to be a pattern to these numbers, and after ten years he found it. We gave BACON the same data and said look for the pattern. It saw that when the period got bigger, the distance got bigger. So it divided the two to see if the ratio might be constant. That didn't work, so it tried dividing the distance again. That didn't work either. But now it had two ratios and found that as one got larger, the other got smaller. So it tried multiplying them: maybe their product was a constant. And by golly, it was. In three tries, BACON got the answer.
Omni: A lucky guess!
Simon: It wasn't luck at all. BACON was very selective in what it looked at. If two quantities varied together, it looked at their ratio. If they varied in opposite directions, it looked at their product. Using these simple heuristics, or rules of thumb, it found that the square of a planet's period over the cube of its distance is a constant: Kepler's third law. Using those same tricks, BACON found Ohm's law of electrical resistance. It will invent concepts like voltage, index of refraction, specific heat, and other key new ideas of eighteenth- and nineteenth-century physics and chemistry, although, of course, it doesn't know what to call them.
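BACON's two heuristics are concrete enough to run. The sketch below applies them to modern values of planetary distances (in astronomical units) and periods (in years); the data, variable names, and tolerance are my choices for illustration, not the original program's:

```python
import statistics

# Distances from the sun (AU) and orbital periods (years), Mercury..Saturn.
D = [0.387, 0.723, 1.000, 1.524, 5.203, 9.537]
P = [0.241, 0.615, 1.000, 1.881, 11.862, 29.457]

def constant(vals, tol=0.02):
    """BACON's stopping test: is every value within tol of the mean?"""
    m = statistics.mean(vals)
    return all(abs(v / m - 1) < tol for v in vals)

r1 = [d / p for d, p in zip(D, P)]       # D/P: falls as D grows -> not constant
r2 = [d * d / p for d, p in zip(D, P)]   # D^2/P: rises as D grows -> not constant
# one ratio rises while the other falls, so the heuristic says: try their product
prod = [a * b for a, b in zip(r1, r2)]   # D^3/P^2, Kepler's third law

print(constant(r1), constant(r2), constant(prod))  # False False True
```

With distances in AU and periods in years the product comes out near 1.0 for every planet, which is exactly the invariant Kepler spent a decade hunting.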
This tells you that using a fairly simple set of rules of thumb allows you to replicate many first-rank discoveries in physics and chemistry. It thereby gives us an explanation, one that gets more convincing every time BACON gives us another example, of how people ever made these discoveries. It gets rid of these genius theories and tells us this is a normal process. People have to be intelligent, but their discoveries are not bolts from the blue.
Omni: Why are rules of thumb so important for computers and humans?
Simon: Take something limited like a chessboard. Every time you make a move, you're choosing from maybe 20 possibilities. If your opponent can make 20 replies, that's 400 possibilities. The 20 replies you can then make get you 8,000 possible positions. Searching through 8,000 things is already way beyond a human's limits, so you limit your search. You need rules to select which possibilities are good ones. To exhaust all the possibilities on a chessboard, a player would have to look at more positions than there are molecules in the universe. We have good evidence that grand masters seldom consider more than 100 possibilities at once.
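The arithmetic behind that explosion, and the payoff of selective search, fits in a few lines (the branching factor of 20 and the "keep 2 promising moves" figure are just the round numbers from the discussion):

```python
# About 20 legal moves per chess position, compounded per ply.
for plies in (1, 2, 3):
    print(plies, 20 ** plies)   # 20, then 400, then 8000

# A selective player keeping only, say, 2 promising moves per position
# reaches the same depth while examining vastly fewer lines:
print(2 ** 3)                   # 8 positions, versus 8,000
```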
Omni: You and Allen Newell wrote the world's first chess program in the Fifties. How well did it play?
Simon: Not well. Hubert Dreyfus, in his book What Computers Can't Do, seemed pleased that it was beaten by a ten-year-old kid. A pretty bright one, I should add. Shortly after Dreyfus observed that, he was beaten by Greenblatt's machine at MIT, but that's a different story. Later in the Sixties, George Baylor and I built MATER, a program specializing in mating situations, going in for the kill. Its criteria tested whether a given move was powerful and explored only those, never looking at more than 100 choices. Chess books report celebrated games where brilliant players made seemingly impossible mating combinations, looking eight or so moves deep. MATER found most of the same combinations.
Omni: It had the same insight as the human champion, so to speak?
Simon: You don't have to say "so to speak"! It had the same insight as a human player. We were testing whether we had a good understanding of how human grand masters select their moves in those situations. And we did.
Omni: You talk about a string of serial decisions. Don't grand masters get a chessboard's gestalt by seeing its overall pattern?
Simon: A Russian psychologist studying the eye movements of good chess players found that grand masters looked at all the important squares in the first five seconds and almost none of the unimportant ones. That's "getting a gestalt of a position." We wrote a little computer program that did this following a simple rule. For starters it picked the biggest piece near the center of the chessboard, then the program found another piece it either attacked or defended. Then the program would focus on the second piece and repeat the process. Lo and behold, it quickly looked at all the important squares and none of the unimportant ones. Show me a situation where ordinary cue-response mechanisms-call them intuitions if you like-can't reproduce those gestalt phenomena!
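The rule Simon sketches can be mimicked on a toy position: fixate the most valuable piece nearest the center, then keep jumping along attack-and-defense relations. The position, piece values, and relation table below are wholly invented for the example; the original program derived the relations from a real board.

```python
# Toy sketch of the scanning rule: biggest piece near the center first,
# then follow attack/defense links. All data here is made up.

PIECES = {                   # name -> (material value, distance from center)
    "Qd4": (9, 0.5), "Rf6": (5, 1.8), "Nc3": (3, 1.2), "Pa7": (1, 3.9),
}
RELATIONS = {                # which pieces each piece attacks or defends
    "Qd4": ["Rf6"], "Rf6": ["Nc3"], "Nc3": [], "Pa7": [],
}

def scan(pieces, relations, fixations=3):
    # start with the highest-value piece, ties broken by centrality
    current = max(pieces, key=lambda p: (pieces[p][0], -pieces[p][1]))
    seen = [current]
    for _ in range(fixations):
        targets = [t for t in relations[current] if t not in seen]
        if not targets:
            break
        current = targets[0]
        seen.append(current)
    return seen

print(scan(PIECES, RELATIONS))  # ['Qd4', 'Rf6', 'Nc3'] -- Pa7 never fixated
```

The pawn on a7, connected to nothing important, is never looked at, which is the "important squares only" behavior Simon reports.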
Omni: But can't good players see several pieces at a glance?
Simon: Experiments on perception show we take in all our visual information in a very narrow area. And there's something else: A colleague, Bill Chase, and I did experiments where we took the board of a well-played game after the twentieth move, say, and let chess players look at it for five seconds. A grand master will reproduce the board almost perfectly, maybe 24 or 25 pieces correct. A weekend player will get six or seven correct. You say, "Grand masters have great vision, don't they?"
Now put the same 25 pieces on the board but completely at random, with no regard for the rules of chess. Again, the ordinary player puts six or seven pieces back. This time, the grand master puts six or seven pieces back, maybe one more. Clearly, what the grand master is seeing isn't pieces, but familiar patterns of pieces: a fianchettoed castled-king position or whatever. It's an act of recognition, just as you'd recognize your mother coming down the street. And with that recognition comes all sorts of information.
A grand master can play chess with 50 partners, moving from board to board every few seconds, and at the end of the evening, he's won 48 of the games. How? He doesn't have time to look ahead, so he looks for cues. He plays ordinary opening moves, hardly looking at the board until he notices an opponent has created a situation he knows is an error. He recognizes it as a feature on the chessboard, just as a doctor sees a symptom and says, "Oh, you've got the measles." The grand master says, "A doubled pawn! He's in bad trouble."
Omni: You've argued that empirical knowledge, not theoretical postulates, must guide computer-system design. Why? What's the matter with theory?
Simon: It's claimed that you can't have an empirical computer science because these are artificial objects: therefore, they're whatever you make them. That's not so. They're whatever you can make them. You build a system you hope has a certain behavior and see if it behaves that way. In computer science, the only way we'll know what assumptions to start with is through experience with many systems. Humans are at their best when they interact with the real world and draw lessons from the bumps and bruises they get.
Omni: Is this analogous to objections you voiced to classical economics early in your career?
Simon: It certainly is. Economists have become so impressed with what mathematics has done for physicists that they spend much of their time building big mathematical models and worrying about their rigor. This work usually proves fruitless, because they're allowed to sit down in an armchair and put any kind of crazy assumptions they want into those models.
Not inconsequentially, I started out in political science, not economics. Political scientists have a deep respect for facts: going out and observing, which I did a lot of. When I was 19, I did a study of how people working for the Milwaukee city government made budget decisions-how they chose between planting trees and hiring a recreation director. That work led to my Ph.D. thesis and first book, Administrative Behavior, in the late Forties.
Classical economic theory assumes that decision makers, whether groups or individuals, know everything about the world and use it all to calculate the optimal way to behave. Well, in the case of a firm, there are a zillion things that firm doesn't know about its environment, two zillion things it doesn't know about possible products or marketing methods that nobody's ever thought of, and more zillions of calculations it can't make, even if it had all the facts needed to dump into the calculations. This is a ridiculous view of what goes on.
To go into a firm and evaluate the actual decision-making process, you must find out what information they have, what they choose to focus on, and how they actually process that information. That's what I've been doing all these years. That's why my AI work is a natural continuation of what I did earlier in economics. It's all an attempt to see how decision making works: first at the individual level (how is it possible to solve problems with an instrument like a human brain?) and then at the group level, although I've never gotten back to that level.
Omni: You've said human decision makers, instead of making the "best choice," always settle for "what's good enough." Even in choosing a spouse?
Simon: Certainly. There are hundreds of millions of eligible women in the world at any given time. I don't know anybody who's gone the rounds before making the choice. As a result of experience, you get an idea of which women will tolerate you and which women you will tolerate. I don't know how many women I looked at before I met my wife. I doubt it was even 1,000. By the way, I've stayed married for 56 years.
Omni: Congratulations. Why did you shift from economics to AI and cognitive psychology?
Simon: I looked at the social sciences as fresh territory. They needed a good deal more rigor, so I studied applied mathematics and continued to study it even after I left the university. In economics, you can always turn prices and quantities into numbers, but how do you add rigor to concepts in political science like political power and natural language?
I saw the limits of using tools like differential equations to describe human behavior. By chance, I'd had contact with computers almost from the time they were invented in the Forties, and they fascinated me. In the early Fifties, at a West Coast think tank called the Rand Corporation, I'd seen Allen Newell and Cliff Shaw using a computer to superimpose pictures of planes flying over a map. Here was a computer doing much more than cranking out numbers; it was manipulating symbols. To me, that sounded a lot like thinking. The idea that computers could be general-purpose problem solvers was a thunderclap for me. I could use them to deal with phenomena I wanted to talk about without turning to numbers. After that, there was no turning back.
Omni: But beneath these symbolic representations, isn't a computer just crunching numbers?
Simon: [loudly] No, of course the computer isn't! Open up the box of a computer, and you won't find any numbers in there. You'll find electromagnetic fields. Just as if you open up a person's brain case, you won't find symbols, you'll find neurons. You can use those things, either neurons or electromagnetic fields, to represent any patterns you like. A computer couldn't care less whether those patterns denote words, numbers, or pictures. Sure, in one sense, there are bits inside a computer, but what's important is not that they can do fast arithmetic but that they can manipulate symbols. That's how humans can think, and that's the basic hypothesis I operate from.
Omni: Are there decisions you'd never leave to a computer, even an advanced future machine?
Simon: Provided I know how the computer is programmed, the answer is no. Years ago, when I flew a great deal, and particularly if I were landing at La Guardia on a bad day, I'd think, I hope there's a human pilot on board. Now, in similar weather, I say, "I hope this is being landed by a computer." Is that a switch of loyalty? No, just an estimate that computers today have advanced to the point where they can land planes more reliably than humans.
Omni: Would you let a computer be the jury in a criminal trial?
Simon: Again, I'd want to know what that computer knew about the world, what kinds of things it was letting enter into its judgment, and how it was weighing evidence. As to whether a computer could be more accurate in judging a person's guilt, I don't lack confidence that it could be done. Standardized tests like the Minnesota Multiphasic [Personality] Inventory can already make better predictions about people than humans can. We predict how well students will do at Carnegie-Mellon using their high-school test scores and grade-point averages. When you compare those predictions with the judgments after an interview, the tests win every time.
Omni: Is creativity anything more than problem-solving?
Simon: I don't think so. What's involved in being creative? The ability to make selective searches. For that, you first need knowledge and then the ability to recognize cues indexed to that knowledge in particular situations. That lets you pull out the right knowledge at the right time. The systems we built to simulate scientific or any kind of creativity are based on those principles.
Omni: What about an artist's ability to create something beautiful?
Simon: Like a painting? Harold Cohen, an English painter at the University of California at San Diego, wanted to understand how he painted, so he tried writing a computer program that could paint in an aesthetically acceptable fashion. This program, called AARON, has gone through generations now. AARON today makes really smashing drawings. I've got a number of them around my house. It's now doing landscapes in color with human figures in them [pulling a book from his shelf].
These were all done on the same day, a half hour apart. These figures seem to be interacting with each other. Aren't they amazing? There's a small random element in the program; otherwise, it would just keep reproducing the same drawing. Clearly, Cohen has fed AARON a lot of information about how to draw-don't leave too much open space, don't distribute objects too evenly, and so forth-whereas human artists have to learn these things on their own. The interesting question is, what does a computer have to know in order to create drawings that evoke the same responses from viewers that drawings by human artists evoke? What cues have to be in the picture?
Omni: Why does this strike me as rather unethical?
Simon: I don't know. You'll have to explain it to me because it doesn't strike me as unethical.
Omni: Vincent van Gogh's great creativity supposedly sprang from his tortured soul. A computer couldn't have a soul, could it?
Simon: I question whether we need that hypothesis. I wouldn't claim AARON has created great art. That doesn't make AARON subhuman. One trap people fall into in this "creative genius" game is to say, "Yes, but can you do Mozart?" That isn't the right test. There are degrees of creativity. If Mozart had never lived, we would regard lesser composers as creative geniuses because we wouldn't be using Mozart as a comparison.
As to whether a human being has to be tortured to make great art, I don't know of any evidence that Picasso was tortured. I do know he had a father who taught him great technique. The technique he used as a kid just knocks your eyes out; it helped make his Blue Period possible a few years later in Paris. I don't know what that last little bit of juice is-yet. I always suspect these "soul" theories because nobody will tell me what the soul is. And if they do, we'll program one. [laughs]
Here's our friend van Gogh with his ear missing [opens another book]. I don't know whether you need a soul to paint that.... The colors of these sunflowers are intense, certainly. There's a forsythia hedge I pass every morning when I walk to my office. When it blooms in the spring, especially if there's a gray sky behind it, the flowers just knock me out. I don't think that hedge has a soul. It has intensity of color, and I'm responding to that.
Omni: Van Gogh shot himself soon after he painted Wheat Field with Crows, so my emotional response to seeing it is inseparable from that knowledge. AARON's at a disadvantage in that sense.
Simon: Well, Cohen could invent a history for AARON. It could shoot its ear off.
Omni: Can a machine automate creativity then?
Simon: I think AARON has. I think BACON has.
Omni: Could a computer program have come up with your theory of bounded rationality?
Simon: [testily] In principle, yes. If you ask me if I know how to write that program this month, the answer is no.
Omni: You say people never have correct intuitions in areas where they lack experience. What about child prodigies? How can a 12-year-old violin virtuoso pack so much practice into so few years?
Simon: They do. But when a kid 12 years old makes it on the concert circuit, it's because he or she is a kid. Was Yehudi Menuhin ever really an adult artist? We have data on this; we don't have to speculate. Either out of conviction or a desire to earn money, the teacher says, "Gee, your kid is doing well at the piano." The kid gets gratification from being complimented and from not having to do other things because they have to practice instead.
Then the teacher says, "I've brought this kid along as far as I can. You'd better find a more experienced teacher." So they find the best teacher in town. Then they go national. It goes that way without exception. A study comparing top solo musicians with people good enough to teach or play in orchestras found an enormous difference in the numbers of hours each group puts in. Does that mean you can make people into geniuses by beating them into working 80 hours a week? No. But a large percentage of the difference between human beings at these high levels is just differences in what they know and how they've practiced.
Omni: So Albert Einstein didn't invent the theory of relativity in a blaze of insight, but rather prepared himself by amassing experience and learning to recognize patterns?
Simon: Einstein was only 26 when he invented special relativity in 1905, but do you know how old he was when he wrote his first paper on the speed of light? Fifteen or 16. That's the magic ten years. It turns out that the time separating people's first in-depth exposure to a field and their first world-class achievement in that field is ten years, neither more nor less by much. Einstein knew a hell of a lot about light rays and all sorts of odd information related to them by the time he turned 26.
Omni: You talk about machines thinking and humans thinking as interchangeable, but could a machine simulate human emotion?
Simon: Some of that's already been done. Psychiatrist Kenneth Colby built a model of a paranoid patient called PARRY. Attached to some of the things in its memory are symbols that arouse fear or anger, which is the way we think emotions are triggered in humans. You hear the word father, and that stirs up fear or whatever fathers are supposed to stir up. When you talk to PARRY, the first thing you know it's getting angry at you or refusing to talk. PARRY is very hard to calm down once it gets upset.
Omni: Some say AI has had a disappointing record of progress. What about all the rosy predictions from AI researchers?...
Simon: Starting with mine. In 1957, I predicted four things would happen within ten years. First, music of aesthetic interest would be composed by a computer. Second, most psychological theories would take the form of computer programs. Third, a significant mathematical theorem would be proved by a computer. Fourth, a computer would be chess champion of the world. We could quibble about the word most in the psychological-theory prediction (our GPS program is widely accepted, as are a number of others); otherwise, all but my chess prediction actually took place in the following ten years.
Omni: Hmm. Isn't the music verdict pretty subjective?
Simon: Not at all. Hiller and Isaacson at the University of Illinois used a computer to compose the Illiac Suite and the Computer Cantata. Without identifying the music, I played records of these for several professional musicians, and they told me they found it aesthetically interesting-I didn't say it had to be great music-so that passed my test. So what's subjective?
Omni: You don't back down at all on your predictions?
Simon: No. And on my chess prediction, I was off by a factor of four. It'll take 40 years, not 10, for a computer to be world champion. My alibi is that I thought the field was so exciting that there would be a huge increase in effort on computer chess, and there wasn't.
Omni: Do you ever admit you're wrong?
Simon: Oh sure, I do it all the time. My wife couldn't live with me if I didn't. But on these things I wasn't wrong.
Omni: Except the chess.
Simon: Except the chess... by a factor of four.
Herbert A. Simon
VOCATION: Cognitive psychologist, computer scientist, sociologist, philosopher
HIGHEST HONOR: Nobel Prize in economics, 1978
RECENTLY WRITTEN: Sciences of the Artificial, Models of My Life
BEST KNOWN CREATIONS: Theory of bounded rationality and computer chess
ON COMPUTERS: The machines that taught us how a mind could be housed in a material body
ON ARTIFICIAL MINDS: It's going to be easier to simulate professors than bulldozer drivers