Saturday, December 17, 2005

From 1990: the late, great Neil Postman.

Informing Ourselves To Death

By Neil Postman

The following speech was given at a meeting of the German Informatics Society (Gesellschaft fuer Informatik) on October 11, 1990 in Stuttgart, sponsored by IBM-Germany.

The great English playwright and social philosopher George Bernard Shaw once remarked that all professions are conspiracies against the common folk. He meant that those who belong to elite trades -- physicians, lawyers, teachers, and scientists -- protect their special status by creating vocabularies that are incomprehensible to the general public. This process prevents outsiders from understanding what the profession is doing and why -- and protects the insiders from close examination and criticism. Professions, in other words, build forbidding walls of technical gobbledegook over which the prying and alien eye cannot see.

Unlike George Bernard Shaw, I raise no complaint against this, for I consider myself a professional teacher and appreciate technical gobbledegook as much as anyone. But I do not object if occasionally someone who does not know the secrets of my trade is allowed entry to the inner halls to express an untutored point of view. Such a person may sometimes give a refreshing opinion or, even better, see something in a way that the professionals have overlooked.

I believe I have been invited to speak at this conference for just such a purpose. I do not know very much more about computer technology than the average person -- which isn't very much. I have little understanding of what excites a computer programmer or scientist, and in examining the descriptions of the presentations at this conference, I found each one more mysterious than the next. So, I clearly qualify as an outsider.

But I think that what you want here is not merely an outsider but an outsider who has a point of view that might be useful to the insiders. And that is why I accepted the invitation to speak. I believe I know something about what technologies do to culture, and I know even more about what technologies undo in a culture. In fact, I might say, at the start, that what a technology undoes is a subject that computer experts apparently know very little about. I have heard many experts in computer technology speak about the advantages that computers will bring. With one exception -- namely, Joseph Weizenbaum -- I have never heard anyone speak seriously and comprehensively about the disadvantages of computer technology, which strikes me as odd, and makes me wonder if the profession is hiding something important. That is to say, what seems to be lacking among computer experts is a sense of technological modesty.

After all, anyone who has studied the history of technology knows that technological change is always a Faustian bargain: Technology giveth and technology taketh away, and not always in equal measure. A new technology sometimes creates more than it destroys. Sometimes, it destroys more than it creates. But it is never one-sided.

The invention of the printing press is an excellent example. Printing fostered the modern idea of individuality but it destroyed the medieval sense of community and social integration. Printing created prose but made poetry into an exotic and elitist form of expression. Printing made modern science possible but transformed religious sensibility into an exercise in superstition. Printing assisted in the growth of the nation-state but, in so doing, made patriotism into a sordid if not a murderous emotion.

Another way of saying this is that a new technology tends to favor some groups of people and harm others. School teachers, for example, will, in the long run, probably be made obsolete by television, as blacksmiths were made obsolete by the automobile, as balladeers were made obsolete by the printing press. Technological change, in other words, always results in winners and losers.

In the case of computer technology, there can be no disputing that the computer has increased the power of large-scale organizations like military establishments or airline companies or banks or tax collecting agencies. And it is equally clear that the computer is now indispensable to high-level researchers in physics and other natural sciences. But to what extent has computer technology been an advantage to the masses of people? To steel workers, vegetable store owners, teachers, automobile mechanics, musicians, bakers, bricklayers, dentists and most of the rest into whose lives the computer now intrudes? These people have had their private matters made more accessible to powerful institutions. They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them. They are more often reduced to mere numerical objects. They are being buried by junk mail. They are easy targets for advertising agencies and political organizations. The schools teach their children to operate computerized systems instead of teaching things that are more valuable to children. In a word, almost nothing happens to the losers that they need, which is why they are losers.

It is to be expected that the winners -- for example, most of the speakers at this conference -- will encourage the losers to be enthusiastic about computer technology. That is the way of winners, and so they sometimes tell the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists. They also tell them that they can vote at home, shop at home, get all the information they wish at home, and thus make community life unnecessary. They tell them that their lives will be conducted more efficiently, discreetly neglecting to say from whose point of view or what might be the costs of such efficiency.

Should the losers grow skeptical, the winners dazzle them with the wondrous feats of computers, many of which have only marginal relevance to the quality of the losers' lives but which are nonetheless impressive. Eventually, the losers succumb, in part because they believe that the specialized knowledge of the masters of a computer technology is a form of wisdom. The masters, of course, come to believe this as well. The result is that certain questions do not arise, such as, to whom will the computer give greater power and freedom, and whose power and freedom will be reduced?

Now, I have perhaps made all of this sound like a well-planned conspiracy, as if the winners know all too well what is being won and what lost. But this is not quite how it happens, for the winners do not always know what they are doing, and where it will all lead. The Benedictine monks who invented the mechanical clock in the 12th and 13th centuries believed that such a clock would provide a precise regularity to the seven periods of devotion they were required to observe during the course of the day. As a matter of fact, it did. But what the monks did not realize is that the clock is not merely a means of keeping track of the hours but also of synchronizing and controlling the actions of men. And so, by the middle of the 14th century, the clock had moved outside the walls of the monastery, and brought a new and precise regularity to the life of the workman and the merchant. The mechanical clock made possible the idea of regular production, regular working hours, and a standardized product. Without the clock, capitalism would have been quite impossible. And so, here is a great paradox: the clock was invented by men who wanted to devote themselves more rigorously to God; and it ended as the technology of greatest use to men who wished to devote themselves to the accumulation of money. Technology always has unforeseen consequences, and it is not always clear, at the beginning, who or what will win, and who or what will lose.

I might add, by way of another historical example, that Johann Gutenberg was by all accounts a devoted Christian who would have been horrified to hear Martin Luther, the accursed heretic, declare that printing is "God's highest act of grace, whereby the business of the Gospel is driven forward." Gutenberg thought his invention would advance the cause of the Holy Roman See, whereas in fact, it turned out to bring a revolution which destroyed the monopoly of the Church.

We may well ask ourselves, then, is there something that the masters of computer technology think they are doing for us which they and we may have reason to regret? I believe there is, and it is suggested by the title of my talk, "Informing Ourselves to Death." In the time remaining, I will try to explain what is dangerous about the computer, and why. And I trust you will be open enough to consider what I have to say. Now, I think I can begin to get at this by telling you of a small experiment I have been conducting, on and off, for the past several years. There are some people who describe the experiment as an exercise in deceit and exploitation but I will rely on your sense of humor to pull me through.

Here's how it works: It is best done in the morning when I see a colleague who appears not to be in possession of a copy of The New York Times. "Did you read The Times this morning?" I ask. If the colleague says yes, there is no experiment that day. But if the answer is no, the experiment can proceed. "You ought to look at Page 23," I say. "There's a fascinating article about a study done at Harvard University." "Really? What's it about?" is the usual reply. My choices at this point are limited only by my imagination. But I might say something like this: "Well, they did this study to find out what foods are best to eat for losing weight, and it turns out that a normal diet supplemented by chocolate eclairs, eaten six times a day, is the best approach. It seems that there's some special nutrient in the eclairs -- encomial dioxin -- that actually uses up calories at an incredible rate."

Another possibility, which I like to use with colleagues who are known to be health conscious, is this one: "I think you'll want to know about this," I say. "The neuro-physiologists at the University of Stuttgart have uncovered a connection between jogging and reduced intelligence. They tested more than 1200 people over a period of five years, and found that as the number of hours people jogged increased, there was a corresponding decrease in their intelligence. They don't know exactly why but there it is."

I'm sure, by now, you understand what my role is in the experiment: to report something that is quite ridiculous -- one might say, beyond belief. Let me tell you, then, some of my results: Unless this is the second or third time I've tried this on the same person, most people will believe or at least not disbelieve what I have told them. Sometimes they say: "Really? Is that possible?" Sometimes they do a double-take, and reply, "Where'd you say that study was done?" And sometimes they say, "You know, I've heard something like that."

Now, there are several conclusions that might be drawn from these results, one of which was expressed by H. L. Mencken fifty years ago when he said, there is no idea so stupid that you can't find a professor who will believe it. This is more of an accusation than an explanation but in any case I have tried this experiment on non-professors and get roughly the same results. Another possible conclusion is one expressed by George Orwell -- also about 50 years ago -- when he remarked that the average person today is about as naive as was the average person in the Middle Ages. In the Middle Ages people believed in the authority of their religion, no matter what. Today, we believe in the authority of our science, no matter what.

But I think there is still another and more important conclusion to be drawn, related to Orwell's point but rather off at a right angle to it. I am referring to the fact that the world in which we live is very nearly incomprehensible to most of us. There is almost no fact -- whether actual or imagined -- that will surprise us for very long, since we have no comprehensive and consistent picture of the world which would make the fact appear as an unacceptable contradiction. We believe because there is no reason not to believe. No social, political, historical, metaphysical, logical or spiritual reason. We live in a world that, for the most part, makes no sense to us. Not even technical sense. I don't mean to try my experiment on this audience, especially after having told you about it, but if I informed you that the seats you are presently occupying were actually made by a special process which uses the skin of a Bismarck herring, on what grounds would you dispute me? For all you know -- indeed, for all I know -- the skin of a Bismarck herring could have made the seats on which you sit. And if I could get an industrial chemist to confirm this fact by describing some incomprehensible process by which it was done, you would probably tell someone tomorrow that you spent the evening sitting on a Bismarck herring.

Perhaps I can get a bit closer to the point I wish to make with an analogy: If you opened a brand-new deck of cards, and started turning the cards over, one by one, you would have a pretty good idea of what their order is. After you had gone from the ace of spades through the nine of spades, you would expect a ten of spades to come up next. And if a three of diamonds showed up instead, you would be surprised and wonder what kind of deck of cards this is. But if I gave you a deck that had been shuffled twenty times, and then asked you to turn the cards over, you would not expect any card in particular -- a three of diamonds would be just as likely as a ten of spades. Having no basis for assuming a given order, you would have no reason to react with disbelief or even surprise to whatever card turns up.

The point is that, in a world without spiritual or intellectual order, nothing is unbelievable; nothing is predictable, and therefore, nothing comes as a particular surprise.

In fact, George Orwell was more than a little unfair to the average person in the Middle Ages. The belief system of the Middle Ages was rather like my brand-new deck of cards. There existed an ordered, comprehensible world-view, beginning with the idea that all knowledge and goodness come from God. What the priests had to say about the world was derived from the logic of their theology. There was nothing arbitrary about the things people were asked to believe, including the fact that the world itself was created at 9 AM on October 23 in the year 4004 B.C. That could be explained, and was, quite lucidly, to the satisfaction of anyone. So could the fact that 10,000 angels could dance on the head of a pin. It made quite good sense, if you believed that the Bible is the revealed word of God and that the universe is populated with angels. The medieval world was, to be sure, mysterious and filled with wonder, but it was not without a sense of order. Ordinary men and women might not clearly grasp how the harsh realities of their lives fit into the grand and benevolent design, but they had no doubt that there was such a design, and their priests were well able, by deduction from a handful of principles, to make it, if not rational, at least coherent.

The situation we are presently in is much different. And I should say, sadder and more confusing and certainly more mysterious. It is rather like the shuffled deck of cards I referred to. There is no consistent, integrated conception of the world which serves as the foundation on which our edifice of belief rests. And therefore, in a sense, we are more naive than those of the Middle Ages, and more frightened, for we can be made to believe almost anything. The skin of a Bismarck herring makes about as much sense as a vinyl alloy or encomial dioxin.

Now, in a way, none of this is our fault. If I may turn the wisdom of Cassius on its head: the fault is not in ourselves but almost literally in the stars. When Galileo turned his telescope toward the heavens, and allowed Kepler to look as well, they found no enchantment or authorization in the stars, only geometric patterns and equations. God, it seemed, was less of a moral philosopher than a master mathematician. This discovery helped to give impetus to the development of physics but did nothing but harm to theology. Before Galileo and Kepler, it was possible to believe that the Earth was the stable center of the universe, and that God took a special interest in our affairs. Afterward, the Earth became a lonely wanderer in an obscure galaxy in a hidden corner of the universe, and we were left to wonder if God had any interest in us at all. The ordered, comprehensible world of the Middle Ages began to unravel because people no longer saw in the stars the face of a friend.

And something else, which once was our friend, turned against us, as well. I refer to information. There was a time when information was a resource that helped human beings to solve specific and urgent problems of their environment. It is true enough that in the Middle Ages, there was a scarcity of information but its very scarcity made it both important and usable. This began to change, as everyone knows, in the late 15th century when a goldsmith named Gutenberg, from Mainz, converted an old wine press into a printing machine, and in so doing, created what we now call an information explosion. Forty years after the invention of the press, there were printing machines in 110 cities in six different countries; 50 years after, more than eight million books had been printed, almost all of them filled with information that had previously not been available to the average person. Nothing could be more misleading than the idea that computer technology introduced the age of information. The printing press began that age, and we have not been free of it since.

But what started out as a liberating stream has turned into a deluge of chaos. If I may take my own country as an example, here is what we are faced with: In America, there are 260,000 billboards; 11,520 newspapers; 11,556 periodicals; 27,000 video outlets for renting tapes; 362 million TV sets; and over 400 million radios. There are 40,000 new book titles published every year (300,000 world-wide) and every day in America 41 million photographs are taken, and just for the record, over 60 billion pieces of advertising junk mail come into our mail boxes every year. Everything from telegraphy and photography in the 19th century to the silicon chip in the twentieth has amplified the din of information, until matters have reached such proportions today that for the average person, information no longer has any relation to the solution of problems.

The tie between information and action has been severed. Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one's status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, don't know what to do with it.

And there are two reasons we do not know what to do with it. First, as I have said, we no longer have a coherent conception of ourselves, and our universe, and our relation to one another and our world. We no longer know, as the Middle Ages did, where we come from, and where we are going, or why. That is, we don't know what information is relevant, and what information is irrelevant to our lives. Second, we have directed all of our energies and intelligence to inventing machinery that does nothing but increase the supply of information. As a consequence, our defenses against information glut have broken down; our information immune system is inoperable. We don't know how to filter it out; we don't know how to reduce it; we don't know how to use it. We suffer from a kind of cultural AIDS.

Now, into this situation comes the computer. The computer, as we know, has a quality of universality, not only because its uses are almost infinitely various but also because computers are commonly integrated into the structure of other machines. Therefore it would be fatuous of me to warn against every conceivable use of a computer. But there is no denying that the most prominent uses of computers have to do with information. When people talk about "information sciences," they are talking about computers -- how to store information, how to retrieve information, how to organize information. The computer is an answer to the questions, how can I get more information, faster, and in a more usable form? These would appear to be reasonable questions. But now I should like to put some other questions to you that seem to me more reasonable. Did Iraq invade Kuwait because of a lack of information? If a hideous war should ensue between Iraq and the U.S., will it happen because of a lack of information? If children die of starvation in Ethiopia, does it occur because of a lack of information? Does racism in South Africa exist because of a lack of information? If criminals roam the streets of New York City, do they do so because of a lack of information?

Or, let us come down to a more personal level: If you and your spouse are unhappy together, and end your marriage in divorce, will it happen because of a lack of information? If your children misbehave and bring shame to your family, does it happen because of a lack of information? If someone in your family has a mental breakdown, will it happen because of a lack of information?

I believe you will have to concede that what ails us, what causes us the most misery and pain -- at both cultural and personal levels -- has nothing to do with the sort of information made accessible by computers. The computer and its information cannot answer any of the fundamental questions we need to address to make our lives more meaningful and humane. The computer cannot provide an organizing moral framework. It cannot tell us what questions are worth asking. It cannot provide a means of understanding why we are here or why we fight each other or why decency eludes us so often, especially when we need it the most. The computer is, in a sense, a magnificent toy that distracts us from facing what we most need to confront -- spiritual emptiness, knowledge of ourselves, usable conceptions of the past and future. Does one blame the computer for this? Of course not. It is, after all, only a machine. But it is presented to us, with trumpets blaring, as at this conference, as a technological messiah.

Through the computer, the heralds say, we will make education better, religion better, politics better, our minds better -- best of all, ourselves better. This is, of course, nonsense, and only the young or the ignorant or the foolish could believe it. I said a moment ago that computers are not to blame for this. And that is true, at least in the sense that we do not blame an elephant for its huge appetite or a stone for being hard or a cloud for hiding the sun. That is their nature, and we expect nothing different from them. But the computer has a nature, as well. True, it is only a machine but a machine designed to manipulate and generate information. That is what computers do, and therefore they have an agenda and an unmistakable message.

The message is that through more and more information, more conveniently packaged, more swiftly delivered, we will find solutions to our problems. And so all the brilliant young men and women, believing this, create ingenious things for the computer to do, hoping that in this way, we will become wiser and more decent and more noble. And who can blame them? By becoming masters of this wondrous technology, they will acquire prestige and power and some will even become famous. In a world populated by people who believe that through more and more information, paradise is attainable, the computer scientist is king. But I maintain that all of this is a monumental and dangerous waste of human talent and energy. Imagine what might be accomplished if this talent and energy were turned to philosophy, to theology, to the arts, to imaginative literature or to education? Who knows what we could learn from such people -- perhaps why there are wars, and hunger, and homelessness and mental illness and anger.

As things stand now, the geniuses of computer technology will give us Star Wars, and tell us that is the answer to nuclear war. They will give us artificial intelligence, and tell us that this is the way to self-knowledge. They will give us instantaneous global communication, and tell us this is the way to mutual understanding. They will give us Virtual Reality and tell us this is the answer to spiritual poverty. But that is only the way of the technician, the fact-monger, the information junkie, and the technological idiot.

Here is what Henry David Thoreau told us: "All our inventions are but improved means to an unimproved end." Here is what Goethe told us: "One should, each day, try to hear a little song, read a good poem, see a fine picture, and, if it is possible, speak a few reasonable words." And here is what Socrates told us: "The unexamined life is not worth living." And here is what the prophet Micah told us: "What does the Lord require of thee but to do justly, and to love mercy and to walk humbly with thy God?" And I can tell you -- if I had the time (although you all know it well enough) -- what Confucius, Isaiah, Jesus, Mohammed, the Buddha, Spinoza and Shakespeare told us. It is all the same: There is no escaping from ourselves. The human dilemma is as it has always been, and we solve nothing fundamental by cloaking ourselves in technological glory.

Even the humblest cartoon character knows this, and I shall close by quoting the wise old possum named Pogo, created by the cartoonist, Walt Kelly. I commend his words to all the technological utopians and messiahs present. "We have met the enemy," Pogo said, "and he is us."

The question no one can answer...or perhaps wants to.

What's the Return on Education?

By ANNA BERNASEK

SOCRATES once said that the more he learned, the more he became convinced of his own ignorance. It's a familiar feeling for anyone who tries to make sense of the American education system.

This academic year, the better part of $1 trillion will be spent on education in the United States. That's an awful lot of spending, approaching 10 percent of the overall economy. But what exactly is the return on all of that money?

While the costs are fairly simple to calculate, the benefits of education are harder to sum up.

Much of what a nation wants from its schools has nothing to do with money. Consider the social and cultural benefits, for instance: making friends, learning social rules and norms and understanding civic roles.

But some of the most sought-after benefits from education are economic. Specialized knowledge and technical skills, for example, lead to higher incomes, greater productivity and generation of valuable ideas.

Those benefits are vital to a nation's growth. In recent years, Americans have become keenly aware of the impact of education as freshly educated workers from China and India compete for good jobs once held by workers in the United States.

Today, many parents have a gut feeling that education is the way to ensure prosperity for their children, yet there is surprisingly little certainty about how much education contributes to the nation's overall wealth.

It is largely a problem of measurement. Economists have tried for decades to quantify the impact of education. They still don't have all the answers, but their work can shed some light on what Americans are getting for their investment. That information could serve as a backdrop for debates on how much government should spend on education and what should be left to individuals.

Start with what economists are confident about: the payoff to individuals. By measuring the relationship between the number of years of schooling and income earned in the job market, economists think that they have a good idea of what it's worth.

Alan B. Krueger, an economics professor at Princeton, says the evidence suggests that, up to a point, an additional year of schooling is likely to raise an individual's earnings about 10 percent.

For someone earning the national median household income of $42,000, an extra year of training could provide an additional $4,200 a year. Over the span of a career, that could easily add up to $30,000 or $40,000 of present value. If the year's education costs less than that, there is a net gain.
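
A quick back-of-the-envelope check of that arithmetic (my sketch, not the article's): discounting a constant $4,200 annual gain over a long career does land in the article's range, though the result depends heavily on the discount rate and career length, which are illustrative assumptions here rather than figures from the article.

```python
# Rough check of the present-value claim. The $42,000 median income and
# ~10% schooling premium come from the article; the 10% discount rate
# and 40-year career are illustrative assumptions, not article figures.

def present_value(annual_gain: float, rate: float, years: int) -> float:
    """Value today of receiving annual_gain at the end of each year."""
    return sum(annual_gain / (1 + rate) ** t for t in range(1, years + 1))

median_income = 42_000
annual_gain = 0.10 * median_income  # roughly $4,200 per extra year of schooling

pv = present_value(annual_gain, rate=0.10, years=40)
print(f"Present value over a 40-year career: ${pv:,.0f}")
# Prints roughly $41,000 -- near the top of the article's $30,000-$40,000 range.
```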

The payoff, of course, varies by individual. Another year of education will not have the same benefit for everyone. And school resources matter as well. According to studies by Professor Krueger and others, class size, teacher quality and school size can make a difference in the outcome. They have found that the effect of better schools is most pronounced for disadvantaged students.

There is less certainty about the big picture. That is partly because educational benefits accrue to the economy gradually, often showing up years after schooling is complete. Another problem is the difficulty of quantifying indirect benefits. One unknown, for example, is the degree to which formal education fosters new commercial ideas and technological breakthroughs.

While there is little doubt that there are benefits, those measurement challenges have led to big shifts in the conclusions of economic studies over time. In the early 1990's, economists calculated big economic rewards from additional investment in education. A decade later, the conclusions were different: studies suggested that while one individual might gain advantage over another through greater education, there might be no overall economic benefit.

Today, economists suspect that the truth is somewhere in the middle. Jonathan Temple, an economist at the University of Bristol in England, says the research trend is moving back toward the earlier findings. The latest attempts to quantify the impact of education on total economic growth have tended to conclude that it is at least as significant as that measured for individuals.

Because indirect benefits can't be counted accurately, Professor Temple suspects an even bigger impact. Insofar as education enhances worker productivity, there is a clear benefit to the economy.

Two Harvard economists, Lawrence F. Katz and Claudia Goldin, studied the effect of increases in educational attainment in the United States labor force from 1915 to 1999. They estimated that those gains directly resulted in at least 23 percent of the overall growth in productivity, or around 10 percent of growth in gross domestic product.

The most important factor was the move to universal high school education from 1910 to 1940. It expanded the education of the work force far more rapidly than at any other time in the nation's history, creating economic benefits that extended well into the remainder of the century, according to Professors Katz and Goldin. That moved the United States ahead of other countries in education and laid the foundation for the expansion of higher education.

Today, more Americans attend college than ever before, but the rest of the world is catching up. The once-large educational gap between the United States and other countries is closing - making it increasingly important to understand what education is really worth to a nation.

If economists are right, it is not just part of the cost of maintaining a functioning democracy, but a source of wealth creation for all. That means that investing in the education of every American is in everyone's self-interest.

Still, we're a long way from being able to judge the right level of spending on education - and how to achieve it. With a college degree more important than ever, the cost of higher education is rising steeply, creating growing stress for many American families. With more study, researchers may be able to identify ways of reducing costs while increasing the payoff from education.

Taking our cue from Socrates, the first step may be to recognize what we don't know.

Copyright 2005 The New York Times Company

Friday, December 16, 2005

Do we really have to live here?

link to original piece.

Worlds without end
Dec 14th 2005
From The Economist print edition

The online game industry is an excellent way to study the economics of fun

THE sci-fi dream of computer-generated virtual reality—so familiar to readers of “Neuromancer” and viewers of “The Matrix”—has finally come true. But, as is often the case with guesses about future technologies, it has not come true in quite the way that many people expected.

While scientists developed sensory-input devices to mimic the sensations of a virtual world, the games industry eschewed this hardware-based approach in favour of creating alternative realities through emotionally engaging software. “It turns out that the way humans are made, the software-based approach seems to have much more success,” writes Edward Castronova in an illuminating guide to these new synthetic worlds.

Millions of people now spend several hours a week immersed in “massively multiplayer online role-playing games” (MMORPGs). These are often Tolkienesque fantasy worlds in which players battle monsters, go on quests, and build up their virtual power and wealth. Some synthetic worlds are deliberately escapist; others are designed to be as lifelike and realistic as possible. Many have a strong libertarian bent. Sociologists and anthropologists have written about MMORPGs before, but Mr Castronova looks at the phenomenon from a new perspective: economics.

Mr Castronova's thesis is that these synthetic worlds are increasingly intertwined with the real world. In particular, real-world trade of in-game items—swords, gold, potions, or even whole characters—is flourishing in online marketplaces such as eBay. This means in-game items and currency have real value. In 2002, Mr Castronova famously calculated the GNP per capita of the fictional game-world of “EverQuest” as $2,000, comparable to that of Bulgaria, and far higher than that of India or China. Furthermore, by “working” in the game to generate virtual wealth and then selling the results for real money, it is possible to generate about $3.50 per hour. Companies in China pay thousands of people, known as “farmers”, to play MMORPGs all day, and then profit from selling the in-game goods they generate to other players for real money.

Land and other in-game property have been sold for huge sums: one “Project Entropia” player paid $26,500 for an island in the game's virtual world last year, and has already made his money back by selling hunting and mining rights to other players. Trade in virtual items is now worth more than $100m each year. In some Asian countries, where MMORPGs are particularly popular, in-game thefts and cheats have led to real-world arrests and legal action. In one case in South Korea, the police intervened when a hoard of in-game money was stolen and sold, netting the thieves $1.3m. In-game money is, in short, no less real than the dollars and pounds stored in conventional bank accounts.

Virtual economies are an integral part of synthetic worlds. The buying and selling of goods, as the game's inhabitants go about their daily business, lends realism and vibrancy to the virtual realm. But in-game economies tend to be unusual in several ways. They are run to maximise fun, not growth or overall wellbeing. And inflation is often rampant, due to the convention that killing monsters produces a cash reward and the supply of monsters is unlimited in many games. As a result, the value of in-game currency is constantly falling and prices are constantly rising.
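
The mechanism is easy to see in a toy quantity-theory sketch (my own illustration, not from the book): if kills keep minting currency while the stock of tradeable goods stays roughly fixed, the price level climbs without limit. Every number below is invented for illustration.

```python
# Toy model of "faucet" inflation in a game economy. All figures are
# invented assumptions; the point is the shape of the curve: currency
# grows without bound while goods do not, so prices keep rising.

starting_gold = 1_000_000  # gold in circulation at launch (assumed)
goods_supply = 50_000      # tradeable items, roughly constant (assumed)
gold_per_day = 20_000      # new gold minted daily by monster kills (assumed)

for day in (1, 30, 180, 365):
    gold = starting_gold + gold_per_day * day
    price_index = gold / goods_supply  # crude quantity-theory price level
    print(f"day {day:3d}: price index {price_index:6.1f}")
```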

Mr Castronova's analysis of the economics of fun is intriguing. Virtual-world economies are designed to make the resulting game interesting and enjoyable for their inhabitants. Many games follow a rags-to-riches storyline, for example. But how can all the players end up in the top 10%? Simple: the upwardly mobile human players need only be a subset of the world's population. An underclass of computer-controlled “bot” citizens, meanwhile, stays poor for ever. Mr Castronova explains all this with clarity, wit and a merciful lack of academic jargon.

Some of his conclusions may sound far-fetched. In particular, he suggests that as synthetic worlds continue to grow in popularity, substantial numbers of people will choose to spend large parts of their lives immersed in them. Some players could then fall victim to what Mr Castronova calls “toxic immersion”, in which their virtual lives take precedence, to the detriment of their real-world lives.

But perhaps this is not so implausible. It is already possible to make a living by working in a virtual world, as the “farmers” demonstrate. In one survey, 20% of MMORPG players said they regarded the game world as their “real” place of residence; Earth is just where they eat and sleep. In July, a South Korean man died after a 50-hour MMORPG session. And the Chinese government has recently tried to limit the number of hours that can be spent playing MMORPGs each day.

As technology improves, players could make enough money to pay for the upkeep of their real-world bodies while they remain fully immersed in the virtual world. Mr Castronova is right when he concludes that “we should take a serious look at the game we have begun to play.”

Synthetic Worlds: The Business and Culture of Online Games.
By Edward Castronova.
University of Chicago Press; 332 pages; $29

The Great One.

Special thanks to A.C. for reminding me about this one.

link to original piece.

Tough, meticulous, honest, fair: Gretzky has it all

By Scott Burnside

ESPN.com

GLENDALE, Ariz. -- The first time Wayne Gretzky stepped onto the ice and called his troops to attention as an NHL head coach, the whistle emitted a sad-sounding "blapht."

There was a brief pause on the ice.

A harbinger of things to come? Foreshadowing of a monumental career error?

[Photo caption: Players have found Wayne Gretzky to be tough but fair in his first few months as coach of the Coyotes.]

Hardly.

"Barely a peep came out of it," said Phoenix Coyotes' analyst and former NHL netminder Darren Pang with a smile. "The guys on the coaching staff jabbed him right away."

While skeptics quietly shook their heads when the greatest player in the history of the game decided to be a coach, Gretzky merely wonders why he didn't do it years ago.

In the face of widespread skepticism -- from close friends, other coaches and the media alike -- Coach Gretzky has followed the same other-worldly career arc that defined him as a player. He has accomplished in a matter of months what it takes many coaches years to achieve: assembling a highly motivated coaching staff to which he freely delegates authority, while leaving no doubt as to who has the last word on each and every decision affecting the Phoenix Coyotes.

One-third of the way into the season, Gretzky has the Coyotes playing disciplined, dedicated hockey, which should translate into their first playoff berth in three seasons.

Not bad for a guy who had never coached a game in his life, let alone an NHL game, and who, many believed, would instantly regret having taken on the role.

"What's really been the fun part is that I've never felt that way at all this year. Quite the opposite," Gretzky told ESPN.com in a recent interview. "I've had many days where I've thought, 'Why didn't I do this earlier? Why didn't I do this before?' Or 'I wish I would have done this three years ago.'"

So what's he like to play for? Let's just say the Coyotes found out in short order that Gretzky is not exactly the mild-mannered guy smiling at them from those Ford commercials or impishly stealing his father Walter's french fries in the McDonald's spots.

"He's a little bit tough, you know," said Ladislav Nagy, the team's most talented player and leading scorer.

It's a statement offered not so much to surprise or shock as simply to state a fact.

"If you don't play good, he'll just bench you," Nagy added, as though it were the most natural thing in the world.

And it is, in Gretzky's world.

Half the players have either been benched or seen their ice time cut dramatically at one point or another this season. On many teams, there is a tiered system in which elite players are forgiven certain sins because they're too important to the end result. Here, no one is above the law, and the law is Gretzky.

"It's all about hard work, speed and winning," said Gretzky's longtime Edmonton teammate Grant Fuhr, who serves as the Coyotes' goaltending coach. "A guy doesn't pull his weight, he's out."

---

A look at Gretzky's staff

• Barry Smith (The Architect): After 11 seasons as an associate coach with the Detroit Red Wings, Smith joined the Coyotes on Aug. 8 with Gretzky. Smith has been a part of five Stanley Cup teams -- three with the Red Wings (1997, 1998, 2002) and two with the Penguins (1991, 1992). He is usually the first to the rink, looking at videotape. Smith has helped design a variation on the left-wing lock system for the Coyotes, a system that was instrumental to Detroit's string of Cup wins.

• Rick Bowness (The Veteran): This is Bowness' seventh season with the Coyotes. He was the interim coach for the club for the last two months of the 2003-04 season, and before that, served as head coach for four other NHL clubs (Winnipeg, Boston, Ottawa and the New York Islanders). Bowness says Gretzky coaches with "tough love."

• Rick Tocchet (The Tough Guy): Tocchet, Gretzky's longtime friend and former teammate, is in his first season as an associate coach for the Coyotes. Given his style when he played, Tocchet is the in-your-face guy, the guy who will grab a player after a game or practice and talk about things like accountability and work ethic.

• Grant Fuhr (Net Master): Fuhr, a longtime teammate of Gretzky's (especially from the glory days of the Edmonton Oilers), has been the team's goaltending coach since July 2004. He has to be a grounding force for veteran Curtis Joseph, who seems rejuvenated between the pipes, and young backup David LeNeveu.

---

Among those who have come to understand that simple truth is forward Mike Comrie. The Edmonton native grew up admiring Gretzky. Their families are close, and as a boy, Comrie spent considerable time around the Oilers' dressing room.

Didn't matter. Comrie wasn't anteing up and he took a seat. Since that time, Comrie has been given more on-ice responsibility and entered play this week with 24 points in 28 games and a plus-8 rating.

"Wayne is holding people accountable and it's been a good transition and it's something this organization needs," Comrie said.

Rick Bowness, a former NHL head coach and one of Gretzky's assistants, figures Gretzky's benching of Comrie was a defining moment.

Many new coaches waste time and energy trying to make sure everyone is happy because they want to make a good impression, to win over the dressing room, Bowness explained. But you can't make everyone happy, and Gretzky understood that from the get-go.

"The job demands that you think with your head, and Wayne's very good at that, despite what your heart is telling you," Bowness said. "That's how it has to be. It's called tough love."

And with Gretzky, the player is still close to the surface. He knows the effect being benched or moved to a less prominent role has on a player.

"It's the worst part of the job. I don't like it. I've got to be honest with you," Gretzky said. "Ultimately, obviously, I'm the man responsible. But we as a group, the coaching staff, talk about each and every situation. I would never make a decision without 100-percent support of my staff."

With a full complement of healthy players, Gretzky must make those choices on a nightly basis. On a recent night, it was talented Oleg Saprykin, who was acquired by the Coyotes prior to the lockout. The next game, it was Mike Leclerc. And so on.

"It's not fun. Oleg is another example. He's been one of the pleasures for me to be part of this year, coaching him and being around him. He's always at the rink early. He loves the game of hockey," Gretzky said. "And then, on the other side of it, it shows you the evolution of our hockey club about how much better we're becoming as a team and maybe the depth that we're getting in this organization now, that a guy of that caliber, we couldn't make room for last night."

Whatever the approach, it has paid dividends in the standings. After starting the season 1-5, the Coyotes have gone 11-4-1 from the start of November through Sunday's 2-1 overtime win in Boston. They began play this week in ninth place in the Western Conference, one point out of a playoff berth, and part of a pack of teams that will battle for five or six playoff spots.

"You could see it coming together, even when we lost early in the season," said Bowness, now in his seventh season with the Coyotes. "It all came together at a very crucial time. The difference in this team from two years ago? It's sure a lot more fun to come to the rink, that's for sure. Night and day."

Given Gretzky's offensive prowess, one might have imagined the Coyotes would be dedicated to hair-on-fire offensive hockey. But Gretzky has imposed a strict defensive consciousness, and in spite of a very young defensive corps, the team ranks fifth overall with a goals-against average of 2.50. Assistant coach Rick Tocchet has joked with Gretzky that for a guy that didn't know much about defense as a player, he sure is embracing it as a coach.

"We're also very conscious of the fact that championships in this league are won on defensive play," Gretzky said. "Teams that have, not the best, but a top-10 goals-against and top-10 defense, are usually the teams that are the most successful. You can be a great offensive team without taking it away from your defensive play, and that's one of the things we've tried to stress."

All about the details

On a December morning at Glendale Arena, players filed onto the ice in their game uniforms and socks 15 minutes before the scheduled start of practice for a promotional photo shoot. Behind the players' benches and moving among the arena's seats was a camera crew waiting to interview Gretzky for an HBO special with Bob Costas. On the ice in street clothes, vice president of communications Rich Nairn was trying to get the players in the right order for the photo.

The first poster shot had the players sitting cowboy-like along the benches. Gretzky didn't like it and they quickly moved to a more traditional shot of players kneeling in front of the bench with others sitting on the edge of the boards behind them.

Gretzky darted in to get the players in the right spot, arms at their sides, so the Coyotes' logo was clearly visible.

[Photo caption: Wayne Gretzky has brought many of his experiences as a player to his role behind the bench.]

"Take the morning off Rich," someone quipped.

This is Gretzky. For him, it has always been about the details. As a player, his ability to recognize and internalize even the smallest nuances of a complex game made him the greatest of all time. As a coach, he employs that vision in preparing his team.

"He sees things at a different level than the rest of us," Comrie said. "I think sometimes he has to catch himself because we might not think the game as he does."

Forward Boyd Devereaux recalls Gretzky's coming down to the locker room, moments before an exhibition game in San Jose during the team's woeful preseason. Players were scattered about, working on sticks in the hallway, in the coaches' office or training room.

When the team returned to Phoenix, Gretzky went to the players and told them he wanted them all in the dressing room before the game and that they were all to follow netminder Curtis Joseph onto the ice as one unit. They were to do that every night without fail.

"It's like you're going out to battle kind of thing," Devereaux said.

When Gretzky described his rationale for taking the job, much of it had to do with returning to the game at its basic level: on the ice, in the dressing room. One might have assumed that meant hanging out with the guys, suggesting an ultra-casual approach to the game. That notion has long been dispelled.

"This is as hard as I've worked in practice on any team I've been with," said Mike Ricci, who played for taskmasters Marc Crawford and Darryl Sutter. "He's forging his own style of hockey. He's very intense. He works us hard."

The brisk, up-tempo practices and clearly defined demands to compete on a daily basis have helped take some of the luster off the awe factor, which was inevitable for players who had trouble reconciling the man leading the drills with the man many of them grew up watching wide-eyed on "Hockey Night in Canada."

"He made us realize he's not Wayne Gretzky the player anymore. He's the coach," Ricci said.

"It's pretty cool. It's pretty special to be here," added rookie defenseman Keith Ballard, who has blossomed under Gretzky. "Sometimes you step back and you realize how fortunate you are to play for the greatest player ever and to be around him every day. If he says anything to you, it's kind of a bonus."

For many years, Phoenix GM Mike Barnett was Gretzky's agent. He has seen him play more games than perhaps any other person in the world. He recalls Gretzky going to the bench after a shift and, while other players would be sitting with their heads down trying to catch their breath, always sitting, arm on the bench in front of him, eyes intently watching the play unfold on the ice, watching for errors or breakdowns that he might exploit on his next shift.

Those skills, that passion, are now evident in his coaching. "I see a guy who's becoming more confident every day in the environment," Pang said. "A great hockey mind doesn't change."

A case in point: During a recent road victory against Dallas, Gretzky suddenly imposed a completely different game plan between the second and third periods.

"We changed really our whole system in the third period. We went to a one forecheck and four guys back, which we hadn't done. And we've had situations where we sent two guys in and one guy back," Gretzky explained.

"Our biggest thing is, we don't want one pass to beat three guys. Against Anaheim the other night [a 6-1 loss, the team's worst of the season], it happened a couple of times where one pass beat three guys who were in sort of a no-zone. And that was what we were trying to work on today.

"We were trying to have our forecheckers understand that not necessarily the first guy is always going to get in to get the puck, but you're going in to force the opposition to go in a direction that (a) they don't want to go in and (b) that you're ready to defend. So that's the thing that we were trying to get across to our players more than anything."

Given the close relationship between Barnett and Gretzky, it's not surprising that when Gretzky wanted to make early alterations to the lineup and bring in players who more closely fit his vision, Barnett accommodated him.

With defenseman Paul Mara playing a more complete, physical game to complement an explosive offensive side, and Ballard and fellow rookie defenseman Zbynek Michalek bulling their way onto the team, the Coyotes were able to acquire gritty Dave Scatchard, veteran scorer Geoff Sanderson and enigmatic forward Jamie Lundmark, while jettisoning defensemen David Tanabe and Cale Hulse.

Scatchard is playing on a line with captain Shane Doan and Petr Nedved as the three attempt to work free of scoring problems, while Lundmark has become the team's power-play quarterback.

Have there been mistakes? Sure.

He played Sanderson one night when he had the flu and there was a healthy body in the press box.

He used defenseman Derek Morris too much when he wasn't fully recovered from injury.

How do we know? Because Gretzky admits the missteps.

"He battled through the leg injury and it's probably my fault that I didn't sit him down. Rookie coaching mistake probably,"
Gretzky said of Morris. "He wanted to play so badly and he's so tough. Then he injured his rib cage."

Does he fret about mistakes like that? Does he wake up at night wondering what happened, or how to fix things?

"Oh yeah. Oh yeah. There's been many times this year after a game I've said, 'Maybe I should have dressed so and so,' or 'Maybe that line combination that I wanted to go with this morning, we should [have gone] with it.' You always question yourself when you don't win. You're always looking for answers.

"But then you get back to square one, and we've got to stay within what we're trying to accomplish and the long-term goal here, which is to win a championship," he said. "It's not going to happen overnight. It doesn't happen overnight. It's hard to win a championship and it takes a lot of hard work, but you have to stay the course."

No one questioned Gretzky's hockey sense. But when it comes to coaching in the NHL, there are no breaks, no off days. That is especially true of Gretzky, who maintains his role as executive director of Canada's Olympic team. During a recent break in the schedule, Gretzky left Phoenix for a tour of Eastern cities to scout players for the Turin Olympics. As much as Gretzky prepared for the demands, they have exceeded even his estimation.

"The biggest difference probably is the time, it's more time consuming, but I anticipated that," Gretzky said. "The flow of the game, you get excited. The day before the game, you start thinking about it, similar thoughts that you had as an actual player. It seems that there's never enough time to prepare for the next game because you're trying to go over so many things."

A group effort

After a dominating 8-4 win over Southeast Division leader Carolina recently, many of the Coyotes players arrived for practice at the team's suburban facility dressed in shin pads and pants. They appeared as slightly larger versions of the minor-league hockey players who occupied the ice before the Coyotes' session.

On the ice, the five coaches moved as one unit, laughing, joking. When the surprisingly lively practice began, Gretzky often stood to the side watching, maroon toque perched on his head. It's not a stretch to imagine him wearing one just like it a thousand times as he scampered across the snow to the homemade rink in the backyard of the family's Brampton, Ontario, home.

[Photo caption: Although he was known mostly for his offense as a player, Wayne Gretzky has tried to make the Coyotes a defensively conscious team.]

Many of the drills were administered by Barry Smith, the widely heralded tactician who came over from Detroit after helping the Wings win three Stanley Cups in a decade. Other drills were run by Tocchet, a longtime Gretzky friend and teammate.

Smith is often at the rink by 8 a.m., working on video, watching the Coyotes' previous game or looking at tape of the team's next opponent. He and Tocchet generally plan out the practice based on problem areas they've encountered or areas they know an opposing team will try to exploit. They then meet with Gretzky to go over the plan.

On the ice, Gretzky will occasionally interrupt to add a comment or reinforce a point.

"I'm the first guy to tell you I don't everything about the game and I don't have all the answers. Rick [Tocchet] knew that coming in and I think Barry and Rick [Bowness] found that out real quickly. This is a group effort, it's not one person," Gretzky said.

If it's true that Gretzky missed the feel of the team, the give and take of the locker room, he seems to have found it with his coaching staff. The coaches can often be found in their office or in the team lounge, drinking coffee and kibitzing or swapping stories.

"I knew how it was going to be," said Tocchet, who played with Gretzky in Los Angeles. "He's not afraid to delegate. He doesn't mind standing on the sidelines and watching, which he does. That's the key to successful people."

He doesn't have to kick chairs or scream at people to get his point across, added Smith, who was responsible for implementing the famous left-wing lock system that helped the Wings to Cup wins in 1997, 1998 and 2002.

"We looked at our team and we felt that our speed was more on our left side. Obviously, the center men on teams are usually the most versatile player and we utilize that on the basis that the left winger or the center mainly forecheck," Gretzky explained. "I guess you'd call it a right-wing lock or right-wing responsibility -- whatever you want to call it.

"But we felt our right defensemen were extremely mobile and able to go back and pick up pucks and dump-ins, so that's the system we came up with. Barry was obviously a big part of that and it's worked pretty well for us."

The coaching staff, along with providing Gretzky a comfort zone for his development, has been crucial to the team's success and, by extension, to Gretzky's own.

"I'm really lucky. There's no way I could do this job if I didn't have the group around me that I have. They're truly phenomenal," Gretzky said. "The good thing about this is we debate every issue. Nobody's right or wrong, we try to talk every issue through and what is best for our team. What time we practice or how long a practice is or what lines are together. So communication between us is outstanding and that's made the job a lot more fun."

Said Barnett: "No question they have clout and authority. They don't just move pucks. When they speak, they're speaking for Wayne."

It was so cold inside the practice facility that Gretzky preferred to talk outside. Nearby, there's a major-league baseball spring training facility. Palm trees dot the landscape. In the parking lot, a minor-league hockey team was doing dry-land training.

Does Gretzky figure he's a better coach now than when he hopped up behind the bench Oct. 5? He squinted into the afternoon sun and smiled.

"Oh, 10 times better. Trust me. That's life. You get better the more experience you have," Gretzky said. "The more you work with it, the more you're going to learn. And I learned that when I was 17. At 17, I remember Gordie Howe told me that he learned something new about hockey every day and he was about 45, 46 years old then. So, I always took that advice to heart and that's the way I feel as a coach.

"And that's why I like the staff that I have, they feel the same way. That's the part that's fun for me. It's exciting, and I do learn something every day and I'm a better coach today than I was two months ago, and hopefully I'll be a better coach next September than I was this year."

Scott Burnside is an NHL writer for ESPN.com.

Thursday, December 15, 2005

The new religion?

Hopelessly devoted

Ian Wylie
Saturday December 10, 2005
Guardian

The LED on his answerphone read zero when Sam returned to his flat last night following after-work drinks with colleagues. Today he'll sleep in, watch a DVD or two, play Xbox and call his mum. Tomorrow he plans to slip back into the office for a couple of hours.

Sam has few friends outside work and the last two (brief) relationships he had originated in the office. He's carrying over holidays to next year (again) and the five-day Christmas shutdown is, he believes, too long.

But Sam is not like his father, a workaholic who became an alcoholic. Sam believes he is different because he finds meaning in his work. He is an agency "creative" and some of his clients are NGOs and voluntary sector organisations. He loves what he does. He wakes up at 3am buzzing with ideas. His dad hated his job and lay awake worrying about work.

People are taking their work more personally than ever. And it's healthy to find meaning and fulfilment in our work, to be passionate about what we do. But what happens when our work becomes too personal?

Phillip Hodson meets people like Sam - lawyers, bankers, journalists, actors - on an almost daily basis at his north London counselling and psychotherapy practice. "I see a lot of clients who work double-digit days but don't understand why they have started walking in front of buses, why they can't sleep any more, why they have no partners - why they have no life." Many are depressed, says Hodson, but the reason is not some horrible trauma. It's because they love their work too much.

Research by consultancy Penna suggests a quarter of British workers are so passionate about their job they believe it defines who they are and gives their life meaning. For a significant 12%, work is the single biggest provider of "community and belonging". A fifth have built up a close network of friends through their workplace, a figure that rises in London, where the workplace is an even more important social agent. A quarter of directors say they get more meaning from work than at home or socially. Another poll says more than 90% of employees would rather find a new job than love this Christmas.

British workers spend 60% of their waking hours in work, but many put in long hours because they want to, rather than out of economic necessity. The workplace, instead of the home, is where they make friends, feel supported and find opportunities to "make a difference".

Older workers - those in their 40s and 50s - were brought up on a live-to-work ethic: work harder, work longer, earn lots of money. In contrast, surveys suggest younger workers focus on quality, not quantity of work. They take it for granted that they will be paid well. But they want work to mirror their values, too.

Younger workers hope to avoid the mistakes their parents made. But the lure of work that is "significant" or "challenging" can be incredibly seductive. You know who they are - friends for whom work has become their sole passion, their primary source of self-esteem, recognition and respect. Maybe it's you.

Having a job that makes you feel valued and part of a community does not in itself render you unbalanced. The imbalance occurs when work is the only place where your needs are being met.

Workplace shocks such as redundancy, being passed over for promotion or poor appraisals can be devastating - an awful epiphany when you discover that work is not a meritocracy, your boss is not your friend and your colleagues are not your family. In short, bring your heart and soul to work, and there's a good chance you'll end up feeling betrayed.

For people with partners and families, marital or sexual problems are often an early warning sign, says Hodson. "Many of my clients have become so adrenalised by their job that everything else, even playing with their children or looking after their partners, gets lost. Young entrepreneurs tell me that they're 'only doing it for the family'. But it's bollocks. They're doing it for themselves. I find it very strange that you would build a family and then spend most of your time away from it."

Employers, meanwhile, are joining the dots, ready to believe that more engaged workers are more productive workers. "Employees will be more motivated, loyal, creative and productive in an organisation that has helped them find meaning at work," says Gary Browning, chief executive of Penna.

Nokia, Unilever, McKinsey, Shell, Coca-Cola, Hewlett Packard, Merck Pharmaceuticals, Starbucks and the Co-operative Bank are among the big employers exploring the concept of "spiritual intelligence". While emotional intelligence is about what we feel, spiritual intelligence (SI) concerns the soul and the quest for purpose and meaning.

For some employers, the interest is as perfunctory as rewriting mission statements to include "social responsibility" or "the environment". But others are introducing the SI concept into their management processes and business practices.

Danah Zohar, the academic who wrote the book (literally) on spiritual intelligence, defends employers' right to create meaning at work. "The point is not to find all our meaning at work, but to find more meaning at work," she says. "It's not a case of making work more meaningful than home, or vice versa. The more meaning the better, wherever we find it.

"Most executives have to work long hours; that's what is expected of them. But I know many who are now happy to work a 16- or 17-hour-day because they've found meaning."

Should we accept that work is our new religion, where we worship and sacrifice our time? Or should we put our work back in context? The Spanish word for work, "trabajo", comes from a Latin word for an instrument of torture. Even the Puritans considered work a means to an end.

Successful companies are usually the ones that stick to their core competencies. Maybe the smart ones should trust their employees with the time and space to find more meaning outside work, and leave spiritual experiences to babies in mangers.

· Spiritual Intelligence by Danah Zohar and Ian Marshall is published by Bloomsbury at £7.99

Order out of chaos?

link to original piece.

Special Report

Internet encyclopaedias go head to head

Jim Giles

Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries, a Nature investigation finds.

One of the extraordinary stories of the Internet age is that of Wikipedia, a free online encyclopaedia that anyone can edit. This radical and rapidly growing publication, which includes close to 4 million entries, is now a much-used resource. But it is also controversial: if anyone can edit entries, how do users know if Wikipedia is as accurate as established sources such as Encyclopaedia Britannica?

Several recent cases have highlighted the potential problems. One article was revealed as falsely suggesting that a former assistant to US Senator Robert Kennedy may have been involved in his assassination. And podcasting pioneer Adam Curry has been accused of editing the entry on podcasting to remove references to competitors' work. Curry says he merely thought he was making the entry more accurate.

However, an expert-led investigation carried out by Nature — the first to use peer review to compare Wikipedia and Britannica's coverage of science — suggests that such high-profile examples are the exception rather than the rule.

The exercise revealed numerous errors in both encyclopaedias, but among 42 entries tested, the difference in accuracy was not particularly great: the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three.

Considering how Wikipedia articles are written, that result might seem surprising. A solar physicist could, for example, work on the entry on the Sun, but would have the same status as a contributor without an academic background. Disputes about content are usually resolved by discussion among users.

But Jimmy Wales, co-founder of Wikipedia and president of the encyclopaedia's parent organization, the Wikimedia Foundation of St Petersburg, Florida, says the finding shows the potential of Wikipedia. "I'm pleased," he says. "Our goal is to get to Britannica quality, or better."

Wikipedia is growing fast. The encyclopaedia has added 3.7 million articles in 200 languages since it was founded in 2001. The English version has more than 45,000 registered users, and added about 1,500 new articles every day of October 2005. Wikipedia has become the 37th most visited website, according to Alexa, a web ranking service.

But critics have raised concerns about the site's increasing influence, questioning whether multiple, unpaid editors can match paid professionals for accuracy. Writing in the online magazine TCS last year, former Britannica editor Robert McHenry described one Wikipedia entry — on US founding father Alexander Hamilton — as "what might be expected of a high-school student". Opening up the editing process to all, regardless of expertise, means that reliability can never be ensured, he concluded.

Yet Nature's investigation suggests that Britannica's advantage may not be great, at least when it comes to science entries. In the study, entries were chosen from the websites of Wikipedia and Encyclopaedia Britannica on a broad range of scientific disciplines and sent to a relevant expert for peer review. Each reviewer examined the entry on a single subject from the two encyclopaedias; they were not told which article came from which encyclopaedia. A total of 42 usable reviews were returned out of 50 sent out, and were then examined by Nature's news team.

Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively.
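
A quick back-of-the-envelope check (ours, not Nature's) shows how those totals square with the per-entry averages quoted earlier, assuming the averages come from dividing the reviewers' totals over the 42 entries:

    # Error totals reported by Nature's reviewers, spread over 42 entries
    entries = 42
    wikipedia_errors = 162
    britannica_errors = 123

    print(round(wikipedia_errors / entries, 1))   # 3.9 -- "around four"
    print(round(britannica_errors / entries, 1))  # 2.9 -- "about three"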

Editors at Britannica would not discuss the findings, but say their own studies of Wikipedia have uncovered numerous flaws. "We have nothing against Wikipedia," says Tom Panelas, director of corporate communications at the company's headquarters in Chicago. "But it is not the case that errors creep in on an occasional basis or that a couple of articles are poorly written. There are lots of articles in that condition. They need a good editor."

Several Nature reviewers agreed with Panelas' point on readability, commenting that the Wikipedia article they reviewed was poorly structured and confusing. This criticism is common among information scientists, who also point to other problems with article quality, such as undue prominence given to controversial scientific theories. But Michael Twidale, an information scientist at the University of Illinois at Urbana-Champaign, says that Wikipedia's strongest suit is the speed at which it can be updated, a factor not considered by Nature's reviewers.

"People will find it shocking to see how many errors there are in Britannica," Twidale adds. "Print encyclopaedias are often set up as the gold standards of information quality against which the failings of faster or cheaper resources can be compared. These findings remind us that we have an 18-carat standard, not a 24-carat one."

The most error-strewn article, that on Dmitry Mendeleev, co-creator of the periodic table, illustrates this. Michael Gordin, a science historian at Princeton University who wrote a 2004 book on Mendeleev, identified 19 errors in Wikipedia and 8 in Britannica. These range from minor mistakes, such as describing Mendeleev as the 14th child in his family when he was the 13th, to more significant inaccuracies. Wikipedia, for example, incorrectly describes how Mendeleev's work relates to that of British chemist John Dalton. "Who wrote this stuff?" asked another reviewer. "Do they bother to check with experts?"

But to improve Wikipedia, Wales is not so much interested in checking articles with experts as getting them to write the articles in the first place.

As well as comparing the two encyclopaedias, Nature surveyed more than 1,000 Nature authors and found that although more than 70% had heard of Wikipedia and 17% of those consulted it on a weekly basis, less than 10% help to update it. The steady trickle of scientists who have contributed to articles describe the experience as rewarding, if occasionally frustrating (see 'Challenges of being a Wikipedian').

Greater involvement by scientists would lead to a "multiplier effect", says Wales. Most entries are edited by enthusiasts, and the addition of a researcher can boost article quality hugely. "Experts can help write specifics in a nuanced way," he says.

Wales also plans to introduce a 'stable' version of each entry. Once an article reaches a specific quality threshold it will be tagged as stable. Further edits will be made to a separate 'live' version that would replace the stable version when deemed to be a significant improvement. One method for determining that threshold, where users rate article quality, will be trialled early next year.
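
Wikipedia's eventual mechanism will be its own; purely as a toy illustration of the stable/live split described above (the class, its fields, and the quality score are all hypothetical, not Wikipedia's code), the idea amounts to keeping two versions, letting edits land on the live copy, and promoting it only when it clears a quality bar:

    # Toy sketch of a stable/live article split -- hypothetical, not Wikipedia's code
    class Entry:
        def __init__(self, text, quality):
            self.stable = (text, quality)  # tagged, reader-facing version
            self.live = (text, quality)    # working copy that receives edits

        def edit(self, text, quality):
            self.live = (text, quality)
            # promote the live copy only when it is a clear improvement
            if quality > self.stable[1]:
                self.stable = self.live

    e = Entry("Sun: a star.", quality=0.5)
    e.edit("Sun: the star at the centre of the Solar System.", quality=0.8)
    print(e.stable[0])  # the improved text has been promoted to stable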

Additional research by Declan Butler, Jenny Hogan, Michael Hopkin, Mark Peplow and Tom Simonite.

What makes a legend most?

Pundits pundicate about punditry.

link to original article.

Our 10 Most Enduring Ideas

by Art Kleiner

To celebrate s+b’s 10th anniversary, we looked back at the conceptual breakthroughs that appeared in this magazine — and invited our readers to vote on which were most likely to last.

From its inception in 1995, strategy+business has been a magazine dedicated to the value and power of ideas. It has embodied the view that, as Victor Hugo once put it, “An invasion of armies can be resisted, but not an idea whose time has come.” We like to think that our readers are real-world users of ideas, pragmatists who understand that a conceptual breakthrough can make enormous day-to-day difference.

Thus, for our 10th-anniversary issue, we took the question head-on: Of all the ideas strategy+business has covered, which are most likely to endure for (at least) another 10 years? After reviewing the magazine’s back issues (all available free on our Web site, www.strategy-business.com), Deputy Editor Amy Bernstein and I winnowed out a manageable list of 35 key contenders. We invited two different groups to vote: the electronic subscribers to our e-mail newsletters enews and Resilience Report (also available free on the Web site); and the thought leaders — writers, subjects of profiles, and interviewees — who have been featured in our pages during the past 10 years. We set no limit on the number of votes any individual could cast. Voters were also given a chance to comment online, and many did. Additionally, we asked for contributions from two of s+b’s authors: Harvard Business School professor and author Rosabeth Moss Kanter (who wrote the first major “idea piece” for Issue 1 of strategy+business, in 1995), and MIT lecturer and contributing editor Michael Schrage (one of the most consistently cogent connoisseurs of management ideas we know).

Some comment writers took us to task for superficiality. “To be honest,” Charles Handy wrote, “I think a lot of these are just glorified common sense.” Others accused us of rehashing old concepts (“That chestnut again?”) or carelessness with our categorization. “Some of these aren’t ‘ideas,’” noted Warren Bennis. “They’re strategy or action steps.” On the other hand, the survey enthralled many; as one anonymous voter put it, “There are a wealth of choices!” In the end, we were gratified that so many people (including Professors Handy and Bennis, who are keenly original and highly influential generators of management thinking in their own right) felt drawn to participate in our modest and informal, but ultimately thought-provoking, survey.

Here, then, are the winners — the ideas voted most likely to affect the way businesses, including your business, are conducted in the long run.

Top 10 Concepts

1. Execution (1,911 votes; 49.3 percent of the voters chose this concept). It’s not your strategic choices that drive success, but how well you implement them. As Larry Bossidy and Ram Charan pointed out in their book Execution, the most critical quality for managers is the ability to put ideas into action. Almost half the people who took the survey, in voting for this concept, explicitly affirmed the conceptual importance of a facility for well-disciplined action. “After 22 years in business,” wrote one anonymous correspondent, “across a number of roles, it continues to amaze me how many businesses fail at the basics.” To our readers, execution does not mean attention to numbers and metrics, but, as another correspondent wrote, “looking at your whole process, finding small ways to improve each part individually, really implementing the improvements, tracking the results to judge effectiveness, and then repeating the process.” (See “Execution: The Un-Idea,” by Rosabeth Moss Kanter, below.)

Execution: The Un-Idea
by Rosabeth Moss Kanter

Twenty-five years ago, management meant control. Managers put in controls, handed workers specifications, and established formal structures that ensured that people did what they were told. Companies operated alone, rather than being part of partner networks or plugging their people into informal relationships. It was an ineffective way to operate, especially after the information technology revolution took place, and to break out of it, companies needed management ideas. Innovation and intrapreneurship, Total Quality Management, Six Sigma, reengineering, networked organizations — these were all conceptual handles that allowed executives to justify and develop new breakthrough practices.

Today, companies don’t need new ideas in the same way they did 25 years ago (although they still need new business strategies). They’ve been through the paradigm shift. They have sustained tremendous improvement in productivity, effectiveness, and attentiveness to opportunities. That doesn’t mean they’ve been successful; indeed, as they’ve explored new ways of working, we have all learned how hard it is to put these ideas into practice. Executives routinely say that the hardest thing they do is improve people and corporate culture. It’s still much easier to let such matters slip, to neglect them. And in the past few years we’ve seen what happens as a result: Ethical standards, and our ability to groom future leaders, inevitably decline.

That’s why execution, or “making it happen,” is so important. Execution is the un-idea; it means having the mental and organizational flexibility to put new business models into practice, even if they counter what you’re currently doing. That ability is central to running a company right now. So rather than chasing another new management fad, or expecting still another “magic bullet” to come along, companies should focus on execution to effectively use the organizational tools we already have.

2. The Learning Organization (1,807; 46.6 percent). A learning organization is one that is deliberately designed to encourage everyone in it to keep thinking, innovating, collaborating, talking candidly, improving their capabilities, making personal commitments to their collective future, and thereby increasing the firm’s long-term competitive advantage. In putting forth this idea, we invoked such influential authors as John Seely Brown (The Social Life of Information, The Only Sustainable Edge); Arie de Geus (The Living Company); and Peter Senge (The Fifth Discipline, Presence). The high ranking of this concept and many of the comments about it confirm something we see out in the world. Even the most hard-nosed managers are aware that they can gain sustained competitive advantage only by developing the learning capacity of their people, separately and together. This doesn’t just mean sharing knowledge and skills; it means cultivating the habits of personal character that lead people, up and down the hierarchy, to become more capable. Organizations that help their people do that will reap enormous benefits in the future (or so almost half of the respondents seemed to feel).

3. Corporate Values (1,555; 40.1 percent). Companies that care about ethics, trust, citizenship, and even meaning and spirituality in the workplace (or that simply articulate their values carefully) perform better in the marketplace than companies that care just about “making money.” So goes the concept — but does it correspond with on-the-ground reality? Skeptics abounded: “After so many scandals,” wrote one anonymously, “I doubt if this principle is really true!” But the concept ranked third in the vote, and our articles about normative ethics (such as “The Value of Corporate Values,” by Reggie Van Lee, Lisa Fabish, and Nancy McGaw, s+b, Summer 2005) have consistently ranked among our most popular features. Respondents regarded scandals like those at Enron and Tyco as proof that “in the long run, corporate and social agendas must converge. Relationships matter.” At the same time, many questions remain unanswered about the nature and role of values in corporations. For example, as one respondent wrote: “Given the ever smaller number of individuals with respect for ethics and values, how are corporations and governments expected to develop them in their DNA?”

4. Customer Relationship Management (1,554; 40.1 percent). The cultivation of long-term relationships with customers, including awareness of their needs, leads to highly focused, capable companies that try to make consumers “part of the family.” Over the last decade, strategy+business has singled out such customer-centric organizations as Snap-on Tools, Virgin Atlantic Airways, Apple Computer, Starbucks, and the Boston Red Sox (a mention of which cost this idea the vote of one Yankees fan). Readers added more exemplars to that list: Sonic, Petco, Medline, HSBC’s First Direct division, and, with several mentions, Tesco. A large number of readers commented that despite 30 years of exhortations to “put the customer first,” many companies don’t manage to adequately meet their customers’ needs (or these days, give them an experience that reinforces their ongoing relationship with the brand or company). One fascinating qualification came from correspondent Malcolm Wicks: “Being customer-centric is not the same as CRM, which is more likely to be sales-centric. Being customer-centric is all about doing things that most benefit your targeted customers, even when there is no direct benefit for your company. As everything gets more commoditized, companies that are most customer-centric will be the most successful.”

5. Disruptive Technology (1,513; 39.0 percent). As Clayton Christensen noted in The Innovator’s Dilemma, technological innovation radically alters markets by undermining incumbent companies — which are vulnerable because their offerings are all tailored to the needs of their existing customers. Change feels like a betrayal of those customer relationships. Thus the makers of personal computers trumped Digital Equipment; Wal-Mart trumped Sears; and downloadable music is trumping the recording industry. “You can be doing everything for your customer,” one reader wrote, “and not see a market shift while it is occurring.” Professor Christensen’s idea lives on, to an extent, because of its two-part form. First, there is a warning: Your most cherished policies and practices — in this case, the hallowed sanctity of a successful customer relationship — can include the seeds of your undoing. Second, there is a way out: Preempt your own comfort zone, adopting a disruptive technology yourself before others beat you to it.

6. Leadership Development (1,432; 37.0 percent): You don’t have to rely on “putting the right people in place.” You can train all employees to be better choosers, better strategists, better managers, and in the end, better leaders. More than a third of the respondents were drawn to this because they saw leverage here: Companies can be both more effective and more responsible with smart leadership development practices in place (several people referred to emotional intelligence in this vein). Leadership is important, not because of the leaders’ actions in themselves, but because of the actions that everyone else takes on their behalf. (For an extended view of this argument, see “The Realist’s Guide to Moral Purpose,” by Nikos Mourkogiannis, s+b, Winter 2005.) Skeptics protested that leadership development, as a concept, represented a veneer masking the dog-eat-dog realities of corporate life: “The person at the top doesn’t want an organization full of leaders and enthusiastic achievers. This puts too much strain on the CEO and his or her ability to control.” And some, like Michael Schrage, pointed out the dangers of leadership as a concept. (See “Leadership: Its Time Has Gone,” by Michael Schrage, below.)

Leadership: Its Time Has Gone
by Michael Schrage

The bitterest business rivalry over the past decade hasn’t been the struggle between free trade and protectionism, between capital and labor, or between Microsoft and everyone else; the bitterest rivalry has been leadership versus management. Leadership won — but it’s been a Pyrrhic victory at best.

Harvard Business School’s Abraham Zaleznik articulated the difference in his classic 1977 essay “Managers and Leaders: Are They Different?” Proclaimed Professor Zaleznik, “Managers and leaders are two very different types of people. Managers’ goals arise out of necessities rather than desires; they excel at defusing conflicts between individuals or departments, placating all sides while ensuring that an organization’s day-to-day business gets done. Leaders, on the other hand, adopt personal, active attitudes toward goals. They look for the opportunities and rewards that lie around the corner, inspiring subordinates and firing up the creative process with their own energy. Their relationships with employees and coworkers are intense, and their working environment is often chaotic.”

With artfully hedged neutrality, Professor Zaleznik declared both management and leadership essential for organizational success. And thus he raised the critical business question: Which offered the superior return on investment?

Global markets provided an unambiguously clear opinion: They craved leadership. The Lord John Brownes, Jack Welches, Percy Barneviks, Carlos Ghosns, Andy Groves, and Bill Gateses are celebrated far more as innovative global leaders than as operational management exemplars. The leadership “brand” has become so powerful and compelling that successful managers are inherently considered “great leaders.” Ironically, however, people tagged as great leaders don’t have to be great business managers. Leadership is the value added; management is what gets automated, rightsized, or outsourced to Bangalore or Guangzhou.

Yet the bursting of the dot-com/telecom bubbles and the disgraceful collapses of Enron, Arthur Andersen, WorldCom, Tyco, Parmalat, etc., have cruelly constrained the brand trajectory of the leadership label. Where governance was once the longest and most elastic of leashes that let leadership stray with minimal attention, it is now a beautifully upholstered cage with 24-hour surveillance and legal advisors on call worldwide.

In other words, the global rise of governance as a business concern reflects the pathological failure of leaders to manage. Accountability, transparency, and oversight will mean something very different to CEOs and the boardroom over the next 10 years than they did in the past. A new ecology of interdependent management, leadership, and governance is arising. Striking a balance among these three imperatives will be a greater challenge in years to come. Will those who meet that challenge emerge with better leadership, better management, and better governance? Today’s “leaders” have lost the right to be the only ones with the authority and legitimacy to answer those questions.

7. Organizational DNA (1,315; 33.9 percent): Leaders can design an organization’s structures — incentives, decision rights, reporting relationships, and information flows — to induce high performance by aligning them with one another and the strategic goals of the enterprise. Elucidated in the book Results, by Gary L. Neilson and Bruce A. Pasternack, this idea attracted people who wanted to design organizational change without “sermonizing about behavior,” as one reader put it.

8. Strategy-Based Transformation (1,277; 33.0 percent): Beyond the “blank page” of reengineering, this is the redesign of processes and organizational structures, and the consequent cultural change, to fulfill the strategic goals of the enterprise. In an ideal universe, this would not even be a management concept, because, as one correspondent put it, “All company activities should be aligned to the enterprise strategy.”

9. Complexity Theory (1,187; 30.6 percent): Markets and businesses are complex systems that can’t be controlled mechanistically, but their emergent order can sometimes be anticipated. An understanding of the ways that complex systems evolve can help managers intervene and act more effectively. Over the years, complexity theory has come to mean a family of related, but sometimes contradictory, theories — including chaos theory, artificial life, probability theory, and even system dynamics — of intricate and nonlinear systems in which so many elements interrelate that the effects appear random and unpredictable, even though it is possible to trace patterns of causality and probability. This topic garnered several comments from enthusiasts (“The most profound thing to hit management science since the invention of money”) and at least one denigrator, who claimed that managers will never make a business decision based on statistical models. If the comments we read are true, then today’s practitioners of complexity theory are working behind the scenes, acting as computer-aided consiglieri, giving decision makers a more nuanced view of the potential hurricanes caused by the butterfly wings they flap.

10. Lean Thinking (1,183; 30.5 percent): This type of process and management innovation is exemplified by the Toyota production system. Employees use a heightened awareness of work flow and demand to cut waste, eliminate cost, boost quality, and customize mass production. Said one anonymous correspondent, “It combines with complexity theory, emergent behavior, wisdom of crowds, disruption, and agile thinking to extend into areas like R&D to redefine innovation practices. Management thinking will need to change to address these fertile intersections.”

The Value of Ideas

And the ideas that didn’t make it into the top 10? They are also noteworthy, in part because many of them are more specific, more technology oriented, and more closely related to management functions. It’s as if ideas and concepts aren’t deemed truly enduring unless they transcend mere functions, like R&D, IT, finance, corporate governance, marketing, and manufacturing.

Only one group seemed to disagree: the thought leaders. Their list of most enduring ideas looked like this:

The Thought Leaders’ Top 10
1. Disruptive Technology
2. China Inc.
3. Corporate Governance Reform
4. Corporate Values
5. Format Competition
6. The Learning Organization
7. Advantaged Supply Chain Management
8. Complexity Theory
9. Glocalization
10. Enterprise Resilience

Perhaps readers of strategy+business turn to management ideas for diagnosis: for help coming to terms with the problems that keep their companies from acting effectively. Thought leaders, on the other hand, seem more interested in prognosis: the future trend-gazing that makes people much better strategists. Our contributors look outward; our readers look inward, it seems.

In the end, a really good business idea has five key qualities. (1) It is timely: It addresses, in a new, compelling way, an issue that is important to people right now. (It’s no coincidence, for example, that supply chain management became an important concept just as manufacturing became much more global.) (2) It has explanatory power: It reveals the hidden patterns and interrelationships that shape the phenomena we see, and that other theories or disciplines have not fully explained. (3) It has pragmatic value: It can be put into practice to produce replicable results. (Even relatively “soft” concepts like organizational learning have a nuts-and-bolts edge, helping to build human capabilities.) (4) It has a robust empirical foundation: It can be tested with real-world experience, and ideally with measurable data, and can survive theoretical challenge. (5) It has a natural constituency: A group of key people are ready to hear it.

I think all the ideas listed in our top 10 have those qualities. Or at least I hope so, because the stakes are high. Ideas about business, from the invention of accounting to the “invisible hand” of Adam Smith to the thinking of present-day economists, have had impact not just in the business world, but beyond. If these are the most enduring business ideas, then the rest of the world will be shaped accordingly.

--------------------------------------------------------------------------------
Art Kleiner (kleiner_art@strategy-business.com) is editor-in-chief of strategy+business.

Sunday, December 11, 2005

Berlin revisited: the hedgehog vs. the fox in forecasting.

From the New Yorker. link to original article.

EVERYBODY’S AN EXPERT
by LOUIS MENAND
Putting predictions to the test.
Issue of 2005-12-05
Posted 2005-11-28

Prediction is one of the pleasures of life. Conversation would wither without it. “It won’t last. She’ll dump him in a month.” If you’re wrong, no one will call you on it, because being right or wrong isn’t really the point. The point is that you think he’s not worthy of her, and the prediction is just a way of enhancing your judgment with a pleasant prevision of doom. Unless you’re putting money on it, nothing is at stake except your reputation for wisdom in matters of the heart. If a month goes by and they’re still together, the deadline can be extended without penalty. “She’ll leave him, trust me. It’s only a matter of time.” They get married: “Funny things happen. You never know.” You still weren’t wrong. Either the marriage is a bad one—you erred in the right direction—or you got beaten by a low-probability outcome.

It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton; $35), that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.

“Expert Political Judgment” is not a work of media criticism. Tetlock is a psychologist—he teaches at Berkeley—and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate.

Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
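
Tetlock's actual scoring rules are more elaborate than this, but the calibration test just described can be sketched in a few lines of Python (the forecasts below are invented for illustration): bucket forecasts by the probability the expert assigned, then compare that figure with how often those events actually occurred.

    from collections import defaultdict

    # (stated probability, whether the event came to pass) -- invented data
    forecasts = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
                 (0.3, False), (0.3, True), (0.3, False), (0.3, False)]

    buckets = defaultdict(list)
    for p, happened in forecasts:
        buckets[p].append(happened)

    # A well-calibrated forecaster's 90% calls come true about 90% of the time
    for p, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        print(f"said {p:.0%}, happened {observed:.0%}")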

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”

People who are not experts in the psychology of expertise are likely (I predict) to find Tetlock’s results a surprise and a matter for concern. For psychologists, though, nothing could be less surprising. “Expert Political Judgment” is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse. In one study, college counsellors were given information about a group of high-school students and asked to predict their freshman grades in college. The counsellors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate. There are also many studies showing that expertise and experience do not make someone a better reader of the evidence. In one, data from a test used to diagnose brain damage were given to a group of clinical psychologists and their secretaries. The psychologists’ diagnoses were no better than the secretaries’.

The experts’ trouble in Tetlock’s study is exactly the trouble that all human beings have: we fall in love with our hunches, and we really, really hate to be wrong. Tetlock describes an experiment that he witnessed thirty years ago in a Yale classroom. A rat was put in a T-shaped maze. Food was placed in either the right or the left transept of the T in a random sequence such that, over the long run, the food was on the left sixty per cent of the time and on the right forty per cent. Neither the students nor (needless to say) the rat was told these frequencies. The students were asked to predict on which side of the T the food would appear each time. The rat eventually figured out that the food was on the left side more often than the right, and it therefore nearly always went to the left, scoring roughly sixty per cent—D, but a passing grade. The students looked for patterns of left-right placement, and ended up scoring only fifty-two per cent, an F. The rat, having no reputation to begin with, was not embarrassed about being wrong two out of every five tries. But Yale students, who do have reputations, searched for a hidden order in the sequence. They couldn’t deal with forty-per-cent error, so they ended up with almost fifty-per-cent error.
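
The arithmetic behind those grades is worth spelling out (our gloss, not the article's): always choosing the more frequent side is right 60 per cent of the time, while matching the 60/40 frequencies is right only 0.6 × 0.6 + 0.4 × 0.4 = 52 per cent of the time. A short simulation makes the gap concrete:

    import random

    trials = 100_000
    rat_score = student_score = 0
    for _ in range(trials):
        food_left = random.random() < 0.6   # food is on the left 60% of the time
        rat_score += food_left              # the rat always picks the better side
        guess_left = random.random() < 0.6  # students "match" the 60/40 pattern
        student_score += (guess_left == food_left)

    print(rat_score / trials)      # ~0.60 -- the rat's passing grade
    print(student_score / trials)  # ~0.52, i.e. 0.6*0.6 + 0.4*0.4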

The expert-prediction game is not much different. When television pundits make predictions, the more ingenious their forecasts the greater their cachet. An arresting new prediction means that the expert has discovered a set of interlocking causes that no one else has spotted, and that could lead to an outcome that the conventional wisdom is ignoring. On shows like “The McLaughlin Group,” these experts never lose their reputations, or their jobs, because long shots are their business. More serious commentators differ from the pundits only in the degree of showmanship. These serious experts—the think tankers and area-studies professors—are not entirely out to entertain, but they are a little out to entertain, and both their status as experts and their appeal as performers require them to predict futures that are not obvious to the viewer. The producer of the show does not want you and me to sit there listening to an expert and thinking, I could have said that. The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious.

Tetlock’s experts were also no different from the rest of us when it came to learning from their mistakes. Most people tend to dismiss new information that doesn’t fit with what they already believe. Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it. The same deficiency leads liberals to read only The Nation and conservatives to read only National Review. We are not natural falsificationists: we would rather find more reasons for believing what we already believe than look for reasons that we might be wrong. In the terms of Karl Popper’s famous example, to verify our intuition that all swans are white we look for lots more white swans, when what we should really be looking for is one black swan.

Also, people tend to see the future as indeterminate and the past as inevitable. If you look backward, the dots that lead up to Hitler or the fall of the Soviet Union or the attacks on September 11th all connect. If you look forward, it’s just a random scatter of dots, many potential chains of causation leading to many possible outcomes. We have no idea today how tomorrow’s invasion of a foreign land is going to go; after the invasion, we can actually persuade ourselves that we knew all along. The result seems inevitable, and therefore predictable. Tetlock found that, consistent with this asymmetry, experts routinely misremembered the degree of probability they had assigned to an event after it came to pass. They claimed to have predicted what happened with a higher degree of certainty than, according to the record, they really did. When this was pointed out to them, by Tetlock’s researchers, they sometimes became defensive.

And, like most of us, experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely. If a prediction needs two independent things to happen in order for it to be true, its probability is the product of the probability of each of the things it depends on. If there is a one-in-three chance of x and a one-in-four chance of y, the probability of both x and y occurring is one in twelve. But we often feel instinctively that if the two events “fit together” in some scenario the chance of both is greater, not less. The classic “Linda problem” is an analogous case. In this experiment, subjects are told, “Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations.” They are then asked to rank the probability of several possible descriptions of Linda today. Two of them are “bank teller” and “bank teller and active in the feminist movement.” People rank the second description higher than the first, even though, logically, its likelihood is smaller, because it requires two things to be true—that Linda is a bank teller and that Linda is an active feminist—rather than one.
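
The rule is purely mechanical, and a short check (using Menand's own numbers) shows why the conjunction can never be the better bet:

    p_x, p_y = 1 / 3, 1 / 4
    p_both = p_x * p_y   # independent probabilities multiply
    print(p_both)        # 0.0833... = one in twelve

    # In Linda's terms: however likely "feminist" is, the conjunction
    # "bank teller AND feminist" can never exceed "bank teller" alone.
    assert p_both <= min(p_x, p_y)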

Plausible detail makes us believers. When subjects were given a choice between an insurance policy that covered hospitalization for any reason and a policy that covered hospitalization for all accidents and diseases, they were willing to pay a higher premium for the second policy, because the added detail gave them a more vivid picture of the circumstances in which it might be needed. In 1982, an experiment was done with professional forecasters and planners. One group was asked to assess the probability of “a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983,” and another group was asked to assess the probability of “a Russian invasion of Poland, and a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983.” The experts judged the second scenario more likely than the first, even though it required two separate events to occur. They were seduced by the detail.

It was no news to Tetlock, therefore, that experts got beaten by formulas. But he does believe that he discovered something about why some people make better forecasters than other people. It has to do not with what the experts believe but with the way they think. Tetlock uses Isaiah Berlin’s metaphor from Archilochus, from his essay on Tolstoy, “The Hedgehog and the Fox,” to illustrate the difference. He says,

Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.

A hedgehog is a person who sees international affairs as ultimately determined by a single bottom-line force: balance-of-power considerations, or the clash of civilizations, or globalization and the spread of free markets. A hedgehog is the kind of person who holds a great-man theory of history, according to which the Cold War does not end if there is no Ronald Reagan. Or he or she might adhere to the “actor-dispensability thesis,” according to which Soviet Communism was doomed no matter what. Whatever it is, the big idea, and that idea alone, dictates the probable outcome of events. For the hedgehog, therefore, predictions that fail are only “off on timing,” or are “almost right,” derailed by an unforeseeable accident. There are always little swerves in the short run, but the long run irons them out.

Foxes, on the other hand, don’t see a single determining explanation in history. They tend, Tetlock says, “to see the world as a shifting mixture of self-fulfilling and self-negating prophecies: self-fulfilling ones in which success breeds success, and failure, failure but only up to a point, and then self-negating prophecies kick in as people recognize that things have gone too far.”

Tetlock did not find, in his sample, any significant correlation between how experts think and what their politics are. His hedgehogs were liberal as well as conservative, and the same with his foxes. (Hedgehogs were, of course, more likely to be extreme politically, whether rightist or leftist.) He also did not find that his foxes scored higher because they were more cautious—that their appreciation of complexity made them less likely to offer firm predictions. Unlike hedgehogs, who actually performed worse in areas in which they specialized, foxes enjoyed a modest benefit from expertise. Hedgehogs routinely over-predicted: twenty per cent of the outcomes that hedgehogs claimed were impossible or nearly impossible came to pass, versus ten per cent for the foxes. More than thirty per cent of the outcomes that hedgehogs thought were sure or near-sure did not, against twenty per cent for foxes.

The upside of being a hedgehog, though, is that when you’re right you can be really and spectacularly right. Great scientists, for example, are often hedgehogs. They value parsimony, the simpler solution over the more complex. In world affairs, parsimony may be a liability—but, even there, there can be traps in the kind of highly integrative thinking that is characteristic of foxes. Elsewhere, Tetlock has published an analysis of the political reasoning of Winston Churchill. Churchill was not a man who let contradictory information interfere with his idées fixes. This led him to make the wrong prediction about Indian independence, which he opposed. But it led him to be right about Hitler. He was never distracted by the contingencies that might combine to make the elimination of Hitler unnecessary.

Tetlock also has an unscientific point to make, which is that “we as a society would be better off if participants in policy debates stated their beliefs in testable forms”—that is, as probabilities—“monitored their forecasting performance, and honored their reputational bets.” He thinks that we’re suffering from our primitive attraction to deterministic, overconfident hedgehogs. It’s true that the only thing the electronic media like better than a hedgehog is two hedgehogs who don’t agree. Tetlock notes, sadly, a point that Richard Posner has made about these kinds of public intellectuals, which is that most of them are dealing in “solidarity” goods, not “credence” goods. Their analyses and predictions are tailored to make their ideological brethren feel good—more white swans for the white-swan camp. A prediction, in this context, is just an exclamation point added to an analysis. Liberals want to hear that whatever conservatives are up to is bound to go badly; when the argument gets more nuanced, they change the channel. On radio and television and the editorial page, the line between expertise and advocacy is very blurry, and pundits behave exactly the way Tetlock says they will. Bush Administration loyalists say that their predictions about postwar Iraq were correct, just a little off on timing; pro-invasion liberals who are now trying to dissociate themselves from an adventure gone bad insist that though they may have sounded a false alarm, they erred “in the right direction”—not really a mistake at all.

The same blurring characterizes professional forecasters as well. The predictions on cable news commentary shows do not have life-and-death side effects, but the predictions of people in the C.I.A. and the Pentagon plainly do. It’s possible that the psychologists have something to teach those people, and, no doubt, psychologists are consulted. Still, the suggestion that we can improve expert judgment by applying the lessons of cognitive science and probability theory belongs to the abiding modern American faith in expertise. As a professional, Tetlock is, after all, an expert, and he would like to believe in expertise. So he is distressed that political forecasters turn out to be as unreliable as the psychological literature predicted, but heartened to think that there might be a way of raising the standard. The hope for a little more accountability is hard to dissent from. It would be nice if there were fewer partisans on television disguised as “analysts” and “experts” (and who would not want to see more foxes?). But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself.