Saturday, January 14, 2006

Still the strangest society.

January 15, 2006

Shutting Themselves In


One morning when he was 15, Takeshi shut the door to his bedroom, and for the next four years he did not come out. He didn't go to school. He didn't have a job. He didn't have friends. Month after month, he spent 23 hours a day in a room no bigger than a king-size mattress, where he ate dumplings, rice and other leftovers that his mother had cooked, watched TV game shows and listened to Radiohead and Nirvana. "Anything," he said, "that was dark and sounded desperate."

I met Takeshi outside Tokyo not long ago, shortly after he finally left his parents' house to join a job-training program called New Start. He was wiry, with a delicate face, tousled, dyed auburn hair and the intensity of a hungry college freshman. "Don't laugh, but musicians really helped me, especially Radiohead," he told me through an interpreter, before scribbling some lyrics in English in my notebook. "That's what encouraged me to leave my room."

The night Takeshi and I met, we were at one of New Start's three-times-a-week potluck dinners at a community center where the atmosphere was like a school dorm's - a dartboard nailed to the wall over a large dining table, a worn couch and overstuffed chairs in front of a TV blaring a soccer match. About two dozen guys lounged on chairs or sat on tatami mats, slurping noodles and soup and talking movies and music. Most were in their 20's. And many had stories very much like Takeshi's.

Next to us was Shuichi, who, like Takeshi, asked that I use only his first name to protect his privacy. He was 20, wore low-slung jeans on his lanky body and a 1970's Rod Stewart shag and had dreams of being a guitarist. Three years ago, he dropped out of high school and became a recluse for a miserable year before a counselor persuaded him to join New Start. Behind him a young man sat on the couch wearing small wire-frame glasses and a shy smile. He ducked his head as he spoke, and his voice was so quiet that I had to lean in to hear him. After years of being bullied at school and having no friends, Y.S., who asked to be identified by his initials, retreated to his room at age 14, and proceeded to watch TV, surf the Internet and build model cars - for 13 years. When he finally left his room one April afternoon last year, he had spent half of his life as a shut-in.

Like Takeshi and Shuichi, Y.S. suffered from a problem known in Japan as hikikomori, which translates as "withdrawal" and refers to a person sequestered in his room for six months or longer with no social life beyond his home. (The word is a noun that describes both the problem and the person suffering from it and is also an adjective, like "alcoholic.") Some hikikomori do occasionally emerge from their rooms for meals with their parents, late-night runs to convenience stores or, in Takeshi's case, once-a-month trips to buy CD's. And though female hikikomori exist and may be undercounted, experts estimate that about 80 percent of the hikikomori are male, some as young as 13 or 14 and some who live in their rooms for 15 years or more.

South Korea and Taiwan have reported a scattering of hikikomori, and isolated cases may have always existed in Japan. But only in the last decade and only in Japan has hikikomori become a social phenomenon. Like anorexia, which has been largely limited to Western cultures, hikikomori is a culture-bound syndrome that thrives in one particular country during a particular moment in its history.

As the problem has become more widespread in Japan, an industry has sprung up around it. There are support groups for parents, psychologists who specialize in it (including one who counsels shut-ins via the Internet) and several halfway programs like New Start, offering dorms and job training. For all the attention, though, hikikomori remains confounding. The Japanese public has blamed everything from smothering mothers to absent, overworked fathers, from school bullying to the lackluster economy, from academic pressure to video games. "I sometimes wonder whether or not I understand this issue," confessed Shinako Tsuchiya, a member of Parliament, one afternoon in her Tokyo office. She has led a study group on hikikomori, but most of her colleagues aren't interested, and the government has yet to allocate funds. "They don't understand how serious it is."

That may be in part because the scope of the problem is frustratingly elusive. A leading psychiatrist claims that one million Japanese are hikikomori, which, if true, translates into roughly 1 percent of the population. Even other experts' more conservative estimates, ranging between 100,000 and 320,000 sufferers, are alarming, given how dire the consequences may be. As a hikikomori ages, the odds that he'll re-enter the world decline. Indeed, some experts predict that most hikikomori who are withdrawn for a year or more may never fully recover. That means that even if they emerge from their rooms, they either won't get a full-time job or won't be involved in a long-term relationship. And some will never leave home. In many cases, their parents are now approaching retirement, and once they die, the fate of the shut-ins - whose social and work skills, if they ever existed, will have atrophied - is an open question.

That isn't a problem just for the hikikomori and their families but also for a country that has been struggling with a sagging economy, a plummeting birth rate and what has been called a youth crisis. The rate of "school refusal" (kids who skip school for one month or more a year, which is sometimes a precursor to hikikomori) has doubled since 1990. And along with hikikomori sufferers, hundreds of thousands of other young men and women are neither working nor in school. After 15 years of sluggish growth, the full-time salaryman jobs of the previous generation have withered, and in their places are often part-time jobs or no jobs and a sense of hopelessness among many Japanese about the future.

In addition to the economy, Japanese culture and sex roles play a strong part in the hikikomori phenomenon. "Men start to feel the pressure in junior high school, and their success is largely defined in a couple of years," said James Roberson, a cultural anthropologist at Tokyo Jogakkan College and an editor of the book "Men and Masculinities in Contemporary Japan." "Hikikomori is a resistance to that pressure. Some of them are saying: 'To hell with it. I don't like it and I don't do well.'" Also, this is a society where kids can drop out. In Japan, children commonly live with their parents into their 20's, and despite the economic downturn, plenty of parents can afford to support their children indefinitely - and do. As one hikikomori expert put it, "Japanese parents tell their children to fly while holding firmly to their ankles."

One result is a new underclass of young men who can't or won't join the full-time working world and who are a stark counterpoint to Japan's long-running image as a country bursting with industrious salarymen. "We used to believe everyone was equal," said Noki Futagami, the founder of New Start. "But the gap is growing. I suspect there will be a bipolarization of this society. There will be the group of people who can be in the global world. And then there will be others, like the hikikomori. The ones who cannot be in that world."

In the mid-1980's, young men began showing up at Dr. Tamaki Saito's office who were lethargic and uncommunicative and spent most of their days in their rooms. "I didn't have a name for it," Saito told me one Friday evening at Sofukai Sasaki Hospital outside Tokyo, where he's the medical director. Saito is soft-spoken with sleepy eyes and thick black hair that he brushes off his forehead as he talks. For the last decade he has been Japan's reigning expert on hikikomori, and his office shelves are filled with books he has written on the subject, including "How to Rescue Your Child From Hikikomori."

"Initially, I diagnosed it as a type of depression or personality disorder, or schizophrenia," Saito went on to say. But as he treated an increasing number of patients with similar symptoms, he used the term hikikomori for the problem. Soon after, the media latched on to the phenomenon, dubbing the shut-ins "the lost generation," "the missing million" and "the ultimate in social parasitism" and making hikikomori the focus of dozens of books, magazine articles and films - including a documentary, "Home," in which a filmmaker tracked the life of his shut-in brother. At the same time, hikikomori were making headlines for sensationalistic crimes, like the kidnapping of a 9-year-old girl by a shut-in who hid her in his room for almost a decade.

In reality, though, most hikikomori are too trapped by inertia to leave their houses, much less plot violent schemes. Instead, they are more likely to suffer depression or obsessive-compulsive behaviors. In some cases these psychological problems lead to hikikomori. But often they are symptoms - a consequence of spending months cooped up inside their rooms and inside their heads. One hikikomori took showers several hours a day and wore gloves as thick as an astronaut's to ward off germs (he eventually joined a halfway program, threw away the gloves and got a job), while another scrubbed the tiles in his family's shower for hours at a time. "Our water bills were 10 times what they'd normally be," his brother told me. "It's as if he was trying to clean the dirt in his mind and his heart."

Saito, who has treated more than 1,000 hikikomori patients, views the problem as largely a family and social disease, caused in part by the interdependence of Japanese parents and children and the pressure on boys, eldest sons in particular, to excel in academics and the corporate world. Hikikomori often describe years of rote classroom learning followed by afternoons and evenings of intense cram school to prepare them for high-school or university entrance exams. Today's parents are more demanding because Japan's declining birth rate means they have fewer children on whom to push their hopes, says Mariko Fujiwara, director of research at the Hakuhodo Institute of Life and Living in Tokyo. If a kid doesn't follow a set path to an elite university and a top corporation, many parents - and by extension their children - view it as a failure. "After World War II," Fujiwara told me, "Japanese only knew a certain kind of salaryman future, and now they lack the imagination and the creativity to think about the world in a new way."

Those post-World War II salarymen who did work so tirelessly were at least rewarded with the security of lifetime employment. "It was simple in my youth - you went to high school, then to the University of Tokyo," says Noki Futagami of New Start, referring to Japan's most prestigious university. "And then you got a job in a major corporation. That's where you grew up. The corporation took care of you for the rest of your life." Now in its place is a leaner global economy that demands the kinds of skills - independent thinking, communication, entrepreneurship - that many parents and schools don't teach. Boys have spent their young lives being educated for a work system that has shriveled, leaving many feeling inadequate and stuck.

Many hikikomori also describe miserable school years when they didn't, or couldn't, conform to the norm. They were bullied for being too fat or too shy or even for being better than everyone else at sports or music. As the Japanese saying goes, "The nail that sticks out gets hammered in." One hikikomori was a victim of bullying in fifth grade because he excelled in baseball without having played as long as his teammates. His father admitted that he did nothing to help him. "We told him to handle it himself. We thought he was stronger than he was." Fujiwara says that urban Japanese parents lead increasingly isolated lives - removed from the extended family and tight-knit communities of previous generations - and simply don't know how to teach their children to communicate and negotiate relationships with peers.

In other societies the response from many youths would be different. If they didn't fit into the mainstream, they might join a gang or become a Goth or be part of some other subculture. But in Japan, where uniformity is still prized and reputations and outward appearances are paramount, rebellion comes in muted forms, like hikikomori. Any urge a hikikomori might have to venture into the world to have a romantic relationship or sex, for instance, is overridden by his self-loathing and the need to shut his door so that his failures, real or perceived, will be cloaked from the world. "Japanese young people are considered the safest in the world because the crime rate is so low," Saito said. "But I think it's related to the emotional state of people. In every country, young people have adjustment disorders. In Western culture, people are homeless or drug addicts. In Japan, it's apathy problems like hikikomori."

One Friday afternoon not long ago, Yoshimi Kawakami waited at a doorstep near Kyoto, expecting to be stood up. It has happened in the snow in Tokyo and in the heat of Kyoto summer afternoons. She has waited for two hours or more, fueled by the hope that - this time - someone will answer.

It is part of being a "rental sister," as the outreach counselors are known at New Start. Rental sisters are often a hikikomori's first point of contact and his route back to the outside world. (There are a few rental brothers, too, but "women are softer, and hikikomori respond better to them," one counselor told me.)

The relationship usually begins after a parent telephones New Start and arranges for consultations and routine visits from a rental sister, which costs about $8,000 a year. The rental sister then writes a letter to the hikikomori, introducing herself and the program. "I never read it; I threw it away," said Y.S., the 28-year-old with the shy smile I'd met at New Start's potluck. When Kawakami arrived at his house in Chiba, near Tokyo, for the first time, Y.S. opened his bedroom door long enough to tell her, "Please, go home."

It was a typical first meeting. "We'll just talk through the door," Kumi Hashizume, a counselor at New Start, said. "And tell them our interests and hobbies. Very rarely do we get any words back. And if they do speak, it's very stressed." Months can go by before a hikikomori opens his door and more months before he ventures out with a rental sister to the park or to the movies. The goal is that eventually he will enroll in New Start and live in the program's dorms and participate in its job-training programs, at a day-care center, a coffee shop, a restaurant.

Y.S. was not going to be one of Kawakami's easier cases. On her second visit, Y.S. refused again to open the door. "I told him it was snowing and I might have to spend the night unless he came out to talk to me," she recalled. Kawakami, who is 31 and girlish in a miniskirt, white platforms and sea green eye shadow, has a playful, cajoling manner with hikikomori clients, as if she were an older sister prodding an obstinate kid brother. "He came out that day and sat stiff-straight in the living room for two hours while I and another person from New Start talked to him about ourselves and the program," she told me through an interpreter. By the fifth visit, Y.S. still refused to talk. So Kawakami asked him to write a letter about himself. Y.S. no longer remembered what he wrote, but Kawakami did: he told her his birth date and that he loved making plastic model cars. He wrote: "I don't think the situation is good, but I don't know how to solve it. This might be a chance to change it. But I don't know if I can do it." When Kawakami asked him to create a car for some children at a day-care center, two weeks later he gave her one, meticulously detailed and painted. "He seemed so pleased," she said. "It was as if he'd never been asked to do something for someone else before. He was sitting in his room all day where nothing was expected of him, and he did nothing to show his value." Her visits continued every other week for six months, and she encouraged Y.S. to set a goal to leave home before his next birthday. On the day before he turned 28, he packed two boxes into Kawakami's car, and they drove two hours to New Start.

Now, four months later, Kawakami stood in front of the house of a new client, a 26-year-old former engineering grad student named Hiroshi, who, for reasons that were unclear to his parents and Kawakami, stopped attending school two years ago. He goes out occasionally - no one knows where - particularly, it seemed, when Kawakami was scheduled to visit.

While the stereotype of a hikikomori is a man who never leaves his room, many shut-ins do venture out once a day or once a week to a konbini, as a 24-hour convenience store is known in Japan. There, a hikikomori can find a to-go bento box for breakfast, lunch and dinner, which means he doesn't have to rely on his mother to cook, and he doesn't have to suffer through a meal in public. And for hikikomori, who tend to live on a reversed clock, waking around noon and going to sleep in the early-morning hours, the konbini is a safe and anonymous late-night choice: the cashier doesn't make small talk, and the salarymen in their suits and schoolchildren in their uniforms - reminders of the life the hikikomori is not living - are asleep at home.

Konbini are just one of the accouterments that facilitate the hikikomori life. They don't cause hikikomori any more than do the TV's and computers and video games that hikikomori rely on to fill out the tedious hours. But if objects can be enablers, to borrow from recovery lingo, then modern technology would be among them, as would the konbini, where, like nocturnal animals, hikikomori grab what they need to feed their sheltered lives and quickly return home before the morning light cracks and the working world reappears.

Back at Hiroshi's house, no one knew whether he was at a konbini or elsewhere or when he'd be back. "I told him you were coming this week, and he's been out every day," his mother, Mieko, said, greeting Kawakami at the door. Mieko and her husband, Kazuo, are the parents of four grown children and are still struggling with how to get their eldest son out of their house. Hiroshi rarely speaks to either of them, and though his bedroom is 15 feet from the kitchen, he has had only two meals with them in the last two years. Mieko would gladly cook three meals a day for him if he'd eat them. "It's very hard for me as a mother," she said. She occasionally finds empty packages of fermented soybeans in the kitchen garbage can - one clue that he eats at all.

At their dining-room table that afternoon, Mieko and Kazuo offered a few theories about Hiroshi's retreat. He was embarrassed about an oral presentation in graduate school; he felt he had performed terribly. She and her husband expected a lot from him - "maybe too much," Mieko said. He was smart, but they didn't praise him or express affection. And they pushed him to attend a junior high school he disliked. "We forced him to study hard," Mieko said, "and our relationship wasn't good after that."

As she spoke, the front door shut and Hiroshi slinked by the dining room, disappearing into his bedroom. Mieko raised her eyebrows and exchanged a glance with Kawakami, who took a deep breath and followed him in.

"You knew I was coming! And you left!" she teased him in a lilting voice, as she sat down on a tatami mat across from him. He was tall and skinny, dressed in chinos and sneakers and a button-down shirt, with the sleeves rolled above his elbows. He crouched on the floor and seemed distracted, as if he had come from someplace important and had an equally important appointment to get to shortly.

By Japanese standards, his room was enormous, with a wall of delicate shoji screens leading to a rock garden. But it was hard to imagine what he did there all day. There were no stacks of manga, the popular Japanese comic books, no DVD's, no computer games, all things found in the rooms of most hikikomori. The TV was broken, and the hard drive was missing from his computer. There were a few papers on his desk, including a newsletter from New Start that Kawakami brought on her last visit. Otherwise, the only evidence that this was a hikikomori's room were three holes in the wall - the size of fists. Shut-ins often describe punching their walls in a fit of anger or frustration at their parents or at their own lives. The holes were suggestive too of the practice of "cutting" among American adolescent girls. Both acts seemed to be attempts to infuse feeling into a numb life.

"You stay in this room all day? What do you do?" Kawakami needled Hiroshi.

Hiroshi looked away and folded his legs under him.

"I don't know what I do. Nothing important. Is it so bad to stay in your room?"

She told him she wanted him to visit New Start. Next week? she asked. The week after? He didn't say no but wouldn't say yes. Instead, he rubbed his arm, refolded his shirt sleeve, crossed his legs again. He looked out the window, up at the ceiling, then glanced back at Kawakami before shifting his eyes away again. He was like a trapped bird, curious about her, but also, it seemed, scared and eager to flee.

Still, Hiroshi interacted with her and was engaged. The exchange between them was very different from one I saw the day before between Hajime Kitazawa, a rental brother, and a client named Eisuke, whom he has visited every week for five months. Eisuke has been a shut-in for four years and rarely responds with more than a word or two. The biggest breakthrough came one day when Eisuke turned on his PlayStation 2 and set out a joystick - an invitation to play. But Kitazawa later lost ground when he asked him about his plans for the future. Eisuke wouldn't speak or make eye contact for more than 30 minutes. "I dropped the subject," Kitazawa said. "Then we went back to playing the game, and he started to react again."

Back in Hiroshi's room, that same question didn't seem as risky. "I don't have anything I want to do," he said. "That's why I'm in this trouble. I missed the chance. I was in graduate school while most people were getting jobs. If I'd gone to work it would have been good."

Hiroshi didn't say why working would have been better or why it was too late at age 26 to start a career. He said only that he wouldn't leave the house "until I know exactly what I want to do." It was typical hikikomori thinking: better to stay in your room than risk venturing into the world and failing.

As she walked back to the train station that evening, though, Kawakami said she felt hopeful that Hiroshi would come to visit New Start soon, even though about 30 percent of rental sisters' clients won't leave their rooms and another 10 percent of those who do join the program eventually return to the hikikomori life. "We usually limit our visits to a year, but if we see progress, we'll keep coming back," a counselor said. One rental sister visited a 17-year-old for more than 18 months before he finally joined the program. And in one of the most extreme cases, Takeshi Watanabe of the Tokyo Mental Health Academy counseled a hikikomori for 10 years - 500 visits - until he persuaded him to leave home. He has since graduated from a university, works part time and last summer vacationed in Spain.

On a Saturday afternoon thick with Tokyo humidity, about 30 mothers and fathers milled around the hallway of a community center in a Tokyo suburb. Many were retirees, and under different circumstances they might have been on the golf course or enrolled in the center's swing-dancing class. Instead, at a time when they expected their sons and daughters to be married and having children of their own, once a month they spend a weekend afternoon at a hikikomori parents' support group. "I'm 69 and I would be retired, but hikikomori is expensive," said Kouhei Nishizuka, a father with neatly combed silver hair and the stooped shoulders of someone who spends too much time at his desk.

He has been supporting his 28-year-old daughter, among the minority of female hikikomori, for the last eight years. "I have been to hospitals; I've read books," he said as he sat in the lobby holding a folder thick with newsletters and reports from the support group. "My wife and I put her in the hospital, but doctors couldn't help her. So what do we do? I don't know what caused it. She wanted to be an animator, but found it difficult to find a job." Over time, he said, she lost more and more weight. "I worried about her." So he asked her to move home. "Then she was worried about people in the neighborhood seeing her, and that's when it started. I think she hates to be out because she doesn't want to be compared to the neighbors. I'm trying to get her to gain energy to do something."

By the time parents seek help, often their child has been shut in for a year or more. "When they call," Dr. Saito said, "I offer them three choices: 1) Come to me for counseling; 2) Kick your child out; 3) Accept your child's state and be prepared to take care of him for the rest of your life. They choose Option 1." He also offers poignantly simple parenting tips, like not leaving dinner at a child's doorstep. "You make dinner and call him to the table, and if he doesn't come then let him fend for himself." In addition to meals, parents often provide monetary allowances for their adult child, and in rare cases, if a child has become verbally or physically abusive, parents move out, leaving their home to the shut-in.

"They do everything for the child," one counselor said. "When we take a step forward, the parents get afraid. They don't want turbulence."

But some parents also genuinely fear that their children won't survive without them. "Maybe we should have kicked him out," Mieko, Hiroshi's mother, told me. "But we couldn't in the beginning. And now it's too late. I don't know how he'd take care of himself. He doesn't have the skills. We'd just end up supporting him." Meanwhile, her daughter wants to marry, and Mieko worries that her hikikomori son hurts her chances. "People check a family's background," she said. Reputation is everything.

Which means it takes some courage to pick up the telephone and call New Start or Saito or Sadatsugu Kudo, who runs an organization called Youth Support Center, which fields about 1,500 calls a year from families seeking help. "You have to understand the relationship between parent and child in Japan," he says. "It is so unique. Most parents feel that hikikomori is a failure of their child-rearing. And consulting someone about it is getting rid of your responsibility as a parent; it's like getting rid of your child."

A few weeks after I first met Takeshi, the Radiohead fan at the New Start potluck, I returned to visit. Takeshi was getting off work at the program's coffee shop and offered to show me his room, a few blocks away in a low-slung concrete building with linoleum floors. The bedroom was no bigger than 8 feet by 8 feet and decorated with little more than a single-bed futon, CD's and a guitar. Next door was a common room, and across from it was a closed door with a small stream of light underneath. Takeshi pointed to it and said: "He's a little strange. We very rarely talk. He buys his own dinner and eats in his room all the time."

After Takeshi spent four years in his childhood bedroom, he was finally motivated to leave, he said, by his frustration with himself and by the Radiohead lyrics: "This is my final fit, my final bellyache." Then he said: "It's not hopeful, but I learned that the world is not such a good place, and regardless we have to move on. That caught my heart." He re-enrolled in high school, and on that first day out, his skin was pale from being inside for so long; he didn't shave or brush his teeth; his pants and white T-shirt were dirty. "I'd forgotten all the basic rules." None of the students talked to him, a pattern that would more or less continue for the next two years. It wasn't until he graduated and found a job cleaning offices where his co-workers were in their 50's and 60's - "These people were adults and didn't have a bias about me and my background" - that he had conversations again. Still, when he wasn't at work, he was home, where his mother was worried enough that she eventually called New Start. And after meeting a rental sister once, he joined the program.

The night of my visit, Takeshi took me to the Wednesday potluck already under way. The room was bustling with more than 20 people and several conversations going on at once. A couple of guys were sitting alone, and some seemed much younger than their years, as if frozen in the time they first retreated to their rooms. But overall this group included New Start's most promising clients; another 40 percent don't come to the communal meals at all. And then there are the hikikomori who never cross the doorway of New Start or places like it. The director of one parents' support group receives letters from hikikomori in their 40's who have been withdrawn for a decade or more. "I tell them about halfway programs," he said, "but they never go."

It was 9 p.m. and the potluck was winding down. Before Takeshi returned to his dorm, I asked him what he wanted to do once he leaves New Start. He looked at me for a few seconds, assessing, I sensed, whether he could trust me. "You might find it silly, but I'd like to do something with TV variety shows," he said. "I'd like to be a scriptwriter." He also wants to enroll in a university. "But there are idealistic dreams," he said, "and then there's reality." Neither plan seemed particularly far-fetched, I told him. "You think so?" he said. "I don't know. It might be too late for me." He is 23 years old.

Maggie Jones is a Japan Society media fellow. She last wrote for the magazine about the grown "victims" of a child sex-abuse ring recanting their stories.

Copyright 2006 The New York Times Company.

Friday, January 13, 2006

re-mix, re-model.

Nature 439, 6-7 (5 January 2006) | doi:10.1038/439006a

Mashups mix data into global service

Declan Butler

Is this the future for scientific analysis?

Will 2006 be the year of the mashup? Originally used to describe the mixing together of musical tracks, the term now refers to websites that weave data from different sources into a new service. They are becoming increasingly popular, especially for plotting data on maps, covering anything from cafés offering wireless Internet access to traffic conditions. And advocates say they could fundamentally change many areas of science — if researchers can be persuaded to share their data.

Some disciplines already have software that allows data from different sources to be combined seamlessly. For example, a bioinformatician can get a gene sequence from the GenBank database, its homologues using the BLAST alignment service, and the resulting protein structures from the Swiss-Model site in one step. And an astronomer can automatically collate all available data for an object, taken by different telescopes at various wavelengths, into one place, rather than having to check each source individually.

So far, only researchers with advanced programming skills, working in fields organized enough to have data online and tagged appropriately, have been able to do this. But simpler computer languages and tools are helping.

Google's maps database, for example, allows users to integrate data into it using just ten lines of code. UniProt, the world's largest protein database, is developing its existing public interfaces to protein sequence data to encourage outside users to access and reuse its data.

The biodiversity community is one group working to develop such services. To demonstrate the principle, Roderic Page of the University of Glasgow, UK, built what he describes as a "toy" mashup. If you type in a species name, it builds a web page for it showing sequence data from GenBank, literature from Google Scholar and photos from a Yahoo image search. If you could pool data from every museum or lab in the world, "you could do amazing things", says Page.
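The core of such a species mashup is simple: query several independent services with the same species name and weave the results into one view. The sketch below illustrates that idea in Python; the in-memory dictionaries stand in for what would really be network calls to GenBank, Google Scholar and an image search, and all the names and records in it are hypothetical, not taken from any real database.

```python
# Stand-ins for three independent data sources, each with its own records.
# In a real mashup these would be web-service queries, not dictionaries.
SEQUENCE_SOURCE = {"Atta cephalotes": ["AY000001", "AY000002"]}
LITERATURE_SOURCE = {"Atta cephalotes": ["Example paper on leaf-cutter castes"]}
IMAGE_SOURCE = {"Atta cephalotes": ["worker_major.jpg"]}

def build_species_page(name):
    """Weave data from each source into one merged record (the 'mashup'),
    keyed by the species name the user typed in."""
    return {
        "species": name,
        "sequences": SEQUENCE_SOURCE.get(name, []),
        "literature": LITERATURE_SOURCE.get(name, []),
        "images": IMAGE_SOURCE.get(name, []),
    }

page = build_species_page("Atta cephalotes")
print(page)
```

The point of the design is that each source keeps its own format and ownership; the mashup only needs a shared key (here, the species name) to join them, which is why Page's "toy" could be built on top of services that were never designed to work together.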

Donat Agosti of the Natural History Museum in Bern, Switzerland, is working on this. He is one of the driving forces behind AntBase and AntWeb, which bring together data on some 12,000 ant species. He hopes that, as well as searching, people will reuse the data to create phylogenetic trees or models of geographic distribution.

This would provide the means for a real-time, worldwide collaboration of systematicists, says Norman Johnson, an entomologist at Ohio State University in Columbus. "It has the potential to fundamentally change and improve the way that basic systematic research is conducted."

A major limiting factor is the availability of data in formats that computers can manipulate. To develop AntWeb further, Agosti aims to convert 4,000 papers into machine-readable online descriptions. Another problem is the reluctance of many labs and agencies to share data.

But this is changing. A spokesman for the Global Health Atlas from the World Health Organization (WHO), for example, a huge infectious-disease database, says there are plans to make access easier. The Global Biodiversity Information Facility (GBIF) has linked up more than 80 million records in nearly 600 databases in 31 countries. And last month saw the launch of the International Neuroinformatics Coordinating Facility.

But such initiatives are hampered by restrictive data-access agreements. The museums and labs that provide the GBIF with data, for example, often require outside researchers to sign online agreements to download individual data sets, making real-time computing of data from multiple sources almost impossible.

Nature has created its own mashup, which integrates data on avian-flu outbreaks from the WHO and the UN Food and Agriculture Organization into Google Earth (you will need to download Google Earth before opening the mashup file). The result is a useful snapshot, but it illustrates the problem. As the data are not in public databases that can be directly accessed by software, we had to request them from the relevant agencies, construct a database and convert the data for display in Google Earth. If the data were available in a machine-readable format, the mashup could search the databases automatically and update the maps as outbreaks occur. Other researchers could also mix the data with their own data sets.

Page and Agosti hope that researchers will soon become more enthusiastic about sharing. "Once scientists see the value of freeing up data, mashups will explode," says Page.


Nature mashup: mapping avian flu around the globe

To demonstrate the potential of "mashups", which weave together data from different sources, Nature has created this simple example – a global visualization of avian flu cases and outbreaks. To our knowledge, this is the only source where all of this information has been brought together.

We used Google Earth to map over time each of the 1800 or so outbreaks of avian flu in birds that have been reported over the past two years. The service also shows all confirmed human cases of infection with the H5N1 influenza virus in the same period.

The animal data was compiled from information held by the UN Food and Agriculture Organization (FAO), the World Organization for Animal Health (OIE) and various government sources, and was generously provided to Nature by the FAO Emergency Prevention System (EMPRES) for Transboundary Animal and Plant Pests and Diseases. Nature compiled the data on human cases from World Health Organization bulletins.

Mapping the FAO data posed several challenges. The biggest was that the original datasets contained no latitude and longitude data for the outbreaks, so it was impossible to map them directly. The FAO uses a UN system for defining geographical units such as place names, provinces and districts, but it can be shared only within UN agencies, and so was not available to us. Latitude and longitude data therefore had to be calculated for every outbreak location.

The data was structured into two databases, one for animal data and one for health data, and then converted to 'kml', an XML-based computer language that Google Earth uses as a standard for data exchange.
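The two steps just described — assigning coordinates to a named location, then emitting KML for Google Earth — can be sketched with the Python standard library alone. The gazetteer entries, place names and coordinates below are illustrative, and the KML is stripped to a bare placemark (a real file would carry a KML namespace and richer metadata).

```python
import xml.etree.ElementTree as ET

# Hand-built lookup table standing in for the geocoding step; the real
# project had to calculate coordinates for every outbreak location.
GAZETTEER = {
    ("Viet Nam", "Ha Noi"): (21.03, 105.85),
}

def outbreak_to_kml(country, place, date):
    """Geocode one outbreak record and emit a minimal KML placemark."""
    lat, lon = GAZETTEER[(country, place)]
    kml = ET.Element("kml")
    pm = ET.SubElement(kml, "Placemark")
    ET.SubElement(pm, "name").text = "%s, %s" % (place, country)
    ts = ET.SubElement(pm, "TimeStamp")
    ET.SubElement(ts, "when").text = date  # lets Google Earth animate over time
    point = ET.SubElement(pm, "Point")
    # KML lists coordinates as longitude,latitude
    ET.SubElement(point, "coordinates").text = "%f,%f" % (lon, lat)
    return ET.tostring(kml, encoding="unicode")

doc = outbreak_to_kml("Viet Nam", "Ha Noi", "2005-12-01")
```

The TimeStamp element is what allows Google Earth to replay outbreaks chronologically, as the Nature map does.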

The Google Earth programme is then able to plot the cases and outbreaks by location and time, with links to relevant Web resources from FAO, WHO, and other organizations.

The map is a 'beta', and although the data has been manually checked, errors in the positions of some locations cannot be excluded. The underlying animal data itself also suffers from underreporting of outbreaks, and omissions or inaccuracies in reporting. The FAO also notes with respect to its own data, "facts and figures are to the best of our knowledge accurate and up to date" and that "FAO assumes no responsibility for any error or omission in the datasets".

For further more-detailed information on methods, see

Declan Butler

The greatest danger lies in not asking.

The American Century

Economic and media pundits would have you believe that American economic leadership is increasingly at risk. No less a figure than Warren Buffett suggested as much in his 2005 annual report to investors. All cite bottom-up data to support their views: the current account deficit, the trade deficit, the credit bubble, the housing bubble, inflation, outsourcing, the weak dollar, and the rise of China and India. These bottom-up data have some utility, but many other metrics of value are not, and cannot be, accounted for in the grand analysis. Lack of visibility into the relationships among these variables further impairs the utility of the analysis.

The top-down view is far from dire. Indeed, I could equally make the case that the 21st century will be the American Century. The US owns key fundamental advantages: (1) ideally positioned, secure land; (2) a social system that is a scale-free network tilted toward merit over legacy more than any other country's in history. Both assets enable freedom: the former by reducing fear and providing resources; the latter by reducing fear and creating opportunity. It turns out that freedom is not just a political asset but a socio-economic, biologic asset that is almost unique to this time and place in history.

Freedom enables the following: (1) intellectual freedom enables plasticity, communication, and independent thinking which promotes new ideas and intellectual capital; (2) social freedom enables mobility and diversity, which promotes intellectual capital and beauty, particularly female beauty; (3) freedom from fear creates courage, ambition, opportunity, optimism, vision, spirit, leadership, and morality. All of these combine to unlock and create value. Value becomes the basis of preferential attachment, predominantly in the form of envy.

No doubt prosperity is rising in many places ex-US. Zero-sum gamers would have you believe that this is a bad thing for the US. Hardly. The first thing people do as they get more prosperous is care about health and security. Having other superpowers share the burden of international security would reduce a major drag on US prosperity.

Furthermore, the first place many prosperous foreigners invest their wealth is in the US. That is the advantage of being the dominant economic brand. The sophisticated elite of foreign countries invest in US land, US Treasuries, and the US stock market. They often move here and become citizens, and their descendants marry into the melting pot. Not only are they investing their wealth here, they are contributing their superior genes. Allowing the best and brightest foreign nationals to fill our graduate schools is not a harm to the US future, as protectionists would have us believe. Most of these foreigners don't go home; they settle here, and their kids end up marrying into the melting pot (I am an example). Indeed, our immigration and education policies are great filters that cherry-pick talent from the rest of the world and concentrate it in our country. The Mexican border is a physical filter that serves the same function and benefits the US in the long run.

The foreigners’ envy of the US remains staggering, even for “prosperous” nations. They envy our freedom, our opportunity, our land, our security, our ease & fearlessness, and our women. Getting a US visa stamp on their passport is a badge of honor and a signature of bachelor eligibility that even money can’t buy in many countries.

Due to globalization, the US is becoming to the rest of the world what NY has become to the US. It's the hub. The country largely works for NY, then consumes products created by NY. NY creates and captures a lot of that value. The same can be said for LA, SF and other locales. On the global scale, the rest of the world increasingly manufactures our goods, then buys them back at a huge premium. We create and capture most of the value in between. Outsourcing is further increasing, not decreasing, this asymmetry---as such, outsourcing is a form of hegemony. We can do this because we have first-mover advantage, leadership, and the dominant economic brand. We also own the media channels that distribute that brand power to the rest of the world.

This is not necessarily a bad thing. We are technically becoming a "debtor" nation in the same way that NY is a "debtor" to the rest of the US --- both are attractors of investment by others. Since the US creates more value per asset (ROE, ROA, ROIC) than any other country, this is an efficient model for global prosperity. My expectation is that we will lead the world to greater prosperity, but the benefits will preferentially accrue to the US.

One final comment: so many in the US are worried that our education system is falling behind the rest of the world. This claim is based on qualifying exams. Like most measurements, the analysis has an inherent flaw. Asian kids can outscore ours throughout their childhood, but American kids create far more value as adults. How is that possible? The American education system provides a more balanced mix of moral, emotional, and analytic education. This is ultimately what unlocks value, not whether you learn calculus by 7th grade. Unfortunately, America is increasingly falling prey to competitive, test-taking-based education, but our education remains far more liberal than any other and is a source of fundamental advantage for our kids. The key asset once again is freedom. In the US, kids are allowed to think and speak freely. This translates into greater plasticity and greater intellectual capital. It's that freedom-based education (compared with the inflexible, oppressive systems in Asia) that has allowed the US to remain the best innovator even though Asia has been scoring higher on tests for decades. Tell me: who invented the internet and the tools that empowered this blog?

Thursday, January 12, 2006

If medicine is the goose, could management be the gander?

Excerpted from the Harvard Business Review.


Executives routinely dose their organizations with strategic snake oil: discredited nostrums, partial remedies, or untested management miracle cures. In many cases, the facts about what works are out there--so why don't managers use them?

A bold new way of thinking has taken the medical establishment by storm in the past decade: the idea that decisions in medical care should be based on the latest and best knowledge of what actually works. Dr. David Sackett, the individual most associated with evidence-based medicine, defines it as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." Sackett, his colleagues at McMaster University in Ontario, Canada, and the growing number of physicians joining the movement are committed to identifying, disseminating, and, most importantly, applying research that is soundly conducted and clinically relevant.

If all this sounds laughable to you--after all, what else besides evidence would guide medical decisions?--then you are woefully naive about how doctors have traditionally plied their trade. Yes, the research is out there--thousands of studies are conducted on medical practices and products every year. Unfortunately, physicians don't use much of it. Recent studies show that only about 15% of their decisions are evidence based. For the most part, here's what doctors rely on instead: obsolete knowledge gained in school, long-standing but never proven traditions, patterns gleaned from experience, the methods they believe in and are most skilled in applying, and information from hordes of vendors with products and services to sell.

The same behavior holds true for managers looking to cure their organizational ills. Indeed, we would argue, managers are actually much more ignorant than doctors about which prescriptions are reliable--and they're less eager to find out. If doctors practiced medicine like many companies practice management, there would be more unnecessarily sick or dead patients and many more doctors in jail or suffering other penalties for malpractice.

It's time to start an evidence-based movement in the ranks of managers. Admittedly, in some ways, the challenge is greater here than in medicine. (See the sidebar "What Makes It Hard to Be Evidence Based?") The evidence is weaker; almost anyone can (and often does) claim to be a management expert; and a bewildering array of sources--Shakespeare, Billy Graham, Jack Welch, Tony Soprano, fighter pilots, Santa Claus, Attila the Hun--are used to generate management advice. Managers seeking the best evidence also face a more vexing problem than physicians do: Because companies vary so wildly in size, form, and age, compared with human beings, it is far more risky in business to presume that a proven "cure" developed in one place will be effective elsewhere.

Still, it makes sense that when managers act on better logic and evidence, their companies will trump the competition. That is why we've spent our entire research careers, especially the last five years, working to develop and surface the best evidence on how companies ought to be managed and teaching managers the right mind-set and methods for practicing evidence-based management. As with medicine, management is and will likely always be a craft that can be learned only through practice and experience. Yet we believe that managers (like doctors) can practice their craft more effectively if they are routinely guided by the best logic and evidence--and if they relentlessly seek new knowledge and insight, from both inside and outside their companies, to keep updating their assumptions, knowledge, and skills. We aren't there yet, but we are getting closer. The managers and companies that come closest already enjoy a pronounced competitive advantage.

What Passes for Wisdom

If a doctor or a manager makes a decision that is not based on the current best evidence of what may work, then what is to blame? It may be tempting to think the worst. Stupidity. Laziness. Downright deceit. But the real answer is more benign. Seasoned practitioners sometimes neglect to seek out new evidence because they trust their own clinical experience more than they trust research. Most of them would admit problems with the small sample size that characterizes personal observation, but nonetheless, information acquired firsthand often feels richer and closer to real knowledge than do words and data in a journal article. Lots of managers, likewise, get their companies into trouble by importing, without sufficient thought, performance management and measurement practices from their past experience. We saw this at a small software company, where the chair of the compensation committee, a successful and smart executive, recommended the compensation policies he had employed at his last firm. The fact that the two companies were dramatically different in size, sold different kinds of software, used different distribution methods, and targeted different markets and customers didn't seem to faze him or many of his fellow committee members.

Another alternative to using evidence is making decisions that capitalize on the practitioner's own strengths. This is particularly a problem with specialists, who default to the treatments with which they have the most experience and skill. Surgeons are notorious for it. (One doctor and author, Melvin Konner, cites a common joke amongst his peers: "If you want to have an operation, ask a surgeon if you need one.") Similarly, if your business needs to drum up leads, your event planner is likely to recommend an event, and your direct marketers will probably suggest a mailing. The old saying "To a hammer, everything looks like a nail" often explains what gets done.

Hype and marketing, of course, also play a role in what information reaches the busy practitioner. Doctors face an endless supply of vendors, who muddy the waters by exaggerating the benefits and downplaying the risks of using their drugs and other products. Meanwhile, some truly efficacious solutions have no particularly interested advocates behind them. For years, general physicians have referred patients with plantar warts on their feet to specialists for expensive and painful surgical procedures. Only recently has word got out that duct tape does the trick just as well.

Numerous other decisions are driven by dogma and belief. When people are overly influenced by ideology, they often fail to question whether a practice will work--it fits so well with what they "know" about what makes people and organizations tick. In business, the use and defense of stock options as a compensation strategy seems to be just such a case of cherished belief trumping evidence, to the detriment of organizations. Many executives maintain that options produce an ownership culture that encourages 80-hour workweeks, frugality with the company's money, and a host of personal sacrifices in the interest of value creation. T.J. Rodgers, chief executive of Cypress Semiconductor, typifies this mind-set. He told the San Francisco Chronicle that without options, "I would no longer have employee shareholders, I would just have employees." There is, in fact, little evidence that equity incentives of any kind, including stock options, enhance organizational performance. A recent review of more than 220 studies compiled by Indiana University's Dan R. Dalton and colleagues concluded that equity ownership had no consistent effects on financial performance.

Ideology is also to blame for the persistence of the first-mover-advantage myth. Research by Wharton's Lisa Bolton demonstrates that most people--whether experienced in business or naive about it--believe that the first company to enter an industry or market will have a big advantage over competitors. Yet empirical evidence is actually quite mixed as to whether such an advantage exists, and many "success stories" purported to support the first-mover advantage turn out to be false. (The best-known online bookseller, for instance, was not the first company to start selling books online.) In Western culture, people believe that the early bird gets the worm, yet this is a half-truth. As futurist Paul Saffo puts it, the whole truth is that the second (or third or fourth) mouse often gets the cheese. Unfortunately, beliefs in the power of being first and fastest in everything we do are so ingrained that giving people contradictory evidence does not cause them to abandon their faith in the first-mover advantage. Beliefs rooted in ideology or in cultural values are quite "sticky," resist disconfirmation, and persist in affecting judgments and choice, regardless of whether they are true.

Finally, there is the problem of uncritical emulation and its business equivalent: casual benchmarking. Both doctors and managers look to perceived high performers in their field and try to mimic those top dogs' moves. We aren't damning benchmarking in general--it can be a powerful and cost-efficient tool. (See the sidebar "Can Benchmarking Produce Evidence?") Yet it is important to remember that if you only copy what other people or companies do, the best you can be is a perfect imitation. So the most you can hope to have are practices as good as, but no better than, those of top performers--and by the time you mimic them, they've moved on. This isn't necessarily a bad thing, as you can save time and money by learning from the experience of others inside and outside your industry. And if you consistently implement best practices better than your rivals, you will beat the competition.

Benchmarking is most hazardous to organizational health, however, when used in its "casual" form, in which the logic behind what works for top performers, why it works, and what will work elsewhere is barely unraveled. Consider a quick example. When United Airlines decided in 1994 to try to compete with Southwest in the California market, it tried to imitate Southwest. United created a new service, Shuttle by United, with separate crews and planes (all of them Boeing 737s). The gate staff and flight attendants wore casual clothes. Passengers weren't served food. Seeking to emulate Southwest's legendary quick turnarounds and enhanced productivity, Shuttle by United increased the frequency of its flights and reduced the scheduled time planes would be on the ground. None of this, however, reproduced the essence of Southwest's advantage--the company's culture and management philosophy, and the priority placed on employees. Southwest wound up with an even higher market share in California after United had launched its new service. The Shuttle is now shuttered.

We've just suggested no fewer than six substitutes that managers, like doctors, often use for the best evidence--obsolete knowledge, personal experience, specialist skills, hype, dogma, and mindless mimicry of top performers--so perhaps it's apparent why evidence-based decision making is so rare. At the same time, it should be clear that relying on any of these six is not the best way to think about or decide among alternative practices. We'll soon describe how evidence-based management takes shape in the companies we've seen practice it. First, though, it is useful to get an example on the table of the type of issue that companies can address with better evidence.

An Example: Should We Adopt Forced Ranking?

The decision-making process used at Oxford's Centre for Evidence-Based Medicine starts with a crucial first step--the situation confronting the practitioner must be framed as an answerable question. That makes it clear how to compile relevant evidence. And so we do that here, raising a question that many companies have faced in recent years: Should we adopt forced ranking of our employees? The question refers to what General Electric more formally calls a forced-curve performance-ranking system. It's a talent management approach in which the performance levels of individuals are plotted along a bell curve. Depending on their position on the curve, employees fall into groups, with perhaps the top 20%, the so-called A players, being given outsize rewards; the middle 70% or so, the B players, being targeted for development; and the lowly bottom 10%, the C players, being counseled or thrown out of their jobs.
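The mechanics of the forced curve just described can be made concrete in a few lines. The sketch below (scores, names and exact cutoffs are illustrative, not GE's actual system) ranks employees by a performance score and splits them roughly 20/70/10 into A, B and C groups.

```python
def forced_rank(scores, top=0.20, bottom=0.10):
    """Split employees into A/B/C groups by rank on a performance score.

    scores: dict mapping employee -> numeric score (higher is better).
    Returns {'A': [...], 'B': [...], 'C': [...]} in rank order.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    n_top = int(n * top)        # the A players, given outsize rewards
    n_bottom = int(n * bottom)  # the C players, counseled or cut
    return {
        "A": ranked[:n_top],
        "B": ranked[n_top:n - n_bottom],
        "C": ranked[n - n_bottom:],
    }

# Ten hypothetical employees: p9 scores highest, p0 lowest.
scores = {"p%d" % i: i for i in range(10)}
groups = forced_rank(scores)
```

Note what the code makes obvious: the curve is imposed by position, not by any absolute standard, so someone lands in the bottom bucket every cycle regardless of how well the whole group performs -- one mechanical reason the "rank and yank" churn criticized below can be so corrosive.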

Without a doubt, this question arose for many companies as they engaged in benchmarking. General Electric has enjoyed great financial success and seems well stocked with star employees. GE alums have gone on to serve as CEOs at many other companies, including 3M, Boeing, Intuit, Honeywell, and the Home Depot. Systems that give the bulk of rewards to star employees have also been thoroughly hyped in business publications--for instance, in the McKinsey-authored book The War for Talent. But it's far from clear that the practice is worth emulating. It isn't just the infamous Enron--much praised in The War for Talent--that makes us say this. A couple of years ago, one of us gave a speech at a renowned but declining high-technology firm that used forced ranking (there, it was called a "stacking system"). A senior executive told us about an anonymous poll conducted among the firm's top 100 or so executives to discover which company practices made it difficult to turn knowledge into action. The stacking system was voted the worst culprit.

Would evidence-based management have kept that company from adopting this deeply unpopular program? We think so. First, managers would have immediately questioned whether their company was similar enough to GE in various respects that a practice cribbed from it could be expected to play out in the same way. Then, they would have been compelled to take a harder look at the data presumably supporting forced ranking--the claim that this style of talent management actually has caused adherents to be more successful. So, for example, they might have noticed a key flaw in The War for Talent's research method: The authors report in the appendix that companies were first rated as high or average performers, based on return to shareholders during the prior three to ten years; then interviews and surveys were conducted to measure how these firms were fighting the talent wars. So, for the 77 companies (of 141 studied), management practices assessed in 1997 were treated as the "cause" of firm performance between 1987 and 1997. The study therefore violates a fundamental condition of causality: The proposed cause needs to occur before the proposed effect.

Next, management would have assembled more evidence and weighed the negative against the positive. In doing so, it would have found plenty of evidence that performance improves with team continuity and time in position--two reasons to avoid the churn of what's been called the "rank and yank" approach. Think of the U.S. Women's National Soccer Team, which has won numerous championships, including two of the four Women's World Cups and two of the three Olympic women's tournaments held to date. The team certainly has had enormously talented players, such as Mia Hamm, Brandi Chastain, Julie Foudy, Kristine Lilly, and Joy Fawcett. Yet all these players will tell you that the most important factor in their success was the communication, mutual understanding and respect, and ability to work together that developed during the 13 or so years that the stable core group played together. The power of such joint experience has been established in every setting examined, from string quartets to surgical teams, to top management teams, to airplane cockpit crews.

If managers at the technology firm had reviewed the best evidence, they would have also found that in work that requires cooperation (as nearly all the work in their company did), performance suffers when there is a big spread between the worst- and best-paid people--even though giving the lion's share of rewards to top performers is a hallmark of forced-ranking systems. In a Haas School of Business study of 102 business units, Douglas Cowherd and David Levine found that the greater the gap between top management's pay and that of other employees, the lower the product quality. Similar negative effects of dispersed pay have been found in longitudinal studies of top management teams, universities, and a sample of nearly 500 public companies. And in a recent Novations Group survey of more than 200 human resource professionals from companies with more than 2,500 employees, even though over half of the companies used forced ranking, the respondents reported that this approach resulted in lower productivity, inequity, skepticism, decreased employee engagement, reduced collaboration, damage to morale, and mistrust in leadership. We can find plenty of consultants and gurus who praise the power of dispersed pay, but we can't find a careful study that supports its value in settings where cooperation, coordination, and information sharing are crucial to performance.

Negative effects of highly dispersed pay are even seen in professional sports. Studies of baseball teams are especially interesting because, of all major professional sports, baseball calls for the least coordination among team members. But baseball still requires some cooperation--for example, between pitchers and catchers, and among infielders. And although individuals hit the ball, teammates can help one another improve their skills and break out of slumps. Notre Dame's Matt Bloom did a careful study of over 1,500 professional baseball players from 29 teams, spanning an eight-year period, which showed that players on teams with greater dispersion in pay had lower winning percentages, gate receipts, and media income.

Finally, an evidence-based approach would have surfaced data suggesting that average players can be extremely productive and that A players can founder, depending on the system they work in. Over 15 years of research in the auto industry provides compelling evidence for the power of systems over individual talent. Wharton's John Paul MacDuffie has combined quantitative studies of every automobile plant in the world with in-depth case studies to understand why some plants are more effective than others. MacDuffie has found that lean or flexible production systems--with their emphasis on teams, training, and job rotation, and their de-emphasis on status differences among employees--build higher-quality cars at a lower cost.

Becoming a Company of Evidence-Based Managers

It is one thing to believe that organizations would perform better if leaders knew and applied the best evidence. It is another thing to put that belief into practice. We appreciate how hard it is for working managers and executives to do their jobs. The demands for decisions are relentless, information is incomplete, and even the very best executives make many mistakes and undergo constant criticism and second-guessing from people inside and outside their companies. In that respect, managers are like physicians who face one decision after another: They can't possibly make the right choice every time. Hippocrates, the famous Greek who wrote the physicians' oath, described this plight well: "Life is short, the art long, opportunity fleeting, experiment treacherous, judgment difficult."

Teaching hospitals that embrace evidence-based medicine try to overcome impediments to using it by providing training, technologies, and work practices so staff can take the critical results of the best studies to the bedside. The equivalent should be done in management settings. But it's also crucial to appreciate that evidence-based management, like evidence-based medicine, entails a distinct mind-set that clashes with the way many managers and companies operate. It features a willingness to put aside belief and conventional wisdom--the dangerous half-truths that many embrace--and replace these with an unrelenting commitment to gather the necessary facts to make more informed and intelligent decisions.

As a leader in your organization, you can begin to nurture an evidence-based approach immediately by doing a few simple things that reflect the proper mind-set. If you ask for evidence of efficacy every time a change is proposed, people will sit up and take notice. If you take the time to parse the logic behind that evidence, people will become more disciplined in their own thinking. If you treat the organization like an unfinished prototype and encourage trial programs, pilot studies, and experimentation--and reward learning from these activities, even when something new fails--your organization will begin to develop its own evidence base. And if you keep learning while acting on the best knowledge you have and expect your people to do the same--if you have what has been called "the attitude of wisdom"--then your company can profit from evidence-based management as you benefit from "enlightened trial and error" and the learning that occurs as a consequence.

Demand evidence. When it comes to setting the tone for evidence-based management, we have met few chief executives on a par with Kent Thiry, the CEO of DaVita, a $2 billion operator of kidney dialysis centers headquartered in El Segundo, California. Thiry joined DaVita in October 1999, when the company was in default on its bank loans, could barely meet payroll, and was close to bankruptcy. A big part of his turnaround effort has been to educate the many facility administrators, a large proportion of them nurses, in the use of data to guide their decisions.

To ensure that the company has the information necessary to assess its operations, the senior management team and DaVita's chief technical officer, Harlan Cleaver, have been relentless in building and installing systems that help leaders at all levels understand how well they are doing. One of Thiry's mottoes is "No brag, just facts." When he stands up at DaVita Academy, a meeting of about 400 frontline employees from throughout the organization, and states that the company has the best quality of treatment in the industry, that assertion is demonstrated with specific, quantitative comparisons.

A large part of the company's culture is a commitment to the quality of patient care. To reinforce this value, managers always begin reports and meetings with data on the effectiveness of the dialysis treatments and on patient health and well-being. And each facility administrator gets an eight-page report every month that shows a number of measures of the quality of care, which are summarized in a DaVita Quality Index. This emphasis on evidence also extends to management issues--administrators get information on operations, including treatments per day, teammate (employee) retention, the retention of higher-paying private pay patients, and a number of resource utilization measures such as labor hours per treatment and controllable expenses.

The most interesting thing about these monthly reports is what isn't yet included. DaVita COO Joe Mello explained that if a particular metric is deemed important, but the company currently lacks the ability to collect the relevant measurements, that metric is included on reports anyway, with the notation "not available." He said that the persistent mention of important measures that are missing helps motivate the company to figure out ways of gathering that information.
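Mello's practice of printing missing metrics rather than silently dropping them is easy to mechanize. Below is a minimal sketch of such a report; the metric names echo the passage, but the code is purely illustrative, not DaVita's actual system:

```python
# Illustrative monthly report: every metric management has deemed important
# is printed, and metrics the company cannot yet measure are explicitly
# marked "not available" instead of being omitted.

REQUIRED_METRICS = [
    "treatments_per_day",
    "teammate_retention",
    "labor_hours_per_treatment",
    "private_pay_retention",
]

def monthly_report(collected: dict) -> str:
    """Render all required metrics, flagging any that are missing."""
    lines = []
    for metric in REQUIRED_METRICS:
        value = collected.get(metric)
        lines.append(f"{metric}: {value if value is not None else 'not available'}")
    return "\n".join(lines)

# Hypothetical facility data with two of the four metrics collected.
print(monthly_report({"treatments_per_day": 41.2, "teammate_retention": 0.87}))
```

The point of the design is the persistent gap: a line reading "not available" every month is a standing prompt to build the measurement.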

Many impressive aspects of DaVita's operations have contributed to the company's success, as evidenced by the 50% decrease in voluntary turnover, best-in-industry quality of patient care, and exceptional financial results. But the emphasis on evidence-based decision making in a culture that reinforces speaking the truth about how things are going is certainly another crucial component.

Examine logic. Simply asking for backup research on proposals is insufficient to foster a true organizational commitment to evidence-based management, especially given the problems that bedevil much so-called business research. As managers or consultants make their case, pay close attention to gaps in exposition, logic, and inference. (See the sidebar "Are You Part of the Problem?") This is particularly important because, in management research, studies that use surveys or data from company records to correlate practices with various performance outcomes are far more common than experiments. Such "nonexperimental" research is useful, but care must be taken to examine the logic of the research design and to control statistically for alternative explanations, which arise in even the best studies. Managers who consume such knowledge need to understand the limitations and think critically about the results.

When people in the organization see senior executives spending the time and mental energy to unpack the underlying assumptions that form the foundation for some proposed policy, practice, or intervention, they absorb a new cultural norm. The best leaders avoid the problem of seeming captious about the work of subordinates; they tap the collective wisdom and experience of their teams to explore whether assumptions seem sensible. They ask, "What would have to be true about people and organizations if this idea or practice were going to be effective? Does that feel true to us?"

Consultant claims may require an extra grain of salt. It is surprising how often purveyors of business knowledge are fooled or try to fool customers. We admire Bain & Company, for example, and believe it is quite capable of good research. We do wonder, however, why the company has a table on its Web site's home page that brags, "Our clients outperform the market 4 to 1" (the claim was "3 to 1" a few years back). The smart people at Bain know this correlation doesn't prove that their advice transformed clients into top performers. It could simply be that top performers have more money for hiring consultants. Indeed, any claim that Bain deserves credit for such performance is conspicuously absent from the Web site, at least as of fall 2005. Perhaps the hope is that visitors will momentarily forget what they learned in their statistics classes!

Treat the organization as an unfinished prototype. For some questions in some businesses, the best evidence is to be found at home--in the company's own data and experience rather than in the broader-based research of scholars. Companies that want to promote more evidence-based management should get in the habit of running trial programs, pilot studies, and small experiments, and thinking about the inferences that can be drawn from them, as CEO Gary Loveman has done at Harrah's. Loveman joked to us that there are three ways to get fired at Harrah's these days: steal, harass women, or institute a program without first running an experiment. As you might expect, Harrah's experimentation is richest and most renowned in the area of marketing, where the company makes use of the data stream about customers' behaviors and responses to promotions. In one experiment reported by Harvard's Rajiv Lal in a teaching case, Harrah's offered a control group a promotional package worth $125 (a free room, two steak dinners, and $30 in casino chips); it offered customers in an experimental group just $60 in chips. The $60 offer generated more gambling revenue than the $125 offer did, and at a reduced cost. Loveman wanted to see experimentation like this throughout the business, not just in marketing. And so the company proved that spending money on employee selection and retention efforts (including giving people realistic job previews, enhancing training, and bolstering the quality of frontline supervision) would reduce turnover and produce more engaged and committed employees. Harrah's succeeded in reducing staff turnover by almost 50%.

Similarly, CEO Meg Whitman attributes much of eBay's success to the fact that management spends less time on strategic analysis and more time trying and tweaking things that seem like they might work. As she said in March 2005, "This is a completely new business, so there's only so much analysis you can do." Whitman suggests instead, "It's better to put something out there and see the reaction and fix it on the fly. You could spend six months getting it perfect in the lab…[but] we're better off spending six days putting it out there, getting feedback, and then evolving it."

Yahoo is especially systematic about treating its home page as an unfinished prototype. Usama Fayyad, the company's chief data officer, points out that the home page gets millions of hits an hour, so Yahoo can conduct rigorous experiments that yield results in an hour or less--randomly assigning, say, a couple hundred thousand visitors to the experimental group and several million to the control group. Yahoo typically has 20 or so experiments running at any time, manipulating site features like colors, placement of advertisements, and location of text and buttons. These little experiments can have big effects. For instance, an experiment by data-mining researcher Nitin Sharma revealed that simply moving the search box from the side to the center of the home page would produce enough additional "click throughs" to bring in millions more dollars in advertising revenue a year.
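An experiment of the sort Fayyad describes reduces to comparing click-through rates between a small experimental group and a large control group. The sketch below shows the standard two-proportion z-test such a comparison rests on; the traffic and click counts are invented for illustration (Yahoo's actual data are not public here):

```python
# Hedged sketch of a home-page holdout test: did the experimental layout
# change the click-through rate relative to the control layout?
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 200,000 visitors see the new layout (8.2% CTR),
# 2,000,000 see the original (8.0% CTR).
z = two_proportion_z(clicks_a=16_400, n_a=200_000,
                     clicks_b=160_000, n_b=2_000_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

With samples this large, even a fraction of a percentage point of click-through difference is detectable within an hour, which is why such small design tweaks can be tested so cheaply.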

A big barrier to using experiments to build management knowledge is that companies tend to adopt practices in an all-or-nothing way--either the CEO is behind the practice, so everyone does it or at least claims to, or it isn't tried at all. This tendency to do things everywhere or nowhere severely limits a company's ability to learn. In particular, multisite organizations like restaurants, hotels, and manufacturers with multiple locations can learn by experimenting in selected sites and making comparisons with "control" locations. Field experiments at places such as McDonald's restaurants, 7-Eleven convenience stores, Hewlett-Packard, and Intel have introduced changes in some units and not others to test the effects of different incentives, technologies, more interesting job content, open versus closed offices, and even detailed and warm (versus cursory and cold) explanations about why pay cuts were being implemented.

Embrace the attitude of wisdom. Something else, something broader, is more important than any single guideline for reaping the benefits of evidence-based management: the attitude people have toward business knowledge. At least since Plato's time, people have appreciated that true wisdom does not come from the sheer accumulation of knowledge, but from a healthy respect for and curiosity about the vast realms of knowledge still unconquered. Evidence-based management is conducted best not by know-it-alls but by managers who profoundly appreciate how much they do not know. These managers aren't frozen into inaction by ignorance; rather, they act on the best of their knowledge while questioning what they know.

Cultivating the right balance of humility and decisiveness is a huge, amorphous goal, but one tactic that serves it is to support the continuing professional education of managers with a commitment equal to that in other professions. The Centre for Evidence-Based Medicine says that identifying and applying effective strategies for lifelong learning are the keys to making this happen for physicians. The same things are surely critical to evidence-based management.

Another tactic is to encourage inquiry and observation even when rigorous evidence is lacking and you feel compelled to act quickly. If there is little or no information and you can't conduct a rigorous study, there are still things you can do to act more on the basis of logic and less on guesswork, fear, belief, or hope. We once worked with a large computer company that was having trouble selling its computers at retail stores. Senior executives kept blaming their marketing and sales staff for doing a bad job and dismissed complaints that it was hard to get customers to buy a lousy product--until one weekend, when members of the senior team went out to stores and tried to buy their computers. All of the executives encountered sales clerks who tried to dissuade them from buying the firm's computers, citing the excessive price, weak feature set, clunky appearance, and poor customer service. By organizing such field trips and finding other ways to gather qualitative data, managers can convey that decisions should not ignore real-world observations.

Will It Make a Difference?

The evidence-based-medicine movement has its critics, especially physicians who worry that clinical judgment will be replaced by search engines or who fear that bean counters from HMOs will veto experimental or expensive techniques. But initial studies suggest that physicians trained in evidence-based techniques are better informed than their peers, even 15 years after graduating from medical school. Studies also show conclusively that patients receiving the care that is indicated by evidence-based medicine experience better outcomes.

At this time, that level of assurance isn't available to those who undertake evidence-based management in business settings. We have the experience of relatively few companies to go on, and while it is positive, evidence from broad and representative samples is needed before that experience can be called a consistent pattern. Yet the theoretical argument strikes us as ironclad. It seems perfectly logical that decisions made on the basis of a preponderance of evidence about what works elsewhere, as well as within your own company, will be better decisions and will help the organization thrive. We also have a huge body of peer-reviewed studies--literally thousands of careful studies by well-trained researchers--that, although routinely ignored, provide simple and powerful advice about how to run organizations. If found and used, this advice would have an immediate positive effect on organizations.

Does all this sound too obvious? Perhaps. But one of the most important lessons we've learned over the years is that practicing evidence-based management often entails being a master of the mundane. Consider how the findings from this one little study could help a huge organization: An experiment at the University of Missouri compared decision-making groups that stood up during ten- to 20-minute meetings with groups that sat down. Those that stood up took 34% less time to make decisions, and the quality was just as good. Whether people should sit down or stand up during meetings may seem a downright silly question at first blush. But do the math. Take energy giant Chevron, which has over 50,000 employees. If each employee replaced just one 20-minute sit-down meeting per year with a stand-up meeting, each of those meetings would be nearly seven minutes shorter. That would save Chevron about 340,000 minutes--more than 5,600 hours--per year.
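The back-of-the-envelope arithmetic can be checked in a few lines, using only the figures quoted in the passage (headcount, meeting length, and the 34% time saving):

```python
# Estimate of the stand-up meeting savings described above.
employees = 50_000          # approximate Chevron headcount, per the passage
meeting_minutes = 20        # length of one sit-down meeting
time_saved_fraction = 0.34  # stand-up groups decided 34% faster

minutes_saved_per_meeting = meeting_minutes * time_saved_fraction
total_minutes = employees * minutes_saved_per_meeting
total_hours = total_minutes / 60

print(f"{minutes_saved_per_meeting:.1f} minutes saved per meeting")
print(f"{total_minutes:,.0f} minutes (about {total_hours:,.0f} hours) per year")
```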

Leaders who are committed to practicing evidence-based management also need to brace themselves for a nasty side effect: When it is done right, it will undermine their power and prestige, which may prove unsettling to those who enjoy wielding influence. A former student of ours who worked at Netscape recalled a sentiment he'd once heard from James Barksdale back when he was CEO: "If the decision is going to be made by the facts, then everyone's facts, as long as they are relevant, are equal. If the decision is going to be made on the basis of people's opinions, then mine count for a lot more." This anecdote illustrates that facts and evidence are great levelers of hierarchy. Evidence-based practice changes power dynamics, replacing formal authority, reputation, and intuition with data. This means that senior leaders--often venerated for their wisdom and decisiveness--may lose some stature as their intuitions are replaced, at least at times, by judgments based on data available to virtually any educated person. The implication is that leaders need to make a fundamental decision: Do they want to be told they are always right, or do they want to lead organizations that actually perform well?

If taken seriously, evidence-based management can change how every manager thinks and acts. It is, first and foremost, a way of seeing the world and thinking about the craft of management; it proceeds from the premise that using better, deeper logic and employing facts, to the extent possible, permits leaders to do their jobs more effectively. We believe that facing the hard facts and truth about what works and what doesn't, understanding the dangerous half-truths that constitute so much conventional wisdom about management, and rejecting the total nonsense that too often passes for sound advice will help organizations perform better.


By Jeffrey Pfeffer and Robert I. Sutton

Jeffrey Pfeffer is the Thomas D. Dee II Professor of Organizational Behavior at Stanford Graduate School of Business in California.

Robert I. Sutton is a professor of management science and engineering at Stanford School of Engineering, where he is also a codirector of the Center for Work, Technology, and Organization.

Pfeffer and Sutton are the authors of The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action (Harvard Business School Press, 1999) and Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management (Harvard Business School Press, forthcoming in March 2006).


Across the board, U.S. automobile companies have for decades benchmarked Toyota, the world leader in auto manufacturing. In particular, many have tried to copy its factory-floor practices. They've installed just-in-time inventory systems, statistical process control charts, and pull cords to stop the assembly line if defects are noticed. Yet, although they (most notably, General Motors) have made progress, for the most part the companies still lag behind Toyota in productivity--the hours required to assemble a car--and often in quality and design as well.

Studies of the automobile industry, especially those by Wharton professor John Paul MacDuffie, suggest that the U.S. companies fell prey to the same pair of fundamental problems we have seen in so many casual-benchmarking initiatives. First, people mimic the most visible, the most obvious, and, frequently, the least important practices. The secret to Toyota's success is not a set of techniques per se, but the philosophy of total quality management and continuous improvement the company has embraced, as well as managers' accessibility to employees on the plant floor, which enables Toyota to tap these workers' tacit knowledge. Second, companies have different strategies, cultures, workforces, and competitive environments--so that what one of them needs to do to be successful is different from what others need to do. The Toyota system presumes that people will be team players and subordinate their egos for the good of the group, a collectivistic mind-set that tends to fit Asian managers and workers better than it does U.S. and European managers and workers.

Before you run off to benchmark, possibly spending effort and money that will result in no payoff or, worse yet, problems that you never had before, ask yourself the following questions:

• Do sound logic and evidence indicate that the benchmarking target's success is attributable to the practice we seek to emulate? Southwest Airlines is the most successful airline in the history of the industry. Herb Kelleher, its CEO from 1982 to 2001, drinks a lot of Wild Turkey bourbon. Does this mean that your company will dominate its industry if your CEO drinks a lot of Wild Turkey?

• Are the conditions at our company--strategy, business model, workforce--similar enough to those at the benchmarked company to make the learning useful? Just as doctors who do neurosurgery learn mostly from other neurosurgeons, not from orthopedists, you and your company should seek to learn from relevant others.

• Why does a given practice enhance performance? And what is the logic that links it to bottom-line results? If you can't explain the underlying theory, you are likely engaging in superstitious learning, and you may be copying something irrelevant or even damaging--or only copying part (perhaps the worst part) of the practice. As senior GE executives once pointed out to us, many companies that imitate their "rank and yank" system take only the A, B, and C rankings and miss the crucial subtlety that an A player is someone who helps colleagues do their jobs more effectively, rather than engaging in dysfunctional internal competition.

• What are the downsides of implementing the practice even if it is a good idea overall? Keep in mind that there is usually at least one disadvantage. For example, research by Mary Benner at Wharton and Michael Tushman at Harvard Business School shows that firms in the paint and photography industries that implemented more extensive process management programs did increase short-term efficiency but had more trouble keeping up with rapid technological changes. You need to ask if there are ways of mitigating the downsides, maybe even solutions that your benchmarking target uses that you aren't seeing. Say you are doing a merger. Look closely at what Cisco does and why, as it consistently profits from mergers while most other firms consistently fail.


Perhaps the greatest barrier to evidence-based management is that today's prevailing standards for assessing management knowledge are deeply flawed. Unfortunately, they are bolstered by the actions of virtually every major player in the marketplace for business knowledge. The business press in particular, purveyor of so many practices, needs to make better judgments about the virtues and shortcomings of the evidence it generates and publishes. We propose six standards for producing, evaluating, selling, and applying business knowledge.

1. Stop treating old ideas as if they were brand-new.

Sir Isaac Newton is often credited with saying, "If I have seen farther, it is by standing on the shoulders of giants." But peddlers of management ideas find they win more speaking engagements and lucrative book contracts if they ignore antecedents and represent insights as being wholly original. Most business magazines happily recycle and rename concepts to keep the money flowing. This continues to happen even though, as renowned management theorist James March pointed out to us in an e-mail message, "most claims of originality are testimony to ignorance and most claims of magic are testimony to hubris." How do we break the cycle? For starters, people who spread ideas ought to acknowledge key sources and encourage writers and managers to build on and blend with what's come before. Doing so isn't just intellectually honest and polite. It leads to better ideas.

2. Be suspicious of "breakthrough" ideas and studies.

Related to the desire for "new" is the desire for "big"-- the big idea, the big study, the big innovation. Unfortunately, "big" rarely happens. Close examination of so-called breakthroughs nearly always reveals that they're preceded by the painstaking, incremental work of others. We live in a world where scientists and economists who win the Nobel Prize credit their predecessors' work; they carefully point out the tiny, excruciating steps they took over the years to develop their ideas and hesitate to declare breakthroughs, while--like old-fashioned snake oil salesmen--one business guru after another claims to have developed a brand-new cure-all. Something is wrong with this picture. Still, managers yearn for magic remedies, and purveyors pretend to give them what they crave.

3. Celebrate and develop collective brilliance.

The business world is among the few places where the term "guru" has primarily positive connotations. But a focus on gurus masks how business knowledge is and ought to be developed and used. Knowledge is rarely generated by lone geniuses who cook up brilliant new ideas in their gigantic brains. Writers and consultants need to be more careful about describing the teams and communities of researchers who develop ideas. Even more important, they need to recognize that implementing practices, executing strategy, and accomplishing organizational change all require the coordinated actions of many people, whose commitment to an idea is greatest when they feel ownership.

4. Emphasize drawbacks as well as virtues.

Doctors are getting better at explaining risks to patients and, in the best circumstances, enabling them to join a decision process where potential problems are considered. This rarely happens in management, where too many solutions are presented as costless and universally applicable, with little acknowledgment of possible pitfalls. Yet all management practices and programs have both strong and weak points, and even the best have costs. This doesn't mean companies shouldn't implement things like Six Sigma or Balanced Scorecards, just that they should recognize the hazards. That way, managers won't become disenchanted or, worse, abandon a valuable program or practice when known setbacks occur.

5. Use success (and failure) stories to illustrate sound practices, but not in place of a valid research method.

There is an enormous problem with research that relies on recollection by the parties involved in a project, as so much management research does when it seeks out keys to subsequent success. A century ago, Ambrose Bierce, in his Devil's Dictionary, defined "recollect" as "To recall with additions something not previously known," foreshadowing much research on human memory. It turns out that, for example, eyewitness accounts are notoriously unreliable and that, in general, people have terrible memories, regardless of how confident they are in their recollections. Most relevant to management research is that people tend to remember very different things when they are anointed winners (versus losers), and what they recall has little to do with what actually happened.

6. Adopt a neutral stance toward ideologies and theories.

Ideology is among the more widespread, potent, and vexing impediments to using evidence-based management. Academics and other thought leaders can come to believe in their own theories so fervently that they're incapable of learning from new evidence. And managers can lower or raise the threshold of their skepticism when a proposed solution, on its face, seems "vaguely socialistic" or "compassionate," "militaristic" or "disciplined." The best way to keep such filters from obscuring good solutions is to establish clarity and consensus on the problem to be solved and on what constitutes evidence of efficacy.


You may well be trying to bring the best evidence to bear on your decisions. You follow the business press, buy business books, hire consultants, and attend seminars featuring business experts. But evidence-based management is still hard to apply. Here's what you're up against.

There's too much evidence. With hundreds of English-language magazines and journals devoted to business and management issues, dozens of business newspapers, roughly 30,000 business books in print and thousands more being published each year, and the Web-based outlets for business knowledge continuing to expand (ranging from online versions of Fortune and the Wall Street Journal to a growing array of specialized sites), it is fair to say that there is simply too much information for any manager to consume. Moreover, recommendations about management practice are seldom integrated in a way that makes them accessible or memorable. Consider, for instance, Business: The Ultimate Resource, a tome that weighs about eight pounds and runs 2,208 oversize pages. Business claims that it "will become the 'operating system' for any organization or anyone in business." But a good operating system fits together in a seamless and logical manner--not the case here or with any such encyclopedic effort to date.

There's not enough good evidence. Despite the existence of "data, data everywhere," managers still find themselves parched for reliable guidance. In 1993, senior Bain consultant Darrell Rigby began conducting the only survey we have encountered on the use and persistence of various management tools and techniques. (Findings from the most recent version of Bain's Management Tools survey were published in Strategy and Leadership in 2005.) Rigby told us it struck him as odd that you could get good information on products such as toothpaste and cereal but almost no information about interventions that companies were spending millions of dollars to implement. Even the Bain survey, noteworthy as it is, measures only the degree to which the different programs are used and does not go beyond subjective assessments of their value.

The evidence doesn't quite apply. Often, managers are confronted with half-truths--advice that is true some of the time, under certain conditions. Take, for example, the controversy around stock options. The evidence suggests that, in general, heavier reliance on stock options does not increase a firm's performance, but it does increase the chances that a company will need to restate its earnings. However, in small, privately held start-ups, options do appear to be relevant to success and less likely to produce false hype. One hallmark of solid research is conservatism--the carefulness of the researcher to point out the specific context in which intervention A led to outcome B. Unfortunately, that leaves managers wondering if the research could possibly be relevant to them.

People are trying to mislead you. Because it's so hard to distinguish good advice from bad, managers are constantly enticed to believe in and implement flawed business practices. A big part of the problem is consultants, who are always rewarded for getting work, only sometimes rewarded for doing good work, and hardly ever rewarded for evaluating whether they have actually improved things. Worst of all, if a client's problems are only partly solved, that leads to more work for the consulting firm! (If you think our charge is too harsh, ask the people at your favorite consulting firm what evidence they have that their advice or techniques actually work--and pay attention to the evidence they offer.)

You are trying to mislead you. Simon and Garfunkel were right when they sang, "A man hears what he wants to hear and disregards the rest." Many practitioners and their advisers routinely ignore evidence about management practices that clashes with their beliefs and ideologies, and their own observations are contaminated by what they expect to see. This is especially dangerous because some theories can become self-fulfilling--that is, we sometimes perpetuate our pet theories with our own actions. If we expect people to be untrustworthy, for example, we will closely monitor their behavior, which makes it impossible to develop trust. (Meanwhile, experimental evidence shows that when people are placed in situations where authority figures expect them to cheat, more of them do, in fact, cheat.)

The side effects outweigh the cure. Sometimes, evidence points clearly to a cure, but the effects of the cure are too narrowly considered. One of our favorite examples comes from outside management, in the controversy over social promotion in public schools--that is, advancing a child to the next grade even if his or her work isn't up to par. Former U.S. president Bill Clinton represented the views of many when, in his 1999 State of the Union address, he said, "We do our children no favors when we allow them to pass from grade to grade without mastering the material." President George W. Bush holds the same view. But this belief is contrary to the results from over 55 published studies that demonstrate the net negative effects of ending social promotion (versus no careful studies that find positive effects). Many school systems that have tried to end the practice have quickly discovered the fly in the ointment: Holding students back leaves schools crowded with older students, and costs skyrocket as more teachers and other resources are needed because the average student spends more years in school. The flunked kids also consistently come out worse in the end, with lower test scores and higher drop-out rates. There are also reports that bullying increases: Those flunked kids, bigger than their classmates, are mad about being held back, and the teachers have trouble maintaining control in the larger classes.

Stories are more persuasive, anyway. It's hard to remain devoted to the task of building bulletproof, evidence-based cases for action when it's clear that good storytelling often carries the day. And indeed, we reject the notion that only quantitative data should qualify as evidence. As Einstein put it, "Not everything that can be counted counts, and not everything that counts can be counted." When used correctly, stories and cases are powerful tools for building management knowledge. Many quantitative studies are published on developing new products, but few come close to Tracy Kidder's Pulitzer-winning Soul of a New Machine in capturing how engineers develop products and how managers can enhance or undermine the engineers' (and products') success. Gordon MacKenzie's Orbiting the Giant Hairball is the most charming and useful book on corporate creativity we know. Good stories have their place in an evidence-based world, in suggesting hypotheses, augmenting other (often quantitative) research, and rallying people who will be affected by a change.

A History of Decision Making.

Excerpted from the Harvard Business Review.


Humans have perpetually sought new tools and insights to help them make decisions. From entrails to artificial intelligence, what a long, strange trip it's been

SOMETIME IN THE MIDST OF THE LAST CENTURY, Chester Barnard, a retired telephone executive and author of The Functions of the Executive, imported the term "decision making" from the lexicon of public administration into the business world. There it began to replace narrower descriptors such as "resource allocation" and "policy making." The introduction of that phrase changed how managers thought about what they did and spurred a new crispness of action and desire for conclusiveness, argues William Starbuck, professor in residence at the University of Oregon's Charles H. Lundquist College of Business. "Policy making could go on and on endlessly, and there are always resources to be allocated," he explains. "'Decision' implies the end of deliberation and the beginning of action."

So Barnard--and such later theorists as James March, Herbert Simon, and Henry Mintzberg--laid the foundation for the study of managerial decision making. But decision making within organizations is only one ripple in a stream of thought flowing back to a time when man, facing uncertainty, sought guidance from the stars. The questions of who makes decisions, and how, have shaped the world's systems of government, justice, and social order. "Life is the sum of all your choices," Albert Camus reminds us. History, by extrapolation, equals the accumulated choices of all mankind.

The study of decision making, consequently, is a palimpsest of intellectual disciplines: mathematics, sociology, psychology, economics, and political science, to name a few. Philosophers ponder what our decisions say about ourselves and about our values; historians dissect the choices leaders make at critical junctures. Research into risk and organizational behavior springs from a more practical desire: to help managers achieve better outcomes. And while a good decision does not guarantee a good outcome, such pragmatism has paid off. A growing sophistication with managing risk, a nuanced understanding of human behavior, and advances in technology that support and mimic cognitive processes have improved decision making in many situations.

Even so, the history of decision-making strategies is not one of unalloyed progress toward perfect rationalism. In fact, over the years we have steadily been coming to terms with constraints--both contextual and psychological--on our ability to make optimal choices. Complex circumstances, limited time, and inadequate mental computational power reduce decision makers to a state of "bounded rationality," argues Simon. While Simon suggests that people would make economically rational decisions if only they could gather enough information, Daniel Kahneman and Amos Tversky identify factors that cause people to decide against their economic interest even when they know better. Antonio Damasio draws on work with brain-damaged patients to demonstrate that in the absence of emotion it is impossible to make any decisions at all. Erroneous framing, bounded awareness, excessive optimism: the debunking of Descartes's rational man threatens to swamp our confidence in our choices, with only improved technology acting as a kind of empirical breakwater.

Faced with the imperfectability of decision making, theorists have sought ways to achieve, if not optimal outcomes, at least acceptable ones. Gerd Gigerenzer urges us to make a virtue of our limited time and knowledge by mastering simple heuristics, an approach he calls "fast and frugal" reasoning. Amitai Etzioni proposes "humble decision making," an assortment of nonheroic tactics that include tentativeness, delay, and hedging. Some practitioners, meanwhile, have simply reverted to the old ways. Last April, a Japanese television equipment manufacturer turned over its $20 million art collection to Christie's when the auction house trounced archrival Sotheby's in a high-powered round of rock-paper-scissors, a game that may date back as far as Ming Dynasty China.

In this special issue on decision making, our focus--as always--is on breaking new ground. What follows is a glimpse of the bedrock that lies beneath that ground.

Chances Are

RISK IS AN INESCAPABLE PART OF EVERY DECISION. For most of the everyday choices people make, the risks are small. But on a corporate scale, the implications (both upside and downside) can be enormous. Even the tritely expressed (and rarely encountered) win-win situation entails opportunity costs in the form of paths not taken.

To make good choices, companies must be able to calculate and manage the attendant risks. Today, myriad sophisticated tools can help them do so. But it was only a few hundred years ago that the risk management tool kit consisted of faith, hope, and guesswork. That's because risk is a numbers game, and before the seventeenth century, humankind's understanding of numbers wasn't up to the task. Most early numbering methods were unwieldy, as anyone knows who has tried to multiply XXIII by VI. The Hindu-Arabic numeral system (which, radically, included zero) simplified calculations and enticed philosophers to investigate the nature of numbers. The tale of our progression from those early fumblings with base 10 is masterfully told by Peter Bernstein in Against the Gods: The Remarkable Story of Risk.

Bernstein's account begins in the dark days when people believed they had no control over events and so turned to priests and oracles for clues to what larger powers held in store for them. It progresses quickly to a new interest in mathematics and measurement, spurred, in part, by the growth of trade. During the Renaissance, scientists and mathematicians such as Girolamo Cardano mused about probability and concocted puzzles around games of chance. In 1494, a peripatetic Franciscan monk named Luca Pacioli proposed "the problem of points"--which asks how one should divide the stakes in an incomplete game. Some 150 years later, French mathematicians Blaise Pascal and Pierre de Fermat developed a way to determine the likelihood of each possible result of a simple game (balla, which had fascinated Pacioli). But it wasn't until the next century, when Swiss scholar Daniel Bernoulli took up the study of random events, that the scientific basis for risk management took shape.
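The Pascal-Fermat answer to Pacioli's problem can be sketched in a few lines. The idea: imagine the interrupted game played out to its forced conclusion, count the equally likely futures in which each player wins, and split the pot in that proportion. The sketch below assumes a fair game of independent rounds; the function name and interface are illustrative, not from any of the works cited.

```python
from math import comb

def share_of_stakes(a_needs: int, b_needs: int) -> float:
    """Fraction of the pot owed to player A when a fair game is
    interrupted with A needing `a_needs` more wins and B needing
    `b_needs`. At most a_needs + b_needs - 1 further rounds settle
    it, so enumerate those equally likely outcomes and count the
    ones in which A collects enough wins."""
    n = a_needs + b_needs - 1              # rounds that decide the game
    favorable = sum(comb(n, k) for k in range(a_needs, n + 1))
    return favorable / 2 ** n

# Interrupted at 2 wins to 1 in a first-to-3 match: the leader
# needs one more win, the trailer two.
print(share_of_stakes(1, 2))   # 0.75 -> leader takes three-quarters
```

The 3-to-1 split in this case is the classic result of the Pascal-Fermat correspondence: the leader wins unless the trailer takes both remaining rounds, which happens in only one of four equally likely futures.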

Bernoulli (who also introduced the far-reaching concept of human capital) focused not on events themselves but on the human beings who desire or fear certain outcomes to a greater or lesser degree. His intent, he wrote, was to create mathematical tools that would allow anyone to "estimate his prospects from any risky undertaking in light of [his] specific financial circumstances." In other words, given the chance of a particular outcome, how much are you willing to bet?
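Bernoulli's proposal can be made concrete with the logarithmic utility he put forward: value a gamble not by its average payoff but by the average logarithm of the wealth it leaves you with, which makes the same bet worth different amounts to bettors of different means. The sketch below is illustrative (the function name and interface are our own, not Bernoulli's notation).

```python
from math import exp, log

def certainty_equivalent(wealth: float, gamble: list[tuple[float, float]]) -> float:
    """Bernoulli-style valuation of a risky undertaking: score each
    outcome by the logarithm of final wealth, take the probability-
    weighted average, and convert back into the guaranteed sum this
    particular bettor would accept instead of the gamble.
    `gamble` is a list of (probability, payoff) pairs."""
    expected_log = sum(p * log(wealth + payoff) for p, payoff in gamble)
    return exp(expected_log) - wealth

coin_flip = [(0.5, 100.0), (0.5, 0.0)]
# The same 50/50 shot at 100 is worth less than its "expected value"
# of 50 -- and worth less to a poor bettor than to a rich one.
print(round(certainty_equivalent(100, coin_flip), 1))     # 41.4
print(round(certainty_equivalent(10_000, coin_flip), 1))  # 49.9
```

This is exactly the dependence on "specific financial circumstances" Bernoulli had in mind: as wealth grows, the bettor's valuation approaches the plain expected value, and the premium demanded for bearing risk shrinks.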

In the nineteenth century, other scientific disciplines became fodder for the risk thinkers. Carl Friedrich Gauss brought his geodesic and astronomical research to bear on the bell curve of normal distribution. The insatiably curious Francis Galton came up with the concept of regression to the mean while studying generations of sweet peas. (He later applied the principle to people, observing that few of the sons--and fewer of the grandsons--of eminent men were themselves eminent.)

But it wasn't until after World War I that risk gained a beachhead in economic analysis. In 1921, Frank Knight distinguished between risk, when the probability of an outcome is possible to calculate (or is knowable), and uncertainty, when the probability of an outcome is not possible to determine (or is unknowable)--an argument that rendered insurance attractive and entrepreneurship, in Knight's words, "tragic." Some two decades later, John von Neumann and Oskar Morgenstern laid out the fundamentals of game theory, which deals in situations where people's decisions are influenced by the unknowable decisions of "live variables" (aka other people).
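The flavor of von Neumann and Morgenstern's result can be seen in the simplest zero-sum game of all, rock-paper-scissors (the same game that later settled the Christie's auction). Even though the opposing "live variable" is unknowable, a player who randomizes uniformly guarantees an expected payoff of zero against any opponent whatsoever. A minimal sketch, with illustrative names:

```python
# Payoff to the "row" player in rock-paper-scissors: +1 win, 0 tie, -1 loss.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine: str, theirs: str) -> int:
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_payoff(my_mix, their_mix):
    """Expected payoff when both sides randomize independently,
    with `my_mix` and `their_mix` giving probabilities over MOVES."""
    return sum(p * q * payoff(m, t)
               for m, p in zip(MOVES, my_mix)
               for t, q in zip(MOVES, their_mix))

# The minimax guarantee for this zero-sum game: the uniform mix
# expects zero no matter what the opponent decides to do.
uniform = [1 / 3, 1 / 3, 1 / 3]
for their_mix in ([1, 0, 0], [0.5, 0.5, 0], [0.2, 0.3, 0.5]):
    print(abs(expected_payoff(uniform, their_mix)) < 1e-9)  # True each time
```

Whether rock-paper-scissors really descends from Ming-era China, the mathematics is modern: the uniform mix is the game's unique equilibrium strategy, which is why no amount of cleverness can beat it in expectation.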

Today, of course, corporations try to know as much as is humanly and technologically possible, deploying such modern techniques as derivatives, scenario planning, business forecasting, and real options. But at a time when chaos so often triumphs over control, even centuries' worth of mathematical discoveries can do only so much. Life "is a trap for logicians," wrote the novelist G.K. Chesterton. "Its wildness lies in wait."

The Meeting of Minds

IN THE FIFTH CENTURY BC, Athens became the first (albeit limited) democracy. In the seventeenth century, the Quakers developed a decision-making process that remains a paragon of efficiency, openness, and respect. Starting in 1945, the United Nations sought enduring peace through the actions of free peoples working together.

There is nobility in the notion of people pooling their wisdom and muzzling their egos to make decisions that are acceptable--and fair--to all. During the last century, psychologists, sociologists, anthropologists, and even biologists (studying everything from mandrills to honeybees) eagerly unlocked the secrets of effective cooperation within groups. Later, the popularity of high-performance teams, coupled with new collaborative technologies that made it "virtually" impossible for any man to be an island, fostered the collective ideal.

The scientific study of groups began, roughly, in 1890, as part of the burgeoning field of social psychology. In 1918, Mary Parker Follett made a passionate case for the value of conflict in achieving integrated solutions in The New State: Group Organization--The Solution of Popular Government. A breakthrough in understanding group dynamics occurred just after World War II, sparked--oddly enough--by the U.S. government's wartime campaign to promote the consumption of organ meat. Enlisted to help, psychologist Kurt Lewin discovered that people were more likely to change their eating habits if they thrashed the subject out with others than if they simply listened to lectures about diet. His influential "field theory" posited that actions are determined, in part, by social context and that even group members with very different perspectives will act together to achieve a common goal.

Over the next decades, knowledge about group dynamics and the care and feeding of teams evolved rapidly. Victor Vroom and Philip Yetton established the circumstances under which group decision making is appropriate. R. Meredith Belbin defined the components required for successful teams. Howard Raiffa explained how groups exploit "external help" in the form of mediators and facilitators. And Peter Drucker suggested that the most important decision may not be made by the team itself but rather by management about what kind of team to use.

Meanwhile, research and events collaborated to expose collective decision making's dark underbelly. Poor group decisions--of the sort made by boards, product development groups, management teams--are often attributed to the failure to mix things up and question assumptions. Consensus is good, unless it is achieved too easily, in which case it becomes suspect. Irving Janis coined the term "groupthink" in 1972 to describe "a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action." In his memoir, A Thousand Days, former Kennedy aide Arthur Schlesinger reproached himself for not objecting during the planning for the Bay of Pigs invasion: "I can only explain my failure to do more than raise a few timid questions by reporting that one's impulse to blow the whistle on this nonsense was simply undone by the circumstances of the discussion."

It seems that decisions reached through group dynamics require, above all, a dynamic group. As Clarence Darrow neatly put it: "To think is to differ."

Thinking Machines

COMPUTER PROFESSIONALS EULOGIZE XEROX PARC OF THE 1970S as a technological Eden where some of today's indispensable tools sprouted. But comparable vitality and progress were evident two decades earlier at the Carnegie Institute of Technology in Pittsburgh. There, a group of distinguished researchers laid the conceptual--and in some cases the programming--foundation for computer-supported decision making.

Future Nobel laureate Herbert Simon, Allen Newell, Harold Guetzkow, Richard M. Cyert, and James March were among the CIT scholars who shared a fascination with organizational behavior and the workings of the human brain. The philosopher's stone that alchemized their ideas was electronic computing. By the mid-1950s, transistors had been around less than a decade, and IBM would not ship its groundbreaking System/360 mainframe until 1965. But already scientists were envisioning how the new tools might improve human decision making. The collaborations of these and other Carnegie scientists, together with research by Marvin Minsky at the Massachusetts Institute of Technology and John McCarthy of Stanford, produced early computer models of human cognition--the embryo of artificial intelligence.

AI was intended both to help researchers understand how the brain makes decisions and to augment the decision-making process for real people in real organizations. Decision support systems, which began appearing in large companies toward the end of the 1960s, served the latter goal, specifically targeting the practical needs of managers. In a very early experiment with the technology, managers used computers to coordinate production planning for laundry equipment, Daniel Power relates. Over the next decades, managers in many industries applied the technology to decisions about investments, pricing, advertising, and logistics, among other functions.

But while technology was improving operational decisions, it was still largely a cart horse for hauling rather than a stallion for riding into battle. Then in 1979, John Rockart published the HBR article "Chief Executives Define Their Own Data Needs," proposing that systems used by corporate leaders ought to give them data about the key jobs the company must do well to succeed. That article helped launch "executive information systems," a breed of technology specifically geared toward improving strategic decision making at the top. In the late 1980s, a Gartner Group consultant coined the term "business intelligence" to describe systems that help decision makers throughout the organization understand the state of their company's world. At the same time, a growing concern with risk led more companies to adopt complex simulation tools to assess vulnerabilities and opportunities.

In the 1990s, technology-aided decision making found a new customer: customers themselves. The Internet, which companies hoped would give them more power to sell, instead gave consumers more power to choose from whom to buy. In February 2005, the shopping search service BizRate reports, 59% of online shoppers visited aggregator sites to compare prices and features from multiple vendors before making a purchase, and 87% used the Web to size up the merits of online retailers, catalog merchants, and traditional retailers.

Unlike executives making strategic decisions, consumers don't have to factor what Herbert Simon called "zillions of calculations" into their choices. Still, their newfound ability to make the best possible buying decisions may amount to technology's most significant impact to date on corporate success--or failure.

The Romance of the Gut

"GUT," ACCORDING TO THE FIRST DEFINITION IN MERRIAM-WEBSTER'S LATEST EDITION, means "bowels." But when Jack Welch describes his "straight from the gut" leadership style, he's not talking about the alimentary canal. Rather, Welch treats the word as a conflation of two slang terms: "gut" (meaning emotional response) and "guts" (meaning fortitude, nerve).

That semantic shift--from human's stomach to lion's heart--helps explain the current fascination with gut decision making. From our admiration for entrepreneurs and firefighters, to the popularity of books by Malcolm Gladwell and Gary Klein, to the outcomes of the last two U.S. presidential elections, instinct appears ascendant. Pragmatists act on evidence. Heroes act on guts. As Alden Hayashi writes in "When to Trust Your Gut" (HBR February 2001): "Intuition is one of the X factors separating the men from the boys."

We don't admire gut decision makers for the quality of their decisions so much as for their courage in making them. Gut decisions testify to the confidence of the decision maker, an invaluable trait in a leader. Gut decisions are made in moments of crisis when there is no time to weigh arguments and calculate the probability of every outcome. They are made in situations where there is no precedent and consequently little evidence. Sometimes they are made in defiance of the evidence, as when Howard Schultz bucked conventional wisdom about Americans' thirst for a $3 cup of coffee and Robert Lutz let his emotions guide Chrysler's $80 million investment in a $50,000 muscle car. Financier George Soros claims that back pains have alerted him to discontinuities in the stock market that have made him fortunes. Such decisions are the stuff of business legend.

Decision makers have good reasons to prefer instinct. In a survey of executives that Jagdish Parikh conducted when he was a student at Harvard Business School, respondents said they used their intuitive skills as much as they used their analytical abilities, but they credited 80% of their successes to instinct. Henry Mintzberg explains that strategic thinking cries out for creativity and synthesis and thus is better suited to intuition than to analysis. And a gut is a personal, nontransferable attribute, which increases the value of a good one. Readers can parse every word that Welch and Lutz and Rudolph Giuliani write. But they cannot replicate the experiences, thought patterns, and personality traits that inform those leaders' distinctive choices.

Although few dismiss outright the power of instinct, there are caveats aplenty. Behavioral economists such as Daniel Kahneman, Robert Shiller, and Richard Thaler have described the thousand natural mistakes our brains are heir to. And business examples are at least as persuasive as behavioral studies. Michael Eisner (Euro Disney), Fred Smith (ZapMail), and Soros (Russian securities) are among the many good businesspeople who have made bad guesses, as Eric Bonabeau points out in his article "Don't Trust Your Gut" (HBR May 2003).

Of course the gut/brain dichotomy is largely false. Few decision makers ignore good information when they can get it. And most accept that there will be times they can't get it and so will have to rely on instinct. Fortunately, the intellect informs both intuition and analysis, and research shows that people's instincts are often quite good. Guts may even be trainable, suggest John Hammond, Ralph Keeney, Howard Raiffa, and Max Bazerman, among others.

In The Fifth Discipline, Peter Senge elegantly sums up the holistic approach: "People with high levels of personal mastery…cannot afford to choose between reason and intuition, or head and heart, any more than they would choose to walk on one leg or see with one eye." A blink, after all, is easier when you use both eyes. And so is a long, penetrating stare.

Reprint R0601B


By Leigh Buchanan and Andrew O'Connell

Leigh Buchanan is a senior editor at HBR.

Andrew O'Connell is a manuscript editor at HBR.