Saturday, February 11, 2006

Forget about going to hell in a handbasket. We don't have time for the handbasket.

The pace of life is becoming ridiculous.

---

Twelve Easy Pieces

By JON MOOALLEM

Apples to Apples

For years, suspicion has been growing in the orchards of the Wenatchee Valley in Washington State and in the food industry at large that fruit, nature's original hand-held convenience food, is simply too poorly designed for today's busy eater. The apple, for instance: whatever it has meant to Americans over the years — from mom's pie to the little red schoolhouse — getting our mouths around one has also apparently meant some unspoken aggravation. Next to a banana or a grape, it's a daunting strongbox of a fruit, prohibitively so for anyone with braces or dentures; and even if you can break in, there's no guarantee a given apple will eat as sweet as it looks.

Nearly half of Americans now consume most of their meals away from home or on the go, utilizing an expedient fleet of Go-Gurts, drinkable soups and cereal bars, while bags of prewashed salad and baby carrots await us at home. Given how many foods we've been able to tweak or outright reinvent to fit into our harried lives, who could take seriously the Granny Smith — which, not unlike the bayonet or the daguerreotype, is by contemporary standards a cumbersome and unreliable technology? What appeal could an apple have left but nostalgia, the kind of thing you'd find at Restoration Hardware beside a galvanized watering can?

"A bowl of apples is like a piece of art," says Tony Freytag, marketing director at Crunch Pak, an apple-processing company. "It's display. People won't touch it. But you put out a tray of cut-up apples — that's food."

Since helping found Crunch Pak in 2000, Freytag has become one pioneer in a rapidly growing industry: packaging bags of ordinary-looking, fresh slices of apples that, bathed in an all-natural flavorless sealant, won't turn brown or lose their crisp for up to three weeks. Last year, McDonald's stocked 54 million pounds of presliced apples, to sell with caramel dip or in salads, and this increased visibility boosted enthusiasm for them in school cafeterias and among time-strapped, health-conscious parents nationwide. Crunch Pak now slices and packs apples under its own name for Wal-Mart and nearly a dozen other chains, under the in-house brand at Whole Foods and for the organic bagged-salad giant Earthbound Farm. In loose one-pound bags (about $2.99), eight-packs of two-ounce snack pouches (about $3.99) or six-ounce cupholder-ready canisters (about $1.99), they have slipped onto refrigerated shelves among packages of herbs and cantaloupe cubes.

What's astounding is how little these apples ask of us, particularly the nearly two-thirds of us who claim we'd eat more fruit if it didn't go bad "too soon" — meaning, presumably, before we wished it would. Now open your fridge after 17, 18, 21 days of neglect, and you will find the pre-sliced modern apple abiding: a bag of pristine white crescents, still smiling. In short, these are the most utilitarian apples mankind has ever built.

Not since the canneries of the early 20th century have food processors sought merely to preserve perishables. Processing foods now means redesigning them, making them easier to eat for a population that is steadily less willing to go to any trouble at all. Given the childhood obesity epidemic and the longstanding economic troubles of America's apple growers, boosting the apple's performance so that it could, as an industry observer explained, "stand up to ordinary use," was a doubly urgent project. By making a healthful, fresh fruit that looks and acts more like a bag of chips, a handful of companies like Crunch Pak may have finally figured out a way to compete with the hassle-free junk foods that blazed into this era of hyperconvenience. Some marketers say that the reformation of our venerable apple — and the sense that this improvement was necessary — suggest that we may soon buy most of our produce this way. Presliced plums, celery, tomatoes, sweet potatoes, mangoes and star fruits are all in production.

Crunch Pak was one of a handful of companies that labored to bring the new apple on line. Each found early on that what can be done casually at home — slicing an apple and squeezing lemon juice on it — is maddeningly difficult to pull off in a factory. The anti-browning bath is only one movement in a grand symphony of technologies at work. For nearly two decades, teams of food scientists, engineers and can-do businessmen struggled to pin down the apple, while the apple skirted and ducked them at every turn. They zigged, the apple zagged. Clearing one hurdle only brought more into view, and even now the particulars of production must be reassessed and rejiggered daily. The apple, Freytag told me when we first met, "is a moving target."

Freytag, a 54-year-old, broad-shouldered Texan, stepped out into the central Washington heat last August from Crunch Pak's refrigerated packing facility, an expressionless gray concrete hangar just off the town of Cashmere's American-flag-lined thoroughfare. He had made an unlikely transition into fruit after years of product-development work for retailers like Neiman Marcus and Saks Fifth Avenue. He likes to build things, to solve things, and he often talks about slicing apples as an "equation" — a matrix of "variables" and "commonalities" that may never be fully controlled but, with enough persistence, can be made adequately algorithmic.

Throughout the Wenatchee Valley behind us, the first apples, Galas, were rioting into ripeness. Soon they'd arrive at Crunch Pak, and immediately, their vital signs would be taken: sugars, starches, internal pressure in pounds per square inch. The company must understand each crop's singular specifications in the same way a prizefighter studies the height, weight and reach of his opponent. The apple's wildness, so to speak, must be thoroughly assessed before the faintly Rube Goldbergian machinery goes about transforming it into something more sophisticated — an apple in 12 slices, with a compelling, albeit elusive, advantage.

But what precisely makes cutting up a piece of fruit worth the tremendous saga that preceded it? Why, after thousands of years of eating apples, are we losing our patience for them, and where, if not at the apple, will it stop? Strangely, this is one puzzle that the characteristically industrious Freytag seemed uninterested or unable to solve.

"I had a real problem in the beginning, as a consumer, paying $3 for a bag of lettuce when I knew that lettuce cost 89 cents a head," he told me, seeming to venture an explanation. "And yet we all did it," he said, and left it at that.

Snackability

As soon as you slice into an apple, the apple mobilizes against you, swiftly and on many fronts. Chemical signals are broadcast, ramping up its production of the hormone ethylene, which encourages ripening, and increasing the rate at which the apple absorbs oxygen and gives off carbon dioxide — just as a person's breathing quickens after an injury.

The knife edge has meanwhile ruptured the architecture of countless cells. Substances normally compartmentalized within the fruit suddenly spill out and intermingle. Freed enzymes seep into cell walls, softening them. Little by little, like moisture in the walls of a house, this enfeebles the entire apple. Phytochemicals called phenols, the wealth of which makes apples so good for us, are also loosed. So is an enzyme called polyphenol oxidase. Inevitably the two meet. Their reaction begets others, producing a chain of compounds that clump together and coagulate into brown, unappetizing pigments called polyphenols. Rapidly, they occupy more and more of the flesh. The apple, left absent-mindedly on the counter when the phone rang or the baby started crying, has now efficiently disfigured itself. It is spectacularly good at this.

With no way to tame this mayhem — to stop a wounded apple from "turning its artillery on itself," as James Gorny, a food scientist, put it recently — the apple industry was unable to get in on the baby-carrot and bagged-lettuce booms of the mid-90's. Most anti-browning treatments left sliced apples with a sour taste or had names like "4-hexylresorcinol," which would muddy the fruit's biggest selling point, its wholesome image.

"We studied this at the Apple Commission in the 80's," says Steve Lutz, a second-generation apple man who headed the Washington Apple Commission until 2000. "At the time, it was sort of like the Holy Grail: if we could just figure out how to slice these goddarn things."

Throughout the last century, food makers have used convenience, or at least its appearance, to add value to small-margin commodities like apples. In the 1960's, new processing plants in Washington State made frozen juice concentrate from the boxcars of unmarketable fruits that growers had dumped in the Columbia River each year. Such "value-added products" were right in line with the postwar bounty of timesaving foodstuffs emblematized by the TV dinner.

But with the fracturing of the family meal, innovations now center on ease of use, not ease of preparation. True convenience now means being eaten with one hand, no utensils, outside the home and alone. Glass jars of applesauce gave way to single-serving snack packs; more recently, as with Mott's "Fruit Blasters" and Birds Eye "Squeezle-Sauz," to applesauce pressed like epoxy from tubes.

Industry insiders now talk about elevating a food's "snackability," which, in short, means engineering it with enough convenience that picking up a piece and putting it in your mouth becomes an almost perfunctory transaction. A snackable food is crumbless and fussless. It is most likely broken into bite-size pieces, encouraging us to eat more. If the food's form itself doesn't imply a portion size — the way, say, one apple or one cupcake does — there's no obvious signal to stop. This triggers what one marketer, Barb Stuckey, calls "mindless munching" — the hand's almost hypnotic back and forth between bag and mouth. Stuckey also explains that plucking at individualized little pieces of something is just "more fun" than dealing with a chunkier whole. So pretzels are cut down into stumpy pretzel bits, doughnuts broken into doughnut holes and regular-size carrots scrubbed and lathed into several "baby" ones. Even the Kit Kat — a candy bar designed to be easily handled in the first place — is now available as a bag of smaller Kit Kat Bites.

Still, in light of the incident in Eden, the proposition that a whole apple isn't a particularly tempting item to reach for and bite into may sound absurd. It is not. Research done by the Washington Apple Commission and Mantrose-Haueser, the company that eventually solved the browning issue, sought to establish, empirically, just this. One study showed Florida school students eating slices twice as often as whole apples. In Nevada, elementary-school kids ate nearly triple the fruit when offered apple slices over whole — and this study even measured how much of each ended up in the garbage, since a whole apple might be tossed after a few bites.

"This all started with children," Freytag says. "I think what parents saw is, here's some piece of fruit my child will actually eat." Every parent seems to understand his or her small-mouthed, small-handed child won't get very far with a craggy Red Delicious, and Lutz says that everyone knew that "if it came down to the consumer standing there, cutting it up themselves, putting some lemon juice on it and putting it in a Ziploc bag, they're just not going to do it." More troubling still, research suggests that unless kids develop a taste for apples early, roughly between the ages of 2 and 11, they may never. But children with baby teeth may have trouble eating a whole apple. And with the American Association of Orthodontists recommending that kids get their first consultation by age 7, that all-important window in which to hook people for life was being truncated at both ends.

The dream of a presliced apple gained particular urgency during the economic free fall of the 1998-99 crop year. The crop from 1998 was the largest to date, up about 15 percent to 277 million bushels. Yet overall U.S. consumption of apples had been flat for a generation. (Americans eat less than half as many pounds of apples yearly as Europeans do.) Export markets were drying up, and cheaper Chinese apples were coming in to compete. Meanwhile, new exotic-seeming varieties like Braeburns and Galas were driving Red Delicious prices down. They were also refining tastes. A 1999 Associated Press article cited complaints that "the Red Delicious apple — the symbol of Washington's apple industry — too often is about as delicious as a wad of cotton."

Even without a way to solve the browning issue, Lutz pitched the Subway sandwich chain on slices served with caramel dip, just like the Apple Dippers that McDonald's sells today. He called them Submersibles and proposed, naïvely, shipping Subway boxes of whole apples and installing a wall-mounted slicer in each franchise — burdening the kid behind the counter with cutting them. "We lost out to cookies," he says.

The commission also introduced a campaign featuring a scooter-riding superhero, Apple Guy, who in one TV commercial delivered energizing apples to a weary father and daughter at a zoo. Apple Guy struggled to convince a demographic that the commission dubbed "Stressed Moms" that the apple — as is — was in fact a convenient, snackable snack. He failed.

Clearly all fruit was now competing with processed snacks that, over time, had swayed people's expectations. No matter how convenient the old apple was, it wasn't branded correctly or sending the right signals. No one had done anything to it, vouched for it or made it just for you.

"You're being outspent 100 to 1 by items that are prepackaged, easy to eat and also totally consistent," Lutz says. (According to one study, in 2000, apples, bananas and oranges together accounted for less than 6 percent of the $80 billion snack-food market.) "I mean, with apples, you're growing them on a tree. You can only put out what Mother Nature gives you." He adds that the path from orchard to eater is fraught with potential stresses. Washington growers have to pick all their apples in late summer and fall. They hold them in oxygen-depleted storage rooms and then trickle them, via various distribution centers, into supermarkets throughout the rest of the year. There's no way for anyone to know how a particular apple has weathered all of this without biting into it. And a bright red apple that greets the teeth like a sponge amounts to a kind of betrayal.

"People were quite discriminating and quite punitive," says Desmond O'Rourke, an economist who has studied apple marketing for four decades, describing consumer studies done at the time. "If they got an apple they didn't like, they wouldn't buy apples again" for some time. It seemed one bad apple really could spoil the whole bunch. No one, after all, walks down the chip-and-cracker aisle squeezing bags of Fritos to see which one is best.


Making the Cut

A few mornings after Christmas, Freytag and I looked up at the hulking assemblage of chutes, belts, flumes and fast-moving parts that take up a fourth of Crunch Pak's slicing room. Men and women in purple coats, hard hats and masks were positioned at the opposite end, sorting apple pieces.

It wasn't until we climbed a ladder onto a little platform that I could see them: a slow, thick traffic of apples entering the room through a stainless-steel canal of frigid water, 10 feet off the ground. They were hefty and red-speckled, their tops bobbing in a tight phalanx. The odd one sported a few leaves.

These were Galas, which, along with Pink Ladies, Crunch Pak sells simply as "Sweet." (Granny Smiths are labeled "Tart.") They'd been picked around the time of my last visit in August and sealed in storage ever since. This had slowed, but not completely halted, their ripening.

The apple spends its entire life ripening, slowly converting its starches into sugars and making its solids soluble. The exact chemistry inside even apples of the same variety will be different on every day of the season. And once picked, Freytag explained, "that apple is still a living, breathing thing. Everything from the cellular structure, to the sugars, the starches — when you store it, nothing in that apple will stay the same."

All this variability makes it extremely tough to convert those thousands of idiosyncratic fruits into baggies of uniform, ultrareliable snacks. Over the years, Crunch Pak has been building a sweeping apple database. As the company constantly monitors and tweaks everything in the slicing room — the anti-browning treatments, water temperature, air temperature — it correlates those variables to the condition of the slices churned out. Entering any bag's 11-digit code gives a snapshot of how the processing floor was configured at the time it was sealed. The company is laboring, like a cryptographer, to decode the apple and predict with greater and greater confidence how any given one will react.

"It's all part of the equation," Freytag told me, shouting over the din of rushing water and the machinery's thwacks. "We look at the apples and have to figure out, what are the physiological issues inside that apple? And what are we going to have to do to sustain and accommodate them?" Among other things, an apple's specific chemistry dictates the optimal concentration of NatureSeal, the blend of vitamin C and calcium that finally made the nonbrowning apple slice a practicable proposition.

Though it may sound like glorified lemon juice, NatureSeal is the product of a decade of U.S.D.A. and private research. It's a flavorless white powder that, mixed with water, penetrates a few millimeters beneath the surface of a cut apple. (According to Crunch Pak, the sliced apples don't lose any nutritional value and in fact NatureSeal ends up fortifying each apple with 200 percent of the daily requirement of Vitamin C.) The ascorbic acid in NatureSeal searches out and bonds to the loose phenols, blocking them off from the polyphenol oxidase enzyme and interrupting the browning reaction. The calcium salts work like cement to stiffen the fruit's softening cell walls. All of this happens inside the apple, so the solution leaves no perceptible layer or shell on the surface — unlike, say, the high-sheen shellac on Goobers, or the substrate on time-released pain relievers, both of which NatureSeal's producer, Mantrose-Haueser, also makes. Freytag first heard about NatureSeal while trying to figure out a way to slice pears for a packer in Oregon. When he received a sample at his house, the first thing he did was stick his finger in the powder and lick it. Surprisingly, there was no taste. Eventually, he sliced and treated some apples in his kitchen, then stuck them in the refrigerator in his garage to see how long they'd last.

"My son was 20-something, and he wasn't living with us," he remembered. "He'd just come over to eat. And I found that the apples were disappearing. He ate some at one point that were, like, 30 days old. So I said, 'How were the apples?'" Here Freytag affected a slightly stoned-sounding obliviousness: "'Great, Dad. No problem.'"

Soon enough, with the brown stuff out of the way, would-be processors could peer inside the split-open apple and see other problems awaiting them. Crunch Pak initially set up shop in a plant also used to slice onions, not realizing a cut apple soaks up odor like a sponge. The facility Freytag and I were now standing in was the company's third successively larger one since going into production in 2001.

The only precedent for nearly everything being done to the apples in the busy room was slicing apples for pie filling. Ugliness is of no consequence inside a pie, and the existing machinery wasn't averse to browbeating apples to cheaply and quickly dispatch them into the world's ovens.

Gradually each fresh-cut company had to retrofit or build from scratch a kinder network of apparatuses to coddle the apple: water flumes replaced conveyor belts, for example, and the drop height on the bagging equipment — originally used to bag lettuce — was reduced. Because competitors often work independently with equipment makers to address problems, the whole industry is now rife with stubbornly guarded secrets. It took four months and a nondisclosure agreement before Freytag would even let me see Crunch Pak cut some fruit.

He now reached to a bulky furnacelike compartment of the slicing machine and forcefully pulled its two panels apart. Inside, pistons fired and, centered in this confusion, six shining apples atop six separate cylinders spun furiously on their axes while the entire mechanism supporting them rotated. The whole thing resembled some science-fiction rib cage, as if the small, whirling apples inside were the organic parts powering the entire machine.

We were watching Galas be cored, but it was happening so rapidly that I had to take Freytag's word for it. Slices and stray seed-cell divots fell from a chute on our left in quick batches. Each was an apple, sliced in 12 segments, the optimum width to sustain crispness longer. Each of the six machines shears roughly 90 fruits per minute, 16 hours a day, six days a week, and feeds them to tanks of NatureSeal at the end of the line.

A lot is at stake in the split second the apple is sliced. A ragged cut rips open more cells and leads to more browning. Few producers slice Red Delicious, since the irregular humps on its blossom end prevent the machine from clutching it snugly. "A nice Granny Smith, or a Gala or Fuji," Ben Zamore, a sales manager at Atlas Pacific, the company that manufactures the slicing machines, says, "those are beautiful, well-shaped fruits." Meanwhile, much has been done, clandestinely, to hone the blade edges. Atlas regularly tests the efficiency of its blades at a lab at Colorado State University. Freytag wouldn't tell me anything about Crunch Pak's blades.

Cleaving an apple open obliterates its natural safeguard against contamination — its skin — and exposes the moist, absorptive surfaces inside. And since a cut apple browns in order to kill off a layer of itself and shield the rest, interrupting the browning with NatureSeal weakens the fruit's defenses further still.

"Once you've cut that apple open, you are challenging the integrity of the fruit," explains Richard Olsen of Tree Top, the nation's largest apple processor and the company supplying McDonald's with its slices. (Crunch Pak early on turned down a contract with McDonald's, wary in part of growing too fast. The company now dices apples for Arby's.) Anyone who slices the apple must therefore take responsibility for it — become the apple's immune system. Air in Crunch Pak's slicing room must be purified and pressurized. Everyone wears latex and plastic, and even I had to walk across a funny platform of spinning yellow shoe brushes before entering. With apples skating through a production line and intermixing, an infectious agent like listeria in one could put an entire day's output at risk.

Freytag likes to say: "When we first got into this business, what we were told is: 'O.K., an apple is an apple is an apple. And you take this solution, and you put an apple in it, and — Bingo! You've got sliced apples that don't turn brown.' If we knew then what we know now, we might not be in this business." Clearly this is more a boast than a confession.

Disgust

Freytag conducted Crunch Pak's first market research at Little League games. After dispensing samples, he would ask parents why they would pay extra for what effectively boils down to an apple-chopping service. He claims that the answer he heard most often was "Because I'd rather be here at the game watching my son play baseball than at home slicing apples."

Oddly, apple slicing was being denigrated as an unthinkably oppressive undertaking, a kind of punitive kitchen duty, evoking the G.I. obscured behind mounds of to-be-peeled potatoes.

"You look at the number of meals being eaten in automobiles," Steve Lutz says (research by John Nihoff, a Culinary Institute of America food historian, estimates that 19 percent of all meals or snacks in this country are eaten there), "and you'd think the apple is convenient already. But when you finish it, you have a core to deal with. You have waste. Plus, once you've started an apple, you're sort of committed to eating the whole thing."

"I don't think consumers are very comfortable leaving a half-eaten apple lying around their car or their house," Lutz adds. "I don't know of a single instance, and I can't imagine a single instance where a mother has said to her child, 'I know you can't eat this whole apple, but take two bites and we'll take the rest home and put it in your lunch for tomorrow.'"

This is a peculiar phenomenon of the food industry's escalating arms race of snackability: that once the minor hassles of a given food are eliminated, its original version can feel positively insufferable. "I call it the Starbucks effect," Lutz says. "Once you get used to Starbucks, you can't go back to Folgers. Even though you might have fond memories of Folgers."

Stranger still, I found, if we've become less willing to put up with the old apple, we may also be slightly less willing to stomach it.

Paul Rozin is a cultural psychologist at the University of Pennsylvania and, though he may not introduce himself this way at parties, an authority on disgust. Even in our brief phone conversation not long ago, Rozin assured me repeatedly that he personally enjoys biting into a big apple. He enjoys it very much, in fact. But he immediately intuited why others wouldn't and why, after snacking, we're increasingly more comfortable holding a spent plastic bag than an apple core.

"Because the bag doesn't have any of you in it!" he shot out, as though it were obvious. "The core is an extension of your tongue and your mouth, and the bag is not." We, it turns out, are the thing we find most disgusting.

"As the world gets more and more cleaned up of these things, and as you get highly sensitive to disgust, a bitten piece of food in your hand is not too nice," he posited. An eater of the whole apple must, with each bite, readdress his mouth to "the unsavoriness of the bitten edge in front of you." But eating apple slices means treating yourself to a clean, unspoiled, appealingly geometric shape every few seconds.

Apples do not feature prominently in the literature of disgust — not yet, at least. Meat is generally considered the most potentially disgusting food, since turning a carcass into an unthreatening, appetizing meal involves a nuanced psychological maneuver. ("No one who went into a supermarket would know that beef is from a cow," Rozin elaborated. "We've completely sanitized it.")

Analyzing a group of studies in the 1940's, Andras Angyal, a seminal psychologist of disgust, noted that, with the exception of a few slimy things, "no plant product was reported as disgusting." But given the metropolis of slickly packaged, processed foods we've built up around the apple, perhaps even a fruit might now seem crude in the same way as a raw pork tenderloin. Might the idyllic apple, too, need sanitizing, civilizing?

"That would be the next step," Rozin assented, later adding, "We all breathe each other's air. But if there were a way not to, I'm sure people would avoid doing it."

Savage Tastes

"It takes a savage or wild taste to appreciate a wild fruit," Henry David Thoreau wrote. In "Wild Apples," he went so far as to claim that a crabby, uncultivated apple can be enjoyed only if it's eaten outdoors. It will actually taste different indoors. "What is sour in the house," he wrote, "a bracing walk makes sweet."

We, like Thoreau, are used to eating while walking, though getting up and changing locations to accommodate a piece of fruit is laughably antithetical to the way we snack now. We process our food so we don't have to adapt to its eccentricities. We wedge foods into our lifestyles, and as they get crowded out or accelerated past, we give them the tools to catch us on the fly. But while a cookie or a Cheez Doodle can be reined in without much argument, an apple — like most healthful alternatives — has a will of its own.

"An apple is like us," Crunch Pak's general manager, Craig Carson, explained after my tour. "It's alive. If we don't handle it correctly, we can kill it."

Carson is a slim and serious-looking man who has been growing apples in the Wenatchee Valley for 25 years. He doesn't expect sliced apples to dominate in nearly the same way bagged salad has overtaken the lettuce business, he said. Cutting apples is simply one way to help growers, to make a healthy food more versatile, more likely to be eaten. Talking with him, I began to wonder if processing apples is not so unlike growing them. After all, Thoreau was offended that orchardists cultivated only a few varieties from nature's virtually infinite stock. And growers have long thinned out their boughs to promote fewer, bigger apples rather than the many, smaller ones more advantageous to the plant species.

This is to say that our integrity and the apple's have always been at odds, and historically, we haven't been the ones to budge. As our tastes skew in more unexpected directions, we expect more from our apples and must retrofit them accordingly or transform them. As a man named Welcome Sauer, who ran the Apple Commission's consumer research, remembers realizing: "What we were selling with the fresh-cut apple slice was not an apple anymore. What we were selling was a guilt-free snack food." It may be that companies like Crunch Pak are simply furthering the domestication of the apple, just more assertively and invasively than anyone has before.

Around the time of my visit in December, a new Nielsen report showed Crunch Pak controlling more than half of the presliced-apple market and the whole category vaulting up 300 percent from the previous year.

"When you see numbers like that," Freytag had told me, "I think back to when people would look at me like I was from Mars and say: 'Why would I want a sliced apple? I've got a whole apple' — and those were their exact words."

It was clear they'd come around as bagged apple slices smoothly exited the cutting room on a conveyor belt through the wall. There, more purple-clad employees stood waiting to box them up. Sufficiently enhanced, the apples were now bound for places like Costco and Albertsons and out into what — standing in the contraption-filled clean room on the opposite side of that wall — Freytag had maybe generously called "the normal world."

Jon Mooallem is a writer living in San Francisco. He has written for Harper's, The New Yorker and The Believer.

Copyright 2006 The New York Times Company

The need to believe.


In John They Trust

South Pacific villagers worship a mysterious American they call John Frum - believing he'll one day shower their remote island with riches

By Paul Raffaele

In the morning heat on a tropical island halfway across the world from the United States, several dark-skinned men—clad in what look to be U.S. Army uniforms—appear on a mound overlooking a bamboo-hut village. One reverently carries Old Glory, precisely folded to reveal only the stars. On the command of a bearded “drill sergeant,” the flag is raised on a pole hacked from a tall tree trunk. As the huge banner billows in the wind, hundreds of watching villagers clap and cheer.

Chief Isaac Wan, a slight, bearded man in a blue suit and ceremonial sash, leads the uniformed men down to open ground in the middle of the village. Some 40 barefoot “G.I.’s” suddenly emerge from behind the huts to more cheering, marching in perfect step and ranks of two past Chief Isaac. They tote bamboo “rifles” on their shoulders, the scarlet tips sharpened to represent bloody bayonets, and sport the letters “USA,” painted in red on their bare chests and backs.

This is February 15, John Frum Day, on the remote island of Tanna in the South Pacific nation of Vanuatu. On this holiest of days, devotees have descended on the village of Lamakara from all over the island to honor a ghostly American messiah, John Frum. “John promised he’ll bring planeloads and shiploads of cargo to us from America if we pray to him,” a village elder tells me as he salutes the Stars and Stripes. “Radios, TVs, trucks, boats, watches, iceboxes, medicine, Coca-Cola and many other wonderful things.”

The island’s John Frum movement is a classic example of what anthropologists have called a “cargo cult”—many of which sprang up in villages in the South Pacific during World War II, when hundreds of thousands of American troops poured into the islands from the skies and seas. As anthropologist Kirk Huffman, who spent 17 years in Vanuatu, explains: “You get cargo cults when the outside world, with all its material wealth, suddenly descends on remote, indigenous tribes.” The locals don’t know where the foreigners’ endless supplies come from and so suspect they were summoned by magic, sent from the spirit world. To entice the Americans back after the war, islanders throughout the region constructed piers and carved airstrips from their fields. They prayed for ships and planes to once again come out of nowhere, bearing all kinds of treasures: jeeps and washing machines, radios and motorcycles, canned meat and candy.

But the venerated Americans never came back, except as a dribble of tourists and veterans eager to revisit the faraway islands where they went to war in their youth. And although almost all the cargo cults have disappeared over the decades, the John Frum movement has endured, based on the worship of an American god no sober man has ever seen.

Many Americans know Vanuatu from the reality TV series “Survivor,” though the episodes shot there hardly touched on the Melanesian island nation’s spectacular natural wonders and fascinating, age-old cultures. Set between Fiji and New Guinea, Vanuatu is a Y-shaped scattering of more than 80 islands, several of which include active volcanoes. The islands were once home to fierce warriors, among them cannibals. Many inhabitants still revere village sorcerers, who use spirit-possessed stones in magic rituals that can lure a new lover, fatten a pig or kill an enemy.

Americans with longer memories remember Vanuatu as the New Hebrides—its name until its independence from joint British and French colonial rule in 1980. James Michener’s book Tales of the South Pacific, which spawned the musical South Pacific, grew out of his experiences as an American sailor in the New Hebrides in World War II.

My own South Pacific experience, in search of John Frum and his devotees, begins when I board a small plane in Vanuatu’s capital, Port-Vila. Forty minutes later, coral reefs, sandy beaches and green hills announce Tanna Island, about 20 miles long and 16 miles at its widest point, with a population of around 28,000. Climbing into an ancient jeep for the drive to Lamakara, which overlooks Sulphur Bay, I wait while Jessel Niavia, the driver, starts the vehicle by touching together two wires sticking out from a hole under the dashboard.

As the jeep rattles up a steep slope, the narrow trail slicing through the jungle’s dense green weave of trees and bushes, Jessel tells me that he is the brother-in-law of one of the cult’s most important leaders, Prophet Fred—who, he adds proudly, “raised his wife from the dead two weeks ago.”

When we reach the crest of a hill, the land ahead falls away to reveal Yasur, Tanna’s sacred volcano, a few miles to the south, its ash-coated slopes nudging the shoreline at Sulphur Bay. Dark smoke belches from its cone. “‘Yasur’ means God in our language,” Jessel murmurs. “It’s the house of John Frum.”

“If he’s an American, why does he live in your volcano?” I wonder aloud.

“Ask Chief Isaac,” he says. “He knows everything.”

Dotting the dirt road are small villages where women with curly, bubble-shaped hair squat over bundles of mud-coated roots called kava, a species of pepper plant and a middling narcotic that is the South Pacific’s traditional drug of choice. Connoisseurs say that Tanna’s kava is the strongest of all. Jessel buys a bundle of roots for 500 vatu, about $5. “We’ll drink it tonight,” he says with a grin.

For as long as Tanna’s inhabitants can remember, island men have downed kava at sunset each day in a place off-limits to women. Christian missionaries, mostly Presbyterians from Scotland, put a temporary stop to the practice in the early 20th century, also banning other traditional practices, or “kastom,” that locals had followed faithfully for millennia: dancing, penis wrapping and polygamy. The missionaries also forbade working and amusement on Sundays, swearing and adultery. In the absence of a strong colonial administrative presence, they set up their own courts to punish miscreants, sentencing them to forced labor. The Tannese seethed under the missionaries’ rules for three decades. Then, John Frum appeared.

The road drops steeply through more steamy jungle to the shoreline, around the point from Yasur, where I will stay in a hut on the beach. As the sun sets beyond the rain-forest-covered mountains that form Tanna’s spine, Jessel’s brother, Daniel Yamyam, arrives to fetch me. He has the soft-focus eyes and nearly toothless smile of a kava devotee. Daniel was once a member of Vanuatu’s Parliament in Port-Vila, and his constituents included John Frum followers from what was then the movement’s stronghold, Ipikil, on Sulphur Bay. “I’m now a Christian, but like most people on Tanna, I still have John Frum in my heart,” he says. “If we keep praying to John, he’ll come back with plenty of cargo.”

Daniel leads me to his village nakamal, the open ground where the men drink kava. Two young boys bend over the kava roots Jessel had purchased, chewing chunks of them into a stringy pulp. “Only circumcised boys who’ve never touched a girl’s body can make kava,” Daniel tells me. “That ensures that their hands are not dirty.”

Other boys mix water with the pulp and twist the mixture through a cloth, producing a dirty-looking liquid. Daniel hands me a half-coconut shell filled to the brim. “Drink it in one go,” he whispers. It tastes vile, like muddy water. Moments later my mouth and tongue turn numb.

The men split into small groups or sit by themselves, crouching in the darkness, whispering to each other or lost in thought. I toss back a second shell of the muddy mix, and my head tugs at its mooring, seeking to drift away into the night.

Yasur rumbles like distant thunder, a couple of miles over the ridge, and through the trees I glimpse an eerie red glow at its cone. In 1774, Capt. James Cook was lured ashore by that same glow. He was the first European to see the volcano, but local leaders banned him from climbing to the cone because it was taboo. Daniel assures me the taboo is no longer enforced. “Go with Chief Isaac,” he advises. “You can ask him tomorrow.”

After I drink my third shell of kava, Daniel peers into my undoubtedly glazed eyes. “I’d better take you back,” he says. By the seaside at my hut, I dance unsteadily to the rhythm of the waves as I try to pluck the shimmering moon from the sky and kiss it.

The next morning, I head to Lamakara to talk to Chief Isaac. Surrounded by an eerie doomsday moonscape of volcanic ash, Yasur looms behind the village. But at only 1,184 feet high, the sacred volcano has none of the majesty of, say, Mount Fuji; instead, its squat shape reminds me of a pugnacious bulldog standing guard before its master’s house. My driver points at the cone. “Haus blong John Frum,” he says in pidgin English. It’s John Frum’s house.

In the village dozens of cane huts, some with rusting tin roofs, encircle an open ceremonial dancing ground of impacted ash and the mound where the American flag flies each day, flanked by the much smaller flags of Vanuatu, ex-colonial ruler France and the Australian Aborigines, whose push for racial equality the villagers admire. Clearly, John Frum has yet to return with his promised cargo because Lamakara is dirt poor in consumer goods. But island men, wrapped in cloth known as lava-lava, women in large flowered dresses and mostly barefoot children in T-shirts appear healthy and seem happy. That’s no surprise: like many South Pacific coastal villages, it’s a place where coconuts drop by your side as you snooze. Yams, taro, pineapples and other fruit thrive in the fertile volcanic soil, and plump pigs sniff around the village for scraps. Tasty fruit bats cling upside down in nearby trees.

Chief Isaac, in an open-neck shirt, green slacks and cloth shoes, greets me on the mound and leads me into a hut behind the flagpoles: the John Frum inner sanctum, off-limits to all but the cult’s senior leaders and, it seems, male visitors from abroad. “Office blong me,” he says with a smile as we enter.

The hut is dominated by a round table displaying a small U.S. flag on a pedestal, a carved bald eagle and imitation U.S. military uniforms neatly folded and placed in a circle, ready for use on John Frum Day in a little more than a week. Above, suspended by vine from a beam, hangs a globe, a stone ax and a pair of green stones carved into circles the size of a silver dollar. “Very powerful magic,” the chief says as he points to the stones. “The gods made them a long time ago.”

Written on a pair of blackboards is a plea that John Frum’s followers lead a kastom life and that they refrain from violence against each other. One of the blackboards bears a chalked red cross, probably copied from U.S. military ambulances and now an important symbol for the cult.

“John Frum came to help us get back our traditional customs, our kava drinking, our dancing, because the missionaries and colonial government were deliberately destroying our culture,” Chief Isaac says, his pidgin English translated by Daniel.

“But if John Frum, an American, is going to bring you modern goods, how does that sit with his wish that you lead a kastom life?” I ask.

“John is a spirit. He knows everything,” the chief says, slipping past the contradiction with the poise of a skilled politician. “He’s even more powerful than Jesus.”

“Have you ever seen him?”

“Yes, John comes very often from Yasur to advise me, or I go there to speak with John.”

“What does he look like?”

“An American!”

“Then why does he live in Yasur?”

“John moves from America to Yasur and back, going down through the volcano and under the sea.”

When I mention Prophet Fred, anger flares in Chief Isaac’s eyes. “He’s a devil,” he snarls. “I won’t talk about him.”

“What about your visit to the United States in 1995?” I ask. “What did you think of your religion’s heaven on earth?” He raises his hands apologetically. “I have much to do today. I’ll tell you about it another time.” On the way back to my hut, it occurs to me that I forgot to ask him to take me to the volcano.

Chief Isaac and other local leaders say that John Frum first appeared one night in the late 1930s, after a group of elders had downed many shells of kava as a prelude to receiving messages from the spirit world. “He was a white man who spoke our language, but he didn’t tell us then he was an American,” says Chief Kahuwya, leader of Yakel village. John Frum told them he had come to rescue them from the missionaries and colonial officials. “John told us that all Tanna’s people should stop following the white man’s ways,” Chief Kahuwya says. “He said we should throw away their money and clothes, take our children from their schools, stop going to church and go back to living as kastom people. We should drink kava, worship the magic stones and perform our ritual dances.”

Perhaps the chieftains in their kava reveries actually experienced a spontaneous vision of John Frum. Or perhaps the apparition has more practical roots. It’s possible that local leaders conceived of John Frum as a powerful white-skinned ally in the fight against the colonials, who were attempting to crush much of the islanders’ culture and prod them into Christianity. In fact, that view of the origins of the cult gained credence in 1949, when the island administrator, Alexander Rentoul, noting that “frum” is the Tannese pronunciation of “broom,” wrote that the object of the John Frum movement “was to sweep (or broom) the white people off the island of Tanna.”

Whatever the truth, John Frum’s message struck a chord. Villagers on Tanna began throwing their money into the sea and killing their pigs for grand feasts to welcome their new messiah. Colonial authorities eventually struck back, arresting the movement’s leaders—including Chief Isaac’s father, Chief Nikiau. They were shipped to a prison at Port-Vila in 1941, their subsequent years behind bars earning them status as the John Frum movement’s first martyrs.

The cult got its biggest boost the following year, when American troops by the thousands were dispatched to the New Hebrides, where they built large military bases at Port-Vila and on the island of Espíritu Santo. The bases included hospitals, airstrips, jetties, roads, bridges and corrugated-steel Quonset huts, many erected with the help of more than a thousand men recruited as laborers from Tanna and other parts of the New Hebrides—among them Chief Kahuwya.

Where the U.S. armed forces go, so go the legendary PXs, with their seemingly endless supply of chocolate, cigarettes and Coca-Cola. For men who lived in huts and farmed yams, the Americans’ wealth was a revelation. The troops paid them 25 cents a day for their work and handed out generous amounts of goodies.

The Americans’ munificence dazzled the men from Tanna, as did the sight of dark-skinned soldiers eating the same food, wearing the same clothes, living in similar huts and tents and operating the same high-tech equipment as white soldiers. “In kastom, people sit together to eat,” says Kirk Huffman, who was the curator of Vanuatu’s cultural center during his years in the island nation. “The missionaries had angered the Tannese by always eating separately.”

It seems this is when the legend of John Frum took on a decidedly American character. “John Frum appeared to us in Port-Vila,” Chief Kahuwya says, “and stayed with us throughout the war. John was dressed in all white, like American Navy men, and it was then we knew John was an American. John said that when the war was over, he’d come to us in Tanna with ships and planes bringing much cargo, like the Americans had in Vila.”

In 1943, the U.S. command, concerned about the movement’s growth, sent the USS Echo to Tanna with Maj. Samuel Patten on board. His mission was to convince John Frum followers that, as his report put it, “the American forces had no connection with Jonfrum.” He failed. At war’s end, the U.S. military unwittingly enhanced the legend of their endless supply of cargo when they bulldozed tons of equipment—trucks, jeeps, aircraft engines, supplies—off the coast of Espíritu Santo. During six decades in the shallows, coral and sand have obscured much of the watery grave of war surplus, but snorkelers can still see tires, bulldozers and even full Coke bottles. The locals wryly named the place Million Dollar Point.

After the war, when they returned home from Port-Vila to their huts, the Tanna men were convinced that John Frum would soon join them, and hacked a primitive airstrip out of the jungle in the island’s north to tempt the expected American planes from the skies. Across the South Pacific, thousands of other cargo cult followers began devising similar plans—even building bamboo control towers strung with rope and bamboo aerials to guide in the planes. In 1964, one cargo cult on New Hanover Island in Papua New Guinea offered the U.S. government $1,000 for Lyndon Johnson to come and be their paramount chief. But as the years passed with empty skies and seas, almost all the cargo cults disappeared, the devotees’ hopes crushed.

At Sulphur Bay the faithful never wavered. Each Friday afternoon, hundreds of believers stream across the ash plain below Yasur, coming to Lamakara from villages all over Tanna. After the sun goes down and the men have drunk kava, the congregation gathers in and around an open hut on the ceremonial ground. As light from kerosene lamps flickers across their faces, they strum guitars and homemade ukuleles, singing hymns of John Frum’s prophecies and the struggles of the cult’s martyrs. Many carry the same plea: “We’re waiting in our village for you, John. When are you coming with all the cargo you promised us?”

Threaded among the singers’ perfect harmonies is a high-pitched Melanesian keening that hones each hymn with a yearning edge. I look around in vain for Chief Isaac until a senior man in the cult whispers that after drinking kava, Isaac has disappeared among the darkened trees to talk to John Frum. The weekly service doesn’t end until the sun comes back up, at seven the next morning.

“The John Frum movement is following the classic pattern of new religions,” says anthropologist Huffman. Schisms split clumps of faithful from the main body, as apostates proclaim a new vision leading to sacrilegious variants on the creed’s core beliefs.

Which explains Prophet Fred, whose village, Ipikil, is nestled on Sulphur Bay. Daniel says that Prophet Fred split with Chief Isaac in 1999 and led half of the believer villages into his new version of the John Frum cult. “He had a vision while working on a Korean fishing boat in the ocean,” Daniel says. “God’s light came down on him, and God told him to come home and preach a new way.” People believed that Fred could talk to God after he predicted, six years ago, that Lake Siwi would break its natural dam and flood into the ocean. “The people living around the lake [on the beach beneath the volcano] moved to other places,” says Daniel. “Six months later, it happened.”

Then, almost two years ago, Prophet Fred’s rivalry with Chief Isaac exploded. More than 400 young men from the competing camps clashed with axes, bows and arrows and slingshots, burning down a thatched church and several houses. Twenty-five men were seriously injured. “They wanted to kill us, and we wanted to kill them,” a Chief Isaac loyalist says.

A few days before Lamakara’s annual John Frum celebration, I visit Prophet Fred’s village—only to find that he’s gone to the island’s northern tip to preach, most likely to avoid the celebrations. Instead, I meet his senior cleric, Maliwan Tarawai, a barefoot pastor carrying a well-thumbed Bible. “Prophet Fred has called his movement Unity, and he’s woven kastom, Christianity and John Frum together,” Tarawai tells me. The American messiah is little more than a figurehead in Fred’s version, which bans the display of foreign flags, including Old Glory, and forbids any talk of cargo.

All morning I watch as vocalists with a string band sing hymns about Prophet Fred while several wild-eyed women stumble around in what appears to be a trance. They faith-heal the sick by clutching the ailing area of the body and praying silently to the heavens, casting out demons. Now and then they pause to clutch with bony fingers at the sky. “They do this every Wednesday, our holy day,” Tarawai explains. “The Holy Spirit has possessed them, and they get their healing powers from him and from the sun.”

Back in Lamakara, John Frum Day dawns warm and sticky. After the flag raising, Chief Isaac and other cult leaders sit on benches shaded by palm fronds as several hundred followers take turns performing traditional dances or modern improvisations. Men and boys clad in stringy bark skirts stride onto the dancing ground clutching replicas of chain saws carved from jungle boughs. As they thump their feet in time to their own singing, they slash at the air with the make-believe chain saws. “We’ve come from America to cut down all the trees,” they sing, “so we can build factories.”

On the day before I leave Tanna, Chief Isaac and I finally climb the slippery ash slopes of Yasur, the ground trembling about every ten minutes with each thunderous explosion from within the volcano’s crater. Every ear-humming bang sends a huge plume of potentially killer gas high into the sky, a mingling of sulfur dioxide, carbon dioxide and hydrogen chloride.

Darkness brings a spectacular display, as molten lava explodes from the crater’s vents, shooting into the air like giant Roman candles. Two people were killed here by “lava bombs,” or falling chunks of volcanic rock, in 1994. Chief Isaac leads me to a spot on the crumbling rim, away from the drift of the hazardous gas but still within reach of the incandescent bombs the unpredictable volcano hurls into the air.

The chief tells me about his trip to the United States in 1995, and shows faded pictures of himself in Los Angeles, outside the White House and with a drill sergeant at a military base. He says he was astonished by the wealth of the United States, but surprised and saddened by the poverty he saw among white and black Americans alike, and by the prevalence of guns, drugs and pollution. He says he returned happily to Sulphur Bay. “Americans never show smiling faces,” he adds, “and so it seems they always think that death is never far away.”

When I ask what he most wants from America, the simplicity of his request moves me: “A 25-horsepower outboard motor for the village boat. Then we can catch much fish in the sea and sell them in the market so that my people can have a better life.”

As we look down into John Frum’s fiery Tanna home, I remind him that not only does he not have an outboard motor from America, but that all the devotees’ other prayers have been, so far, in vain. “John promised you much cargo more than 60 years ago, and none has come,” I point out. “So why do you keep faith with him? Why do you still believe in him?”

Chief Isaac shoots me an amused look. “You Christians have been waiting 2,000 years for Jesus to return to earth,” he says, “and you haven’t given up hope.”

The American Century, continued: part III

Most of life is an exercise in mismeasurement. Basing decisions on mismeasured data can lead to false conclusions about the past, present and future. The most valuable assets in life defy measurement: how does one measure the value of intellectual freedom, courage, or society's trust?




Why The Economy Is A Lot Stronger Than You Think; In a knowledge-based world, the traditional measures don't tell the story
By Michael Mandel, with Steve Hamm in New York and Christopher J. Farrell in St. Paul, Minn.

You read this magazine religiously, watch CNBC while dressing for work, scan the Web for economic reports. You've heard, over and over, about the underlying problems with the U.S. economy -- the paltry investment rate, the yawning current account deficit, the pathetic amount Americans salt away. And you know what the experts are saying: that the U.S. faces a perilous economic future unless we cut back on spending and change our profligate ways.

But what if we told you that the doomsayers, while not definitively wrong, aren't seeing the whole picture? What if we told you that businesses are investing about $1 trillion a year more than the official numbers show? Or that the savings rate, far from being negative, is actually positive? Or, for that matter, that our deficit with the rest of the world is much smaller than advertised, and that gross domestic product may be growing faster than the latest gloomy numbers show? You'd be pretty surprised, wouldn't you?

Well, don't be. Because the economy you thought you knew -- the one all those government statistics purport to measure and make rational and understandable -- actually may be on a stronger footing than you think. Then again, it could be much more volatile than before, with bigger booms and deeper busts. If true, that has major implications for policymakers -- not least Ben Bernanke, who on Feb. 1 succeeded Alan Greenspan as chairman of the Federal Reserve.

Everyone knows the U.S. is well down the road to becoming a knowledge economy, one driven by ideas and innovation. What you may not realize is that the government's decades-old system of number collection and crunching captures investments in equipment, buildings, and software, but for the most part misses the growing portion of GDP that is generating the cool, game-changing ideas. "As we've become a more knowledge-based economy," says University of Maryland economist Charles R. Hulten, "our statistics have not shifted to capture the effects."

The statistical wizards at the Bureau of Economic Analysis in Washington can whip up a spreadsheet showing how much the railroads spend on furniture ($39 million in 2004, to be exact). But they have no way of tracking the billions of dollars companies spend each year on innovation and product design, brand-building, employee training, or any of the other intangible investments required to compete in today's global economy. That means that the resources put into creating such world-beating innovations as the anticancer drug Avastin, inhaled insulin, Starbucks, exchange-traded funds, and yes, even the iPod, don't show up in the official numbers.

Now, a generation of economists who came of professional age watching the dot-com boom and bust are trying to get a grip on this shadow economy: People like Carol A. Corrado and Daniel E. Sichel of the Federal Reserve Board, who, along with Hulten, figured out that businesses are spending much more on future-oriented investments than widely believed. In a way, these economists are disciples of Greenspan, who understood earlier than most that the conventional numbers don't capture the emerging knowledge economy.

Greenspan was continually digging into arcane factoids he hoped would give him a better insight into what was going on under the hood of the U.S. economy. And Bernanke seems to understand the importance of doing the same. In a speech last year, he said that intangible investments "appear to be quantitatively important." As a result, Bernanke noted, "aggregate saving and investment may be significantly understated in the U.S. official statistics."

BEYOND WIDGETS

As Greenspan would be the first to tell you, it's a lot easier counting how many widgets the nation produces in a year than quantifying the creation and marketing of knowledge. After all, we're talking about intangibles: brand equity, the development of talent, the export of best practices.

This stuff is hard to measure, but to ignore it is to miss what the economy is telling us. And to miss that is to increase the likelihood of committing policy blunders. Including these intangible investments could provide a better picture of the economy, one that offers more advance warning of recessions, slippage in our ability to innovate, and other nasty surprises.

To understand why the government measures the economy the way it does, it helps to go back in time to the 1930s. The Great Depression had the nation in a death grip, and government planners and politicians lacked the tools to answer the big question of the day: Was the economy getting better or worse? To find out, the Commerce Dept. brought in economist Simon Kuznets, then at the National Bureau of Economic Research, to calculate for the first time the nation's income and output -- the purchasing power and production of the U.S. economy. Setting such a benchmark would allow the government to figure out if the economy was growing or shrinking.

Working with handwritten data, Kuznets and a small group of fellow economists began counting tangible things like machines and buildings as long-term investments. It made sense, since this was still the Industrial Age. And such calculations came in handy during World War II, when the Roosevelt Administration needed a fix on the nation's capacity to grind out tanks, ships, and planes.

A BREAK WITH THE PAST

Kuznets' work set the tone for the rest of the century, not to mention helping win him the Nobel Prize in Economics in 1971. Machines and buildings were counted as future-oriented investment, but spending on education, training, and R&D was not. No attempt was made to judge the social utility of expenditures. For example, the $6 million cost of building the Flamingo Hotel, the Las Vegas casino opened by Bugsy Siegel in 1946, was tallied as an investment. But AT&T's funding of Bell Labs, where the transistor was invented around the same time, wasn't even included in GDP. Kuznets himself acknowledged the limitations of his system, yet it stayed basically the same for most of the postwar period.

By the early '90s, Greenspan was becoming increasingly frustrated by the official numbers' inability to explain a rapidly evolving economy. In 1996 and 1997 he refused to accept conventional data telling him that productivity growth was falling in much of the service sector, noting -- correctly, as it turns out -- that "this pattern is highly unlikely." He also pointed out that the official numbers for consumer inflation were too high.

At the Washington offices of the BEA, J. Steven Landefeld, who became director in 1995, felt pressure to include numbers that better reflected the knowledge economy. Landefeld isn't a rash fellow, and the pace of change at the BEA, while quick for a statistical agency, would be called deliberate by most. But in 1999 -- six decades after Kuznets laid the groundwork for calculating GDP -- Landefeld and the BEA decided to break with the past.

The BEA started treating business spending on software as a long-lived investment. The decision was overdue. Companies were spending more than $150 billion annually on software, far more than the $100 billion for computer hardware. And the software often stayed in use longer than the hardware. The fact that economists could go into stores and see software in brightly colored boxes reassured them that it was real. "Prepackaged software is a lot easier" to count, recalls Landefeld.

Silly as it may seem now, it was a revolutionary change at the time. But over the past seven years the economy has continued to evolve while the numbers we use to capture it have remained the same. Globalization, outsourcing, and the emphasis on innovation and creativity are forcing businesses to shift at a dramatic rate from tangible to intangible investments.

According to BusinessWeek's calculations, the top 10 biggest U.S. corporations that report their R&D outlays -- a list that includes ExxonMobil, Procter & Gamble, General Electric, Microsoft, and Intel -- have boosted R&D spending by 42%, or almost $11 billion, since 2000. Yet over the same period, they have increased capital spending by a meager 2%, or less than $1 billion. So all together, these giants have actually increased their future-oriented investment by roughly $12 billion -- most of which doesn't show up in the BEA numbers.
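As a sanity check, the quoted percentages and dollar changes pin down what the 2000 baselines must have been. This short sketch recomputes them; the baseline spending levels are inferred from the stated changes, not figures given in the article:

```python
# Back-of-the-envelope check of the BusinessWeek figures quoted above.
# The 2000 baselines are inferred from the stated changes (assumptions),
# not taken from the article.
rd_increase_pct = 0.42        # R&D up 42% since 2000
rd_increase_usd = 11e9        # "almost $11 billion"
capex_increase_pct = 0.02     # capital spending up 2%
capex_increase_usd = 1e9      # "less than $1 billion" (rounded up here)

implied_rd_base = rd_increase_usd / rd_increase_pct           # ~$26 billion
implied_capex_base = capex_increase_usd / capex_increase_pct  # ~$50 billion
combined_increase = rd_increase_usd + capex_increase_usd      # ~$12 billion

print(f"Implied 2000 R&D base:   ${implied_rd_base / 1e9:.0f}B")
print(f"Implied 2000 capex base: ${implied_capex_base / 1e9:.0f}B")
print(f"Combined increase:       ${combined_increase / 1e9:.0f}B")
```

The striking part is the ratio: the implied R&D increase is roughly ten times the capital-spending increase, yet only the latter counts as investment in the official statistics.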

This shift to intangibles looks all the more remarkable when we look a bit further back. P&G, for example, has boosted its spending on R&D, which doesn't count as investment in the GDP statistics, by 39% since 1996. By contrast, the company's capital budget, which does factor into GDP, is no bigger today than it was back then. The same is true at spicemaker McCormick & Co., where capital spending is basically flat compared to 1996 but R&D outlays to create new products have tripled over the same period.

Want to see how this works? Grab your iPod, flip it over, and read the script at the bottom. It says: "Designed by Apple in California. Assembled in China." Where the gizmo is made is immaterial to its popularity. It is great design, technical innovation, and savvy marketing that have helped Apple Computer sell more than 40 million iPods. Yet the folks at the BEA don't count what Apple spends on R&D and brand development, which totaled at least $800 million in 2005. Rather, they count each iPod twice: when it arrives from China, and when it sells. That, in effect, reduces Apple -- one of the world's greatest innovators -- to a reseller of imported goods.
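The accounting behind that paragraph can be sketched in a few lines. In GDP's expenditure approach, the final sale adds to consumption while the import is subtracted, so the measured U.S. contribution is just the markup; meanwhile the intangible spending is expensed rather than capitalized. The per-unit dollar figures below are hypothetical illustrations, not Apple's actual costs or prices; only the $800 million figure comes from the article:

```python
# Toy illustration of the expenditure-approach accounting described above.
# Per-unit figures are hypothetical assumptions for illustration only.
import_cost = 150.0    # hypothetical landed cost of one iPod from China
retail_price = 299.0   # hypothetical U.S. retail price

# The sale adds to consumption; the import is subtracted from GDP.
# Net measured U.S. contribution per unit is just the retail markup.
measured_value_added = retail_price - import_cost   # 149.0 per unit

# The intangible spending that created the product is treated as a current
# expense, so it adds nothing to measured investment.
rd_and_brand_spending = 800e6   # "at least $800 million in 2005"
counted_as_investment = 0.0

print(f"Measured U.S. value added per unit: ${measured_value_added:.2f}")
print(f"Intangible spending counted as investment: ${counted_as_investment:.0f}")
```

Under this accounting, a firm whose entire edge is design and branding shows up in the numbers looking much like a pure reseller, which is the article's point.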

That's why the new research from Corrado, Sichel, and Hulten is so important, and why building and improving upon it could become a key goal of economists in the coming years. Ultimately, we might end up with a "knowledge-adjusted" GDP, which would track the spending so crucial for global competitiveness.

Right now, though, rough calculations of these intangibles are all we have. To help come up with their $1 trillion number for unmeasured business investment, for example, Corrado, Sichel, and Hulten counted the portion of advertising designed to have long-lived effects on perception (that would include the sort of corporate image advertising seen in this magazine). They also estimated the value of new product development in the financial-services industry, which current R&D numbers miss. "We had to hunt around for bits and pieces of data," says Hulten.

Assessing how much bang for the buck companies get from their spending on intangibles is even harder, especially in the fast-changing knowledge economy. Take employee training. In the old days, that required flying people to a teaching facility, which cost companies a lot of time on top of the cost of the instructors and real estate. Now online learning and other innovations are driving down the cost of training. At IBM, the training budget fell by $10 million from 2003 to 2004, a 1.4% decline, while the number of classroom and e-learning hours rose by 29%. Are other companies seeing an equally dramatic decline in the cost of training? No one knows.

CHANGING PERCEPTIONS

That's why the BEA doesn't want to move too fast. It plans to publish supplementary accounts for R&D in the next few years, which will track R&D spending without adding it into the official GDP numbers. Other intangibles, though, remain below the radar. "No one disagrees with this conceptually," says BEA chief Landefeld. "The problem is in the empirical measurement."

But look at how our perception of the economy changes once you add in things like R&D and brand-building. The published data show that total investment -- business, residential, and government -- has been falling over the past three decades as a share of national spending, while consumption has been rising. Add in the intangible investments provided by our three economists, and the picture changes completely.

Total investment rises, going from 23.8% of national spending in the 1970s to 25.1% in the early 2000s -- much higher than the 18.3% the conventional numbers show. That helps explain why the economy has sustained strong productivity growth, and why foreign investors continue to pour money into the U.S.

Factoring in the knowledge economy also helps us understand why the recession of 2001 seemed worse than the official statistics showed -- and why the recovery was so slow. According to the published numbers, the six-month recession of 2001 was so mild the business sector actually grew at a modest 0.4% pace that year. By 2003, however, more than 3 million private sector jobs had disappeared.

One reason for this disconnect is simple: Corporations hacked back their budgets for R&D, advertising, training, and so forth. Yes, that canceled out a ton of high-paying jobs, but had no direct effect on GDP. Remember that R&D and other intangible business investments are not currently counted as national output. Therefore, when a company laid off an engineer doing long-term product development but kept selling the same number of its old products, GDP stayed the same. Productivity even went up, because fewer workers were producing the same amount of output. And if that laid-off engineer went to work, say, building houses? National output might even have risen.

There's enough data available through 2003 to estimate how business intangibles would have changed the growth numbers. For our purposes, let's assume that overall intangible business investment followed the same path as industrial R&D and advertising, for which annual data are available. Crunch the numbers and it looks like the business sector really grew by only 0.1% in 2001, about a quarter the size of the official increase. Growth in 2002 also looks slower than the published data show.

By contrast, the conventional numbers may be understating the strength of the economy today. The BEA announced on Jan. 27 that growth in the fourth quarter of 2005 was only 1.1%. In part that was because of a smaller-than-expected increase in business capital spending. However, employment at design and management-consulting firms is up sharply in the quarter, suggesting that businesses may be spending on intangibles instead. Indeed, the consumer confidence number for January zoomed to the highest level since 2002, as Americans became more optimistic about finding jobs.

Then again, the economy may hit bigger bumps in the years ahead. When companies significantly trim their spending on R&D, design, training, and other knowledge-enhancing activities, as they did in 2001, the resulting pain in terms of job losses and reduced innovation could deepen the next downturn.

Perhaps the trickiest and most controversial aspect of the shadow economy is how it alters our assessment of international trade. The same intangible investments not counted in GDP, such as business knowhow and brand equity, are for the most part left out of foreign trade stats, too. Also largely ignored is the mass influx of trained workers into the U.S. They represent an immense contribution of human capital to the economy that the U.S. gets free of charge, which can substantially balance out the trade deficit of goods and services. "I don't know that the trade deficit really tells you where you are in the global economy," says Gary L. Ellis, chief financial officer of Medtronic Inc., a world leader in medical devices such as implantable defibrillators. "We're exporting a lot of knowledge."

Time for another real-world example. In December, Intel Corp. announced plans to build a new wafer-fabrication plant in Israel. To the statisticians, the value of that foreign investment is the book value of the plant -- that is, the cost of erecting the building and installing the chipmaking machinery.

Not counted is the systematic export of knowhow to Israel that enables that factory to operate profitably. At the core is a program called Copy Exactly!, which requires that a new fab duplicate an existing one that is working well, down to how often the plant's pumps are serviced. All of this critical information is documented and transferred from the U.S. to the new plant, but it is not picked up by the trade statistics.

The numbers don't catch Intel's exhaustive training program either. To get its new plants running quickly, the chipmaker brings 800 or 900 employees from the new fab to spend a minimum of six months in Hillsboro, Ore., where Intel develops new production processes. By the time they return home, these people will have picked up not just the details of the process but also tribal knowledge -- the unwritten lore of how Intel works. With that info in their heads, they're equipped to get the new factory up and running at high volume within a quarter, rather than taking a year or more. In economics speak, this is a classic transfer of human capital. So why isn't it called an export?

Ricardo Hausmann, director of Harvard's Center for International Development, believes it should be. He describes these cross-border flows of knowhow as "dark matter." Hausmann notes that U.S. multinationals consistently earn higher rates of return than their foreign counterparts -- an average of 6% on foreign operations since 2000, vs. the 1.2% foreign multinationals earn in the U.S., according to the latest BEA figures. From that, he infers that the multinationals are benefiting, in part, from knowledge exported from the U.S., a country with faster productivity growth than the rest of the industrialized world.

Using these arguments, Hausmann finds that the U.S. current account deficit actually disappears, averaged over time. "With globalization, you develop a blueprint and sell it in all countries," he says. "Countries that are good at creating blueprints get more exports of dark matter."

Admittedly, most trade experts are hostile to Hausmann's conclusions. A recent report from Goldman, Sachs & Co. likened Hausmann's dark matter to cold fusion. And the economists at the BEA worry that adding knowledge exports to the trade stats would make published data less useful. "I have a problem putting fabricated flows into exports," says Ralph H. Kozlow, who oversees international accounts at the BEA. "You get into an impossible statistical maze when you try to value all of this at anything that anyone would believe."

But even if Hausmann is overstating his case, he's on the right track. There's no doubt that the statistical problems are formidable, but it's also certain that the conventional trade statistics are missing a big portion of the knowledge flows that create value these days. Suppose we assume that U.S. multinationals can earn an extra percentage point of return on their foreign investments by being able to use business intangibles exported from the U.S. Then a rough estimate of the value of the unmeasured exports of knowledge is anywhere from $25 billion to $100 billion per year, depending on what assumptions are used.

And let's not forget about immigrants. The workers who move to the U.S. each year bring with them a mother lode of education and skills -- human capital -- for free. One celebrated example is Jonathan Ive, the man who designed the iPod and iMac. Ive was born in England and educated at Newcastle Polytechnic (now the University of Northumbria) before joining Apple Computer Inc. in California in 1992.

Ive is not unique. Most of the workers who immigrate to the U.S. each year have at least a high school diploma, while about a third have a college education or better. Since it costs, on average, roughly $100,000 to provide 12 years of elementary and secondary education, and another $100,000 to pay for a college degree, immigrants are providing a subsidy of at least $50 billion annually to the U.S. economy in free human capital. Alternatively, valuing their contribution to the economy by the total wages they expect to earn during their lifetime would put the value of the human capital of new immigrants closer to $200 billion per year. Either the low or high estimate would make the current account deficit look smaller.
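The article's $50 billion floor can be reproduced with a rough sketch. The per-worker schooling costs and the one-third college share below are the article's; the annual count of immigrant workers is an assumed round number chosen only for illustration:

```python
# Back-of-envelope version of the estimate above. The immigrant-worker
# count is a hypothetical assumption, not a figure from the article.
workers = 450_000              # assumed immigrant workers arriving per year
k12_cost = 100_000             # article: ~$100,000 for 12 years of schooling
college_cost = 100_000         # article: another ~$100,000 for a college degree
college_grads = workers // 3   # "about a third have a college education or better"

subsidy = workers * k12_cost + college_grads * college_cost
# $45 billion of schooling plus $15 billion of college: $60 billion under
# these assumptions, above the article's "at least $50 billion" floor.
```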

These numbers may also seem squishy. Still, if Fed chief Bernanke, corporate executives, and ordinary investors want to know where we've been, and where we're headed, tracking the creation and flow of knowledge is the only way to go.

The Shadow Economy

Intangibles such as R&D, training, education, and exports of knowledge are poorly tracked by today's statistics. Here's how better counting of intangibles would change our picture of the economy:

-- Investment is rising as a share of the economy, rather than falling.
-- The current account deficit is considerably smaller.
-- The personal savings rate in 2005 was positive, not negative.
-- The part of the federal budget devoted to current spending is in balance.
-- The 2001 recession was deeper than we thought. Current growth, however, may be stronger.

FEDERAL GOVERNMENT

Government outlays for education and R&D are incorrectly labeled as current consumption, rather than future-oriented investment.

Education/research and development     $215
Physical capital                        181
Surplus of rest of federal budget        78

* Billions of dollars; estimates for fiscal year 2005
Data: Office of Management and Budget, Congressional Budget Office

BUSINESS

Business investment in intangibles such as product development and training is critical for long-term profitability, but it doesn't get counted in GDP.

Unmeasured intangibles            $978
Physical capital and software     1139

Billions of dollars; average for 2000-2003
Data: Corrado, Hulten, Sichel

FAMILY

Household outlays for education, the most important investment in the future of the next generation, are improperly counted as consumption in the published data.

Education                                  $224
Personal savings, as officially measured    -42

Billions of dollars, 2005
Data: Bureau of Economic Analysis

THE REST OF THE WORLD

The foreign trade statistics do not reflect the human capital being brought into the country by skilled immigrants. The official numbers also do not show the flows of knowhow that enable U.S. multinationals to reap high returns on their overseas operations.

Unmeasured inflows of human capital    $50-$200*
Unmeasured exports of knowhow          $25-$100*

* Billions of dollars; annual average for 2000-2004
Data: BusinessWeek

Big Companies Go Intangible

Companies are putting more emphasis on R&D and less on capital investment.
Since 2000, the "intangibility index" -- the ratio of R&D to capital
spending, multiplied by 100 -- has risen for 9 of the 10 biggest U.S.
companies that report R&D.

                      INTANGIBILITY INDEX*
COMPANY               2000        LATEST**

ExxonMobil 5.1 4.4
GE*** 73.6 100.7
Microsoft 429.1 761.6
Procter & Gamble 62.9 89.0
Pfizer 211.0 295.4
Johnson & Johnson 183.8 239.2
Altria 32.0 42.3
ChevronTexaco 2.2 2.9
Intel 58.4 88.4
IBM 95.6 129.9
ALL 10 56.8 79.1

PERCENTAGE CHANGE OVERALL, 2000-LATEST**

R&D spending +42.1%
Capital spending +2.1
* Capital spending for oil companies includes expenditures for exploration
as well.
** Latest year for which R&D and capital spending are both available.
*** Excluding GE Capital Services
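The index in the table above is a simple ratio, and can be sketched directly. The dollar figures in the usage comment are hypothetical, chosen only to illustrate how the ratio behaves:

```python
def intangibility_index(rd_spending, capital_spending):
    """BusinessWeek's "intangibility index": R&D spending as a
    share of capital spending, multiplied by 100."""
    return 100.0 * rd_spending / capital_spending

# Hypothetical example: $2B of R&D against $4B of capex gives an index of 50.
# An index above 100 (e.g. GE's 100.7 or Microsoft's 761.6 in the table)
# means a company now spends more on R&D than on plant and equipment.
```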

Sunday, February 05, 2006

Will knowing the truth mean freedom?

February 5, 2006
Looking for the Lie

By ROBIN MARANTZ HENIG

Liars always look to the left, several friends say; liars always cover their mouths, says a man sitting next to me on a plane. Beliefs about how lying looks are plentiful and often contradictory: depending on whom you choose to believe, liars can be detected because they fidget a lot, hold very still, cross their legs, cross their arms, look up, look down, make eye contact or fail to make eye contact. Freud thought anyone could spot deception by paying close enough attention, since the liar, he wrote, "chatters with his finger-tips; betrayal oozes out of him at every pore." Nietzsche wrote that "the mouth may lie, but the face it makes nonetheless tells the truth."

This idea is still with us, the notion that liars are easy to spot. Just last month, Charles Bond, a psychologist at Texas Christian University, reported that among 2,520 adults surveyed in 63 countries, more than 70 percent believe that liars tend to avert their gazes. The majority also believe that liars squirm, stutter, touch or scratch themselves or tell longer stories than usual. The liar stereotype exists in just about every culture, Bond wrote, and its persistence "would be less puzzling if we had more reason to imagine that it was true." What is true, instead, is that there are as many ways to lie as there are liars; there's no such thing as a dead giveaway.

Most people think they're good at spotting liars, but studies show otherwise. A very small minority of people, probably fewer than 5 percent, seem to have some innate ability to sniff out deception with accuracy. But in general, even professional lie-catchers, like judges and customs officials, perform, when tested, at a level not much better than chance. In other words, even the experts would have been right almost as often if they had just flipped a coin.

In the middle of the war on terrorism, the federal government is not willing to settle for 50-50 odds. "Credibility assessment" is the new catch phrase, which emerged at about the same time as "red-level alert" and "homeland security." Unfortunately, most of the devices now available, like the polygraph, detect not the lie but anxiety about the lie. The polygraph measures physiological responses to stress, like increases in blood pressure, respiration rate and electrodermal skin response. So it can miss the most dangerous liars: the ones who don't care that they're lying, don't know that they're lying or have been trained to lie. It can also miss liars with nothing to lose if they're detected, the true believers willing to die for the cause.

Responding to federal research incentives, a handful of scientists are building a cognitive theory of deception to show what lying looks like — on a liar's face, in a liar's demeanor and, most important, in a liar's brain. The ultimate goal is a foolproof technology for deception detection: a brain signature of lying, something as visible and unambiguous as Pinocchio's nose.

Deception is a complex thing, evanescent and difficult to pin down; it's no accident that the poets describe it with diaphanous imagery like "tangled web" and "tissue of lies." But the federal push for a new device for credibility assessment leaves little room for complexity; the government is looking for a blunt instrument, a way to pick out black and white from among the duplicitous grays.

Nearly a century ago the modern polygraph started out as a machine in search of an application; it hung around for lack of anything better. But the polygraph has been mired in controversy for years, with no strong scientific theory to adequately explain why, or even whether, it works. If the premature introduction of a new machine is to be avoided this time around, the first step is to do something that was never done with the polygraph, to develop a theory of the neurobiology of deception. Two strands of scientific work are currently involved in this effort: brain mapping, which uses the 21st century's most sophisticated techniques for visualizing patterns of brain metabolism and electrical activity; and face reading, which uses tools that are positively prehistoric, the same two eyes used by our primate ancestors to spot a liar.

As these two strands, the ancient and the futuristic, contribute to a new generation of lie detectors, the challenge will be twofold: to resist pressure to introduce new technologies before they are adequately tested and to fight the overzealous use of these technologies in places where they do not belong — to keep inviolable that most private preserve of our ordinary lives, the place inside everyone's head where secrets reside.


The Five-of-Clubs Lie

The English language has 112 words for deception, according to one count, each with a different shade of meaning: collusion, fakery, malingering, self-deception, confabulation, prevarication, exaggeration, denial. Lies can be verbal or nonverbal, kindhearted or self-serving, devious or baldfaced; they can be lies of omission or lies of commission; they can be lies that undermine national security or lies that make a child feel better. And each type might involve a unique neural pathway.

To develop a theory of deception requires parsing the subject into its most basic components so it can be studied one element at a time. That's what Daniel Langleben has been doing at the University of Pennsylvania. Langleben, a psychiatrist, started an experiment on deception in 2000 with a simple design: a spontaneous yes-no lie using a deck of playing cards. His research involved taking brain images with a functional-M.R.I. scanner, a contraption not much bigger than a kayak but weighing 10 tons. Unlike a traditional M.R.I., which provides a picture of the brain's anatomy, the functional M.R.I. shows the brain in action. It takes a reading, every two to three seconds, of how much oxygen is being used throughout the brain, and that information is superimposed on an anatomical brain map to determine which regions are most active while performing a particular task.

There's very little about being in a functional-M.R.I. scanner that is natural: you are flat on your back, absolutely still, with your head immobilized by pillows and straps. The scanner makes a dreadful din, which headphones barely muffle. If you're part of an experiment, you might be given a device with buttons to press for "yes" or "no" and another device with a single panic button. Not only is the physical setup unnatural, but in most deception studies the experimental design is unnatural, too. It is difficult to replicate the real-world conditions of lying — the relationship between liar and target, the urgency not to get caught — in a functional-M.R.I. lab, or in any other kind of lab. But as an early step in mapping the lying brain, such artificiality has to suffice.

In Langleben's first deception study at Penn, the subjects were told at the beginning of the experiment to lie about a particular playing card, the five of clubs. To be sure the card carried no emotional weight, Langleben screened out compulsive gamblers from the group. One at a time, the subjects lay motionless in the scanner, watched pictures of playing cards flash onto a screen and pressed a button indicating whether they had that card or not. When an image of a card they didn't have came up, the subjects, as they had been instructed, told the truth and pressed "no." But when an image of the five of clubs came up, they also pressed "no," even though the card was in their pockets. That is, whenever they saw the five of clubs, they lied.

According to Langleben, certain regions of the brain were more active on average when his 18 subjects were lying than when they were telling the truth. Lying was associated with increased activity in several areas of the cortex, including the anterior cingulate cortex and the superior frontal gyrus. "We didn't have a map of deception in the brain — we still don't — so we didn't know exactly what this meant," Langleben said. "But that wasn't the question we were asking at the time in any case. What we were asking with that first experiment was, 'Can the difference in brain activity between lie and truth be detected by functional M.R.I.?' Our study showed that it can." He said that the prefrontal cortex — the reasoning part of the brain — was generally more aroused during lying than during truth-telling, an indication that it took more cognitive work to lie.

Brain mappers are just beginning to figure out how different parts of the brain function. The function of one region found to be activated in the five-of-clubs experiment, the anterior cingulate cortex, is still the subject of some debate; it is thought, among other things, to help a person choose between two conflicting responses, which makes it a logical place to look for a signature of deception. This region is also activated during the Stroop task, in which a series of words are written in different colors and the subject must respond with what color the ink is, disregarding the word itself. This is harder than it sounds, at least when the written word is a color word that is different from the ink it is written in. If the word "red" is written in blue, for instance, a lot of people say "red" instead of "blue." Telling a spontaneous lie is similar to the Stroop task in that it involves holding two things in mind simultaneously — in this case, the truth and the lie — and making a choice about which one to apply.

Langleben performed his card experiment again in 2003, with a few refinements, including giving his subjects the choice of two cards to lie about and whether to lie at all. This second study found activation in some of the same regions as the first, establishing a pattern of deception-related activity in particular parts of the cortex: one in the front, two on the sides and two in the back. The finding in the back, the parietal cortex, intrigued Langleben.

"At first I thought the parietal finding was a fluke," he said. The parietal cortex is usually activated during arousal of various kinds. It is also involved in the manifestation of thoughts as physical changes, like goose bumps that erupt when you're afraid, or sweating that increases when you lie. The connection to sweating interested Langleben, since sweating is also one of the polygraph's hallmark measurements. He looked at existing studies of this response, and in all of them he found activity that could be traced back to the parietal lobe. Until Langleben's observation of its connection to brain changes, the sweat response (which the polygraph measures with sensors on the palm or fingertips) had been thought to be a purely "downstream" change, a secondary effect caused not by the lie itself but by the consequences of lying: guilt, anxiety, fear or the excess positive emotion one researcher calls "duping delight." But Langleben's findings indicated that it might have a corollary "upstream," in the central nervous system. This meant that at least one polygraph measurement might have a signature right at the source of the lie, the brain itself.

So there it was: the first intimation of a Pinocchio response.

The parietal-cortex finding, while speculative, is "interesting to pay attention to because of its relationship to the polygraph," Langleben said. "In this way, we might not have to cancel the polygraph. We may be able to put it on firm neuroscience footing."


One Lie Zone or Many?

Over at Harvard, Stephen Kosslyn, a psychologist, was looking at the map Langleben was starting to build and found himself troubled by the connection between deception and the anterior cingulate cortex. "Yes, it lights up during spontaneous lying," Kosslyn said, but it also lights up during other tasks, like the Stroop task, that have nothing to do with deception. "So it couldn't be the lie zone." Deception "is a huge, multidimensional space," he said, "in which every combination of things matters." Kosslyn began by thinking about the different dimensions, the various ways that lies differ from one another in terms of how they are produced. Is the lie about you, or about someone else? Is it about something you did yesterday or something your friend plans to do tomorrow? Do you feel strongly about the lie? Are there serious consequences to getting caught? Each type of lie might lead to activation of particular parts of the brain, since each type involves its own set of neural processes.

He decided to compare the brain tracings for lies that are spontaneous, like those in Langleben's study, with those that are rehearsed. A spontaneous lie comes when a mother asks her teenage son, "Did you do your math homework?" A rehearsed lie comes when she asks him, "Why are you coming home an hour past your curfew?" The question about the homework probably surprises him, and he has to lie on the fly. The question about the curfew was probably one he had been anticipating, and concocting an answer to, for most of the previous hour.

Kosslyn's working hypothesis was that different brain networks are used during spontaneous lying than are used during truth-telling or the telling of a memorized lie. Spontaneous lying requires the liar not only to generate the lie and keep the lie in mind but also to keep in mind what the truth is, to avoid revealing it by mistake. In contrast, Kosslyn said, a rehearsed lie requires only that an individual retrieve the lie from memory, since the work of establishing a credible lie has already been done.

To help his subjects generate meaningful lies to memorize, Kosslyn asked them to provide details about one notable work experience and one vacation experience. Then he helped them construct what he called an "alternative-reality scenario" about one of them. (The other experience he held in reserve as the basis for his subject's unrehearsed spontaneous lies.) If the experience was a vacation in Miami, for instance, Kosslyn changed it to San Diego; if the person had gone there to visit a sister, he changed it to a visit to Uncle Sol. Kosslyn had the participants practice the false scenario for a few hours, and then he put them into a scanner at Harvard's functional-M.R.I. facility. There were 10 subjects altogether, all in their 20's.

As he predicted, Kosslyn found that as far as the brain was concerned, spontaneous and rehearsed lies were two different things. They both involved memory processing, but of different kinds of memories, which in turn activated different regions of the cortex: one part of the frontal lobe (involved in working memory) for the spontaneous lie; a different part in the right anterior frontal cortex (involved in retrieving episodic memory) for the lie that was rehearsed. That's not much of a map yet, but it is a cumulative movement toward a theory of deception: that lying involves different cognitive work than truth-telling and that it activates several regions in the cerebral cortex that are also activated during certain memory and thinking tasks.

Even as these small bits of data emerge through functional-M.R.I. imagery, however, Kosslyn remains skeptical about the brain-mapping enterprise as a whole. "If I'm right, and deception turns out to be not just one thing, we need to start pulling the bird apart by its joints and looking at the underlying systems involved," he said. A true understanding of deception requires a fuller knowledge of functions like memory, perception and visual imagery, he said, aspects of neuroscience investigations not directly related to deception at all.

In Kosslyn's view, brain mapping and lie detection are two different things. The first is an academic exercise that might reveal some basic information about how the brain works, not only during lying but also during other high-level tasks; it uses whatever technology is available in the sophisticated neurophysiology lab. The second is a real-world enterprise, best accomplished not necessarily by using elaborate instruments but by encouraging people "to use their two eyes and brains." Searching for a "lie zone" of the brain as a counterterrorism strategy, he said, is like trying to get to the moon by climbing a tree. It feels as if you're getting somewhere because you're moving higher and higher. But then you get to the top of the tree, and there's nowhere else to go, and the moon is still hundreds of thousands of miles away. Better to have stayed on the ground and really figured out the problem before setting off on a path that looks like progress but is really nothing more than motion. Better, in this case, to discover what deception looks like in the brain by breaking it down into progressively smaller elements, no matter how artificial the setup and how tedious the process, before introducing a lie-detection device that doesn't really get you where you want to go.


Your Brain Waves Know You're a Liar

Even the most enthusiastic brain mappers probably agree with one aspect of Kosslyn's skeptical analysis: a true brain map of lying is, at best, elusive. Part of the difficulty comes from the technology itself. In the world of brain mapping, a functional-M.R.I. scan paints a picture that is broad and, in its way, lumbering. It can indicate which region of the brain is active, but it can take a reading no more frequently than once every two seconds.

For a more refined picture of cognitive change from one instant to the next, scientists have turned to the electroencephalogram, which detects neural impulses on the scale of milliseconds. But while EEG's might be ideal to answer "when" questions about brain activity, they are not so good at answering questions about "where." Most EEG's use 10 or 12 electrodes attached by a tacky glue at scattered spots on the scalp, which record electrical impulses firing from the brain as a whole. They give little indication of which region is doing the firing.

That's why deception researchers use a refined version of the ordinary EEG, which increases the number of electrodes from 12 to 128. These 128 electrodes, each the size of a typewriter key, are studded around a stretchy mesh cap. Using the cap, investigators can trace where electrical impulses are coming from when a person lies.

The cap is unwieldy and uncomfortable — definitely not ready yet for the world outside the laboratory. Jennifer Vendemia, a psychologist at the University of South Carolina, has been using the cap since 2000, when she began studying deception by looking at a particular class of brain wave known as E.R.P., for event-related potential. The E.R.P. wave represents electrical activity in response to a stimulus, usually 300 or 400 milliseconds after the stimulus is shown. It can be a sign that high-level cognitive processes, like paying attention and retrieving memories, are taking place.

Vendemia has studied deception and E.R.P. waves in 626 undergraduates. She outfits them with the electrode cap and a plastic barbershop cape, which is necessary because, in order to maintain an electrical circuit, each of the 128 electrodes has to be thoroughly soaked. The cap is sopping wet when she puts it on her subjects, and during the experiment Vendemia occasionally comes into the room with a squirter and soaks it down some more.

Vendemia presented her subjects with a series of true-false statements, like "The grass is green" and "A snake has 13 legs," which they were instructed to answer either truthfully or deceptively, depending on which color the statement was written in. The subjects took a longer time — up to 200 milliseconds longer, on average — to lie than to tell the truth. They revealed a change in certain E.R.P. waves while they were lying, especially in the regions of the brain that the functional-M.R.I. scanners also focused on as possible lie zones: the parietal and medial regions of the brain, along the top and middle of the head.

"E.R.P. has the advantage of being a little more portable, and substantially less expensive, than M.R.I.," Vendemia said. "But E.R.P. cannot do some of the things that functional M.R.I. can do. If you're trying to model the brain, you really need both techniques."

One thing E.R.P. might eventually be able to do is predict whether someone intends to lie — even before he or she has made a decision about it. This brings us into sci-fi territory, into the realm of mind reading. When Vendemia has a subject in an E.R.P. cap, she can detect the first brain-wave changes within 240 to 260 milliseconds after a true-false statement appears on a computer screen. But these changes are an indication of intention, not action; it can take 400 to 600 milliseconds for a person to decide whether to respond with "true" or "false." "With E.R.P., I've taken away your right to make a decision about your response," Vendemia said. "It's the ultimate invasion." If someone knows before you do what your brain is indicating as your intention, is there any room left, in that window of a few hundred milliseconds, for the exercise of free will? Or have you already been labeled a liar by your spontaneous brain waves, without your having a chance to override them and choose a different path?

Lies make secrets possible; they let us carve out a private territory that no one, not even those closest to us, can enter without our permission. Without lies, there can be no such sanctuary, no interior life that is completely and inviolably ours. Do we want to allow anyone, whether a government interrogator or a beloved spouse, unfettered access to that interior life?


How Lies Leak Into the Open

Even a practiced lie-catcher like Paul Ekman recognizes that lying is a matter of privacy. "I don't use my ability to spot lies in my personal life," said Ekman, emeritus professor of psychology at the University of California, San Francisco. If his wife or two grown children want to lie to him, he said, that's their business: "They haven't given me the right to call them on their lies."

In his book "Telling Lies," Ekman underscored this point. His Facial Action Coding System, a precise categorization of the 10,000 or so expressions that are created by various combinations of 43 independent muscles in the face, allows him to do the same kind of mind reading that Vendemia can do with her E.R.P. cap. Facial expressions are hard-wired into the brain, according to Ekman, and can erupt without an individual's awareness about 200 milliseconds after a stimulus. Much like E.R.P. waves, then, a facial expression can give away your feelings before you are even aware of them, before you have made a conscious decision about whether to lie about those feelings or not. "Detecting clues to deceit is a presumption," Ekman wrote. "It takes without permission, despite the other person's wishes."

But in many situations, it's important to know who's lying to you, whether the liar wants you to know or not. And for those times, Ekman said, his system of lie detection can be taught to anyone, with an accuracy rate of more than 95 percent. His holistic perspective is almost the polar opposite of that of brain mappers like Langleben and Vendemia: instead of focusing on the liar's neurons, Ekman takes a long, hard look at the liar's face.

The Facial Action Coding System is the key to Ekman's strategy. Basic emotions lead to characteristic facial expressions, which only a handful of really good liars manage to conceal. Part of lying is putting on a false face that's consistent with the lie. But even practiced liars, according to Ekman, may not always be able to control the "leakage" of their true feelings, which flit across the face in microexpressions that last less than half a second. These microexpressions indicate an incongruity between the liar's words and his emotions. "It doesn't mean he's lying necessarily," Ekman said. "It's what I call a 'hot spot,' a point of discontinuity that deserves investigation."

Ekman teaches police investigators, embassy officials and others how to spot liars, including how to read these microexpressions. He begins by showing photos of faces in apparently neutral poses. In each face, a microexpression appears for 40 milliseconds, and the trainee has to press a button to indicate which emotion was in that microexpression: fear, anger, surprise, happiness, sadness, contempt or disgust. When I took the pretest to measure my innate lie-detecting capabilities, I could see the microexpressions in about 70 percent of the examples. But after about 15 minutes of training, I improved. The training session let me stop the action if I missed a question, since Ekman's idea is that if you know what you're looking for — and the microexpressions, when frozen, are vivid and easy to name — you can spot them even when they flash by in an instant. In the post-training test, I scored an 86 percent.

In addition to microexpressions, Ekman said, certain aspects of a person's demeanor can indicate whether he is lying. Voice, hand movements, posture, speech patterns: when these vary from how the person usually speaks or gesticulates, or when they don't fit the situation, that's another hot spot to explore. Word choices often change with lying, too: speakers tend to use "distancing language," with fewer first-person pronouns and more references in the third person. Also common are what Ekman calls "verbal hedges," which liars might use to buy time as they figure out what they want to say. To illustrate a verbal hedge, Ekman pointed to one of the many cartoons he uses in his workshops: a shark standing in a courtroom, looking up at the judge and saying, "Define 'frenzy."'

Ekman enjoys using these insights to unmask the lies of public figures (though he has a rule that prohibits him from commenting on any elected official currently in office, no matter how tempting a target). At his home in the Oakland Hills, he has a videotape library of some of the most notable lies of recent history, and he showed me how to watch one when I visited last fall. It was from a presidential news conference in early 1998, during the first days of the Monica Lewinsky scandal. Ekman smiled as he watched it; he knows this clip well. "I want you to listen to me," President Bill Clinton was saying, shaking his forefinger like a schoolmarm. "I did not have sexual relations with that woman."

There it was: the president's "distancing language," calling Lewinsky "that woman," and an almost imperceptible softening of his voice at the end of the sentence. When this news conference was originally broadcast, Ekman said, "everyone I had ever trained from all over the country called me and said: 'Did you see the president? He's lying."'

Even though Ekman has been hired to teach his technique to embassy workers and military intelligence officers — to the tune of $35,000 for a five-day workshop — his low-tech approach to lie-catching is definitely out of vogue. "After 9/11," he said, "I contacted different federal agencies — the Defense Department, the C.I.A. — and said, 'I think there are some things I can teach your agents that can be of help right now."' But several turned him down, he said, with one person bluntly stating, "I can't support anything unless it ends in a machine doing it."


The First, Flawed Machine

The quest for such a machine has roots in the early 20th century, when the first modern lie detector, a rudimentary polygraph, was introduced. The man often cited as its inventor, William Moulton Marston, was a Harvard-trained psychologist who went on to make his mark as the creator of the comic-book character Wonder Woman. Not coincidentally, one of Wonder Woman's most potent weapons was her Magic Lasso, which made it impossible for anyone in its grip to tell a lie.

Marston spent 20 years trying to get his machine used by the military, in courts and even in advertising. After the success of Wonder Woman, however, he used it mostly for entertainment. His comic-book editor, Sheldon Mayer, recalled being hooked up to a polygraph during a party at Marston's home. After a few warm-up questions, Marston tossed him a zinger, "Do you think you're the greatest cartoonist in the world?"

As Mayer wrote in his memoir, "I felt I was being quite truthful when I said no, and it turned out I was lying!" What an interesting reaction — even if, as was likely, Mayer was just trying to be funny. Because how prescient, really, to joke that the machine must have been right, that the machine knew more about Mayer than he did himself. It's the power of a simple mechanical device to make you doubt your own concept of truth and lie — "It turned out I was lying" — that made the polygraph so alluring, and so disturbing. And it's that power, combined with the idea that the machines are peering directly into the brain, that makes the polygraph's modern counterparts even more so.

Today, the polygraph is the subject of much controversy, with organizations devoted to publicizing "countermeasures" — ways to subvert the results — to prove how unreliable it is. But the American Polygraph Association says it has "great probative value," and police departments still use it to help focus their criminal investigations and to try to extract confessions. The polygraph is also used to screen potential and current federal employees in law enforcement and for security clearances, although private employers are prohibited from using it as a pre-employment screen. Polygraphists are also routinely brought in to investigate such matters as insurance fraud, corporate theft and contested divorce.

But there is little scientific evidence to back up the accuracy of the polygraph. "There has been no serious effort in the U.S. government to develop the scientific basis for the psychophysiological detection of deception by any technique," stated a report issued by the National Research Council in 2003. Polygraph research has been "managed and supported by national security and law enforcement agencies that do not operate in a culture of science," the council said, suggesting that these are not the best settings for an objective assessment of any device's pros and cons.

The polygraph has many cons. It requires a suspect who is cooperative, feels guilty or anxious about lying and hasn't been educated about the various countermeasures that can thwart the results. Polygraph results can be more reliable in investigations in which the questioners already know what they're looking for, allowing them to develop a line of questioning that leads to something like the Guilty Knowledge Test: a multiple-choice test in which the answer is something only a guilty person would know — and only a guilty person's polygraph readings would indicate arousal upon hearing it.

The history of polygraphs is a cautionary tale, an example of how not to introduce the next generation of credibility-assessment devices. "Security and law enforcement agencies need to improve their capability to independently evaluate claims proffered by advocates of new techniques for detecting deception," the National Research Council said. "The history of the polygraph makes clear that such agencies typically let clinical judgment outweigh scientific evidence."


Thermal Scanners, Eye Trackers and Pupillometers

History is in some danger of repeating itself at the site of the government's most focused effort to look for the next generation of lie detectors, the Department of Defense Polygraph Institute. This is where the brain mapping of the academic investigators is turned into practical machinery. Scientists at Dodpi (pronounced DOD-pie) are an inventive bunch, investigating instruments that measure the body's emission of heat, light, vibration or any other physiological properties that might change when someone tells a lie.

The Dodpi facility sits at one end of the huge Army base at Fort Jackson, S.C., where Army recruits en route to Iraq go for basic training. Among the new machines being studied is a thermal scanner, in which a computer image of a person's face is color-coded according to how much heat it emits. The region of interest, just inside each eye, grows hotter when a person lies. It also grows hotter during many other cognitive tasks, however, so a more specific signature for deception might be required to keep the thermal scanner from falling prey to the same problems of imprecision as the polygraph.

Another machine is the eye tracker, which follows a person's gaze — its fixation, duration, rapid eye movements and scanning path — to determine if he's looking at something he has seen before. It can be thought of as a mute version of the Guilty Knowledge Test.

Other high-tech deception detectors — many of them capable of remote operation, so they could theoretically be used without a suspect's knowledge — are being developed at laboratories across the country, with financing from agencies like Dodpi, the Department of Homeland Security and the Defense Advanced Research Projects Agency. (Defense Department officials will not reveal the amount they spend on credibility assessment, nor the degree to which the budget has increased since 9/11, because some of the research is classified.) The detectors look for increases in physiological processes that are associated with lying: a sniffer test that measures levels of stress hormones on the breath, for instance, a pupillometer that measures pupil dilation and a near-infrared-light beam that measures blood flow to the cerebral cortex.

With this push for an automated lie detector, some observers worry that we'll see a replay of the polygraph experience: the marketing of a halfway technology not quite capable of separating lying from other cognitive or emotional tasks. The polygraph was a machine in search of an application, and it became entrenched in criminal justice more out of habit than out of proved efficacy. This could easily happen again, as credibility assessment is being lauded as a crucial counterterrorism tool.

"The fear is that because so much money has been put into homeland security, that people may be trying to find quick solutions to complex problems by buying something," said Tom Zeffiro, a neurologist at Georgetown University and chairman of a workshop on high-tech credibility assessment sponsored last summer by the National Science Foundation. "And technology that might not be thoroughly evaluated might be put into practice." Already there are efforts to sell computer algorithms and devices that some scientists believe to be insufficiently tested, products with names like Brain Fingerprinting and No Lie M.R.I. Zeffiro said that one of the workshop's suggestions was to establish a neutral testing laboratory to keep such products from being used commercially before there is at least some minimum amount of evidence that they work.

Big Brother concerns hover in the background, too, with some of these instruments, especially the smallest ones. It is sobering to think that we might be moving toward a society in which hidden sensors are trying, in one way or another, to read our minds. At Dodpi, however, scientists don't seem to fret much about such things. "The operational use of what we develop is not something we think about," said Andrew Ryan, a former police psychologist who is the head of research at Dodpi. "Our job is to develop the science. Once that science is developed, how it's used is up to other people."


The Smudge of Ordinary Lies

Each day we walk a fine line between deception and discretion. "Everybody lies," Mark Twain wrote, "every day; every hour; awake; asleep; in his dreams; in his joy; in his mourning."

First there are the lies of omission. You go out to dinner with your sister and her handsome new boyfriend, and you find him obnoxious. When you and your sister discuss the evening later, isn't it a lie for you to talk about the restaurant and not about the boyfriend? What if you talk about his good looks and not about his offensive personality?

Then there are the lies of commission, many of which are harmless, the lies that allow us to get along with one another. When you receive a gift you can't use, or are invited to lunch with a co-worker you dislike, you're likely to say, "Thank you, it's perfect" or "I wish I could, but I have a dentist's appointment," rather than speak the harsher truth. These are the lies we teach our children to tell; we call them manners. Even our automatic response of "Fine" to a neighbor's equally automatic "How are you?" is often, when you get right down to it, a lie.

More serious lies can have a range of motives and implications. They can be malicious, like lying about a rival's behavior in order to get him fired, or merely strategic, like not telling your wife about your mistress. Not every one of them is a lie that needs to be uncovered. "We humans are active, creative mammals who can represent what exists as if it did not, and what doesn't exist as if it did," wrote David Nyberg, a visiting scholar at Bowdoin College, in "The Varnished Truth." "Concealment, obliqueness, silence, outright lying — all help to hold Nemesis at bay; all help us abide too-large helpings of reality."

Learning to lie is an important part of maturation. What makes a child able to start telling lies, usually at about age 3 or 4, is that he has begun developing a theory of mind, the idea that what goes on in his head is different from what goes on in other people's heads. With his first lie to his mother, the power balance shifts imperceptibly: he now knows something she doesn't know. With each new lie, he gains a bit more power over the person who believes him. After a while, the ability to lie becomes just another part of his emotional landscape.

"Lying is just so ordinary, so much a part of our everyday lives and everyday conversations," that we hardly notice it, said Bella DePaulo, a psychologist at the University of California, Santa Barbara. "And in many cases it would be more difficult, challenging and stressful for people to tell the truth than to lie."

In the 1990's, DePaulo asked 147 people to keep a diary of their social interactions for one week and to note "any time you intentionally try to mislead someone," either verbally or nonverbally. At the end of the week, the subjects had lied, on average, 1.5 times a day. "Lied about where I had been," read a diary entry. "Said that I did not have change for a dollar." "Told him I had done poorly on my calculus homework when I had aced it." "Said I had been true to my girl."

People didn't feel guilty about these lies, by and large, but lying still left them with what DePaulo called a "smudge," a sort of smarmy aftertaste. Her subjects reported feeling less positive about their interactions with people to whom they had lied than to people to whom they had not lied.

Still, DePaulo said that her research led her to believe that not all lying is bad, that it often serves a perfectly respectable purpose; in fact, it is sometimes a nobler, or at least kinder, option than telling the truth. "I call them kindhearted lies, the lies you tell to protect someone else's life or feelings," DePaulo said. A kindhearted lie is when a genetic counselor says nothing when she happens to find out, during a straightforward test for birth defects, that a man could not possibly have fathered his wife's new baby. It's when a neighbor lies about hiding a Jewish family in Nazi-occupied Poland. It's when a doctor tells a terminally ill patient that the new chemotherapy might work. And it's when a mother tells her daughter that nothing bad will ever happen to her.

"We found in our studies that these were the lies that women most often told to other women," DePaulo said. "Women are the ones saying, 'You did the right thing,' 'I know just how you feel,' 'That was a lovely dinner,' 'You look great.' I don't think they're doing that because they think the truth is unimportant or because they have a casual attitude toward lying. I think they just value their friends' feelings more than they value the truth."

If the search for an all-purpose lie detector were successful, and these everyday lies were uncovered along with the threatening or malicious ones, we might, paradoxically, end up feeling a little less safe than we felt before. It would be destabilizing indeed to be stripped of the half-truths and delusions on which social life depends.


Does Lying Make Us Smarter?

Personally, I cannot tell a lie. This is not to say that I never lie; I'm just not very good at it. My lies are mostly lies of omission, secrets that I choose not to talk about at all, because when I do say something deceptive, people usually see right through me.

I realize that my honesty comes as much from ineptitude as from integrity. In fact, my own dirty little secret is that I wish I were a better liar; I think it would make me a more interesting person, maybe even a better writer. Still, when I told Paul Ekman that I hardly ever lie, I detected in my voice an unseemly amount of pride.

Ekman's response surprised me. He is, after all, one of the nation's leading experts on spotting liars; I expected him to nod sagely, approving of me and my truthful ways. Instead, he listened to my veiled boast with a patient little smile. Then, without missing a beat, he started enumerating the qualities that are required to lie. To lie, he told me, a person needs three things: to be able to think strategically and plan her moves ahead of time, like a good chess player; to read the needs of other people and put herself in their shoes, like a good therapist; and to manage her emotions, like a grown-up person.

So had this very nice, polite, accomplished man dissed me? Had Ekman told me, indirectly, that bad liars like me are immature, unempathetic and not especially bright? Had he pointed out that the skills of lying are the same skills involved in the best human social interactions?

Probably. (Would he tell me the truth if I asked him?) Deception is, after all, one trait associated with the evolution of higher intelligence. According to the Machiavellian Intelligence Hypothesis, developed by Richard Byrne and Andrew Whiten, two Scottish primatologists at the University of St. Andrews in Fife, the more social a species, the more intelligent it is. The hypothesis holds that as social interactions became more and more complex, our primate ancestors evolved so they could engage in the trickery, manipulation, skulduggery and sleight of hand needed to live in large social groups, which helped them avoid predators and survive.

"All of a sudden, the idea that intelligence began in social manipulation, deceit and cunning cooperation seems to explain everything we had always puzzled about," Byrne and Whiten wrote. In 2004, Byrne and another colleague, Nadia Corp, looked at the brains and behavior of 18 primate species and found empirical support for the hypothesis: the bigger the neocortex, the more deceptive the behavior.

But even if liars are smarter and more successful than the rest of us, most people I know seem to be like me: secretly smug about their honesty, rarely admitting to telling lies. One notable exception is a friend who blurted out, when she heard I was writing this article, "I lie all the time."

Lying started early for this friend, she wrote later in an e-mail message. She grew up in a big, competitive family (14 siblings and step-siblings in all), and lying was the easiest way to get a word in edgewise. "There were so many of us," she wrote, "asserting knowledge became even more prized than the knowledge itself — because you were heard." If she didn't know something, she made it up. It got her siblings' attention.

These days, my friend, who is a novelist, claims that she lies almost reflexively. Maybe it gives her power to have information she's not sharing with her loved ones, she wrote; maybe it's just something left over from her childhood struggle for attention and from her writerly need to flex her imagination. "If I'm on the phone and my husband walks in and says, 'Who's that?' I might say Jill if it's Joan," she wrote to me. "Or if my mother asks me who I had lunch with, I'll tell her Caroline when it was Alice, for no good reason. This kind of useless lying is harmless except when I get caught — and I do get caught, because keeping track of useless lies is both daunting and exhausting."

My friend might be aware of getting caught, but habitual liars like her are probably harder to spot than neophytes like me. People who lie a lot are good at it, and they don't worry as much as I do about getting caught. And no matter what device or technique of lie detection is used, the liar who doesn't strain at her deception is still less likely to be fingered than the liar who does.

In a recent study looking at the brain anatomy of pathological liars versus nonliars, researchers at the University of Southern California found that the liars had more white matter in their prefrontal cortexes. The investigators found their subjects partly through self-identification, an odd choice in a study of pathological liars. But it was an intriguing finding nonetheless. "White matter is pivotal to the connectivity and cognitive function of the human brain," Sean Spence, a deception researcher and psychiatrist at the University of Sheffield, wrote in an editorial accompanying the study's publication in the British Journal of Psychiatry last October. "And abnormal prefrontal white matter might affect complex behaviors such as deception."

As for my lying friend, she may or may not have an excess of white matter in the front of her brain. But she does lie, often. Her lies have no hidden purpose, she told me; she lies only for the sake of lying. "Of course," she added in her e-mail message, "I could be lying about all of this."

Today's federal effort to develop an efficient machine for credibility assessment has been compared to the Manhattan Project, the secret government undertaking to build the atomic bomb. This sounds hyperbolic, to compare a high-tech lie detector to a weapon of mass destruction, but Tom Zeffiro, who made the analogy, said that they raise similar moral quandaries, especially for the scientists doing the research. If a truly efficient lie detector could be developed, he said, we might find ourselves living in "a fundamentally different world than the one we live in today."

In the quest to make the country safer by looking for brain tracings of lies, it might turn out to be all but impossible to tell which tracings are signatures of truly dangerous lies and which are the images of lies that are harmless and kindhearted, or self-serving without being dangerous. As a result, we might find ourselves with instruments that can detect deception not only as an antiterrorism device but also in situations that have little to do with national security: job interviews, tax audits, classrooms, boardrooms, courtrooms, bedrooms.

This would be a problem. As the great physician-essayist Lewis Thomas once wrote, a foolproof lie-detection device would turn our quotidian lives upside down: "Before long, we would stop speaking to each other, television would be abolished as a habitual felon, politicians would be confined by house arrest and civilization would come to a standstill." It would be a mistake to bring such a device too rapidly to market, before considering what might happen not only if it didn't work — which is the kind of risk we're accustomed to thinking about — but also what might happen if it did. Worse than living in a world plagued by uncertainty, in which we can never know for sure who is lying to whom, might be to live in a world plagued by its opposite: certainty about where the lies are, thus forcing us to tell one another nothing but the truth.

Robin Marantz Henig, a contributing writer, is the author of "Pandora's Baby: How the First Test Tube Babies Sparked the Reproductive Revolution." Her most recent article for the magazine was about death.

Copyright 2006 The New York Times Company