Saturday, August 06, 2005

"...the moment we begin to think of ourselves as great, we will have lost it."

Excerpted from a recent Charlie Rose transcript:

---

JIM COLLINS: Let me just share with you a little story from back when we were doing the research on "Good to Great" -- I had an executive who asked if we could take their company out of the book. This was on the positive side of the ledger.

CHARLIE ROSE: Right.

JIM COLLINS: ... because, you know, we always have the good-to-greats or the great companies, and we have the direct comparisons. And -- and I said, well, you're a publicly traded company, you beat the market by some huge amount, I think it was 15 times the market, you're extraordinarily successful. I'm sorry, ball game, data is in. You went from good to great, you're in the book. So there was this pause, and -- and he said: "Well, I'm sort of disappointed to hear that," and I was puzzled and I said, well, why? And he said, "The moment we begin to think of ourselves as great, we will have lost it."

CHARLIE ROSE: Yeah.

JIM COLLINS: And I don't want us or our people to pick up a copy of your book and say ...

CHARLIE ROSE: So, we're great.

JIM COLLINS: We're great. And in many ways, greatness is this elusive thing. Good to great means you're always only good relative to ...

CHARLIE ROSE: Yes. It also means it's always a journey.

JIM COLLINS: It's always a journey. It's always what's next. You're up, every time you think you're at the top, there's another mountain.

---

Why do we laugh?

Theories of humour

Poking fun

Aug 4th 2005 | TUEBINGEN, GERMANY
From The Economist print edition

Why people laugh

THE true story of how your wife's stalker rang her to discuss killing you isn't supposed to provoke mirth. But when John Morreall, of the College of William and Mary in Virginia, related the events last week to a group of scholars in Tuebingen in Germany, they were in stitches as he divulged the details of how his wife tried to dissuade the confused young man by pleading that her mortgage was too large to pay without her husband's help.

So why did they laugh? Dr Morreall's thesis is that laughter, incapacitating as it can be, is a convincing signal that the danger has passed. The reaction of the psychologists, linguists, philosophers and professional clowns attending the Fifth International Summer School on Humour and Laughter illustrates his point. Dr Morreall survived to tell the tale and so had an easy time making it sound funny.

One description of how laughter is provoked is the incongruity theory developed by Victor Raskin of Purdue University and Salvatore Attardo of Youngstown State University, both in America. This theory says that all written jokes and many other humorous situations are based on an incongruity—something that is not quite right. In many jokes, the teller sets up the story with this incongruity present and the punch line then resolves it, in a way people do not expect. Alternatively, the very last words of the story may introduce the absurdity and leave the listeners with the task of reconciling it. For instance, many people find it funny that a conference on humour could take place in Germany.

Why do people laugh at all? What is the point of it? Laughter is very contagious and this suggests that it may have become a part of human behaviour because it promotes social bonding. When a group of people laughs, the message seems to be “relax, you are among friends”.

Indeed, humour is one way of dealing with the fact that humans are “excrement-producing poets and imperfect lovers”, says Appletree Rodden of the University of Tuebingen. He sees religion and humour as different, and perhaps competing, ways for people to accept death and the general unsatisfactoriness of the world. Perhaps that is why, as Dr Morreall calculates in a forthcoming article in the journal Humor, 95% of the writings that he sampled from important Christian scholars through the centuries disapproved of humour, linking it to insincerity and idleness.

Fear of idleness is why many managers discourage laughter during office hours, Dr Morreall notes. This is foolish, he claims. Laughter or its absence may be the best clue a manager has about the work environment and the mood of employees.

Indeed, another theory of why people laugh—the superiority theory—says that people laugh to assert that they are on a level equal to or higher than those around them. Research has shown that bosses tend to crack more jokes than do their employees. Women laugh much more in the presence of men, and men generally tell more jokes in the presence of women. Men have even been shown to laugh much more quietly around women, while laughing louder when in a group of men.

But laughter does not unite us all. There are those who have a pathological fear that others will laugh at them. Sufferers avoid situations where there will be laughter, which means most places where people meet. Willibald Ruch of Zurich University surveyed 1,000 Germans and asked them whether they thought they were the butts of jokes and found that almost 10% felt this way. These people also tended to classify taped laughter as jeering. Future research will focus on the hypothesis that there is something seriously wrong with their sense of humour.

Copyright © 2005 The Economist Newspaper and The Economist Group. All rights reserved.

We see what we want to see.

If we see at all.

---

To Master the Art Of Solving Crimes, Cops Study Vermeer --- Frick Museum Paintings Open New York Officers' Eyes; Just Like 'the Seven Five'

By Ellen Byron

1254 words
27 July 2005
The Wall Street Journal
A1
English
(Copyright (c) 2005, Dow Jones & Company, Inc.)

One Monday earlier this year, when New York's Frick Collection was closed to the public, about 15 New York police officers were ushered inside. The officers, some wearing their holsters, solemnly gathered around a conference table in an ornate, wood-paneled room. Having no idea why they had been summoned there, some assumed it was for a security briefing. They were surprised when they were told the real reason: They were there to look at art.

Capt. Ernest Pappas frowned in concentration as he stood before Vermeer's "Mistress and Maid" in the Frick's plush West Gallery and was asked to describe the painting.

"This woman is right-handed, of well-to-do means, and the pen appears to be in the dropped position," Mr. Pappas said, assessing the mistress. Unsure about the other figure in the picture, the maid, the 42-year-old asked his colleagues whether they thought she was delivering bad news. "Is she assuming a defensive position? Do you think that's a smirk?"

Though he hadn't so carefully analyzed a painting before, Mr. Pappas immediately saw how it related to his detective work in Queens: "Crimes -- and art -- can be solved by looking at the little details."

Art lovers flock to the Frick to pay homage to one of the world's finest displays of Western European art. Masterpieces by Rembrandt, Titian and Renoir adorn the walls of the Fifth Avenue mansion, once the home of industrial magnate Henry Clay Frick, an avid collector of art from the Renaissance period to the end of the 19th century. The beaux-arts setting is hushed and formal. Children under 10 years of age aren't allowed inside.

It's not your usual urban crime scene. But now, in an unusual effort to improve observational and analytical skills, the New York Police Department is bringing newly promoted officers, including sergeants, captains and uniformed executives, to the Frick to examine paintings.

"In New York, the extraordinary is so ordinary to us, so in training we're always looking to become even more aware as observers," says Diana Pizzuti, deputy chief and commanding officer of New York City's police academy.

"Tell me the who, what, where, why and when of each piece," Amy Herman, head of education at the Frick, instructs each class before they descend the mansion's grand staircase and enter the public galleries. She limits the time her students have in front of each painting. "Just like when they arrive at a crime scene, they have to make observations and judgments quickly," she says.

The NYPD course began last year, inspired by similar classes the 38-year-old Ms. Herman teaches for New York medical students. Those classes are intended to develop diagnostic abilities through better observation of patients.

Capt. Kevin Hurley, 53, scrutinized a 1742 Hogarth painting called "Miss Mary Edwards." He studied the seated woman in a red dress, trying to determine how to best explain the portrait to the rest of the group.

"We decided she wanted to show off the fact that she's educated and wealthy," Mr. Hurley told the other officers, pointing out her straight posture, her jewels and the letter she held, which they guessed was from her rich husband. The hunting dog in the picture puzzled Mr. Hurley. "That doesn't seem like a dog that woman would have," he told the group. "Shouldn't it be a poodle or something?"

Ms. Herman explained that the portrait reflected the independent nature of Miss Edwards, an educated woman who divorced her extravagant husband and regained control of her household. "We had come up with a really good story for it, but everything wasn't as it appeared," Mr. Hurley concluded. "I now know how to look for more than what you first see."

Standing in front of El Greco's "The Purification of the Temple," David Grossi, an NYPD captain, recognized Jesus as the painting's central figure, characterized the scene as chaotic and explained the work's use of light and color.

"The gang unit would probably be called in," he continued. "It appears there's grand larceny here, felony assault there, and Jesus would probably be charged with inciting a riot." Counting 17 people in the scene, he added: "Good thing there are plenty of witnesses."

Mr. Grossi, 41, says his Frick class came to mind when he responded to a call in his Bronx precinct earlier this month. A man had tried to evade an arrest warrant by jumping from one rooftop to another, and "he didn't make it," Mr. Grossi said. Though he has spent much of his 21 years as a police officer doing detective work, Mr. Grossi thought back to his training at the Frick when he began securing the scene after the man had been taken to the hospital.

When looking at a painting, he was taught to assess the entire canvas, from foreground to background, before drawing conclusions. So instead of just focusing on the immediate site of the fall, he widened the crime scene to include the sides of the building and a van in a driveway. "It reminded me to stop and take in the whole scene and not just have tunnel vision," Mr. Grossi said. Detectives later found the suspect's palm prints on the hood of the van, and that helped establish the route he had used in attempting to evade arrest.

The course has also given Ms. Herman a new perspective on her day job. Though she has a master's degree in art history and is well versed in the Frick's art collection, she says that working with the officers has given her fresh insight into, and appreciation of, the art she sees every day. When leading a discussion about J.M.W. Turner's dramatic sea scene "Fishing Boats Entering Calais Harbor," an officer remarked that it seemed like a race. "I've always looked at that urgency in terms of impending danger, but he could see that same tension in a sporting context," she says. "Now, every time I see the painting, I look at it a little differently."

Noting the vivid chaos of the Turner painting, one sergeant blurted that it looked like "the seven five," drawing agreement and smiles from the other officers. Ms. Herman, not catching the reference, asked for an explanation and learned that Brooklyn's 75th precinct was one of the city's busiest and most dangerous. The painting hanging next to it, Turner's "Mortlake Terrace: Early Summer Morning," then elicited shouts of "The one nine!" -- the quiet Manhattan precinct in which the Frick is located.

Giovanni Bellini's "St. Francis in the Desert," one of the Frick's most prized works, is usually considered a masterpiece of landscape or spirituality, or both. This summer, a group of captains offered a more modern assessment of the 15th-century work. "As a police officer, I have to say we have an EDP here," said Capt. Donald McHugh, using the police code word for emotionally disturbed person. Pointing to a skull and a jug of wine near St. Francis's feet, Mr. McHugh argued the piece could be depicting a crime scene. "Even people of God can be suspicious," he told the group. "He'd probably be a voluntary arrest, though, no handcuffs."

Nation of nerds.

Is this the first step towards regaining world stature, or another in a series of futile throes?

---

August 06, 2005

Who is King of the Geeks?

From Leo Lewis in Tokyo

Thousands are taking part in a bizarre contest in the hope of being named Japan’s leading nerd

THIS morning more than half a million self-declared nerds across Japan will be locked in their rooms, frenziedly racking their brains over an examination posing 100 of the most obscure questions imaginable.

If they don’t know, for example, precisely how many more people attended the Tokyo Comiket Manga (comic strip) convention in 2002 than in 2001, they are unlikely to make it past the first section.

Each nerd will be completely alone in this mental endeavour. Internet chatrooms and cyber cafés will be empty. There will be no conferring and the winner will take the greatest pop-culture prize of all — being officially recognised as Japan’s biggest geek, or otaku.

The country's first Biblos National Proficiency Test for Geeks is the latest product of Japan's otaku boom -- a recent phenomenon that has demonstrated the cultural and economic power of young men and women whose obsessive interests and hobbies once pushed them to the margins of Japanese society.

One of the organisers of the examination said: “Our aim is to nurture an otaku elite to carry the otaku culture through the 21st century.”

But the nerds’ passions are their biggest appeal. Prime-time television dramas have been based on their lives, their blogs have become best-sellers and districts of major cities are being refurbished to cater to them.

Major investment houses have begun studying the world of the otaku. Etsuko Suwa, a self-confessed otaku, who has spent the past fortnight preparing for the examination, said: “There has been a lot of discussion in the chatrooms about the questions, but I think I am prepared.

“Just to get within the top 100 in the country will be fantastic, but my parents say I don’t really need an exam to prove I’m an otaku.”

More than a million comic-book obsessives in Japan spend the equivalent of £0.5 billion every year buying comics and travelling to conventions. An estimated 800,000 worship pop stars and fritter the equivalent of about £300 million on attending every event in which their idol is involved. According to a study by the Nomura Research Institute, otaku command a market worth about £1.6 billion a year, without including the Japanese video games market.

Ken Kitabayashi, who compiled the report, said: “We are already working on a revised estimate that includes many other areas of otaku interest we didn’t bring into the initial calculations. If you add in areas such as toy trains, real train-spotting and the new breed of mobile phone otaku, the figure will be vast.”

But the most striking change wrought by the otaku boom has been in Akihabara, Tokyo's electronics district. For decades the area existed mainly as a tourist attraction and a haunt for electronics and video game obsessives. But during the past year it has evolved spectacularly. Colourful cafés have sprung up to meet the demand for geek meeting places, and a train line has been constructed to link Akihabara with Tsukuba, a city north of Tokyo where the country's foremost scientific research takes place.

A massive industrial complex has also been constructed in Akihabara, containing the first university specifically aimed at harnessing the talents of young otaku.

The Digital Hollywood University, which is recognised by the Ministry of Education, offers postgraduate degrees in subjects relating to technology, design and animation.

Tomo Sugiyama, the university’s founder, said: “People describe these young, enthusiastic Japanese as otaku, but I see students who will put everything into turning their interests into a marketable skill.”

THE NERD INSTINCT

Japan’s three million otaku are generally men in their 20s, but the word covers all obsessives, from teenage girls who lose themselves in romantic manga, to trainspotters.

The biggest subset of otaku are manga-otaku — people who trade comics and copy out the images.

Although the word otaku is in everyday use, nobody knows its etymology. Some think it refers to a Japanese word for “house” — implying that they don’t get out much.

It is thought that the word first appeared in print when a columnist used it in a 1983 article about a Tokyo comic convention.

Copyright 2005 Times Newspapers Ltd.

Friday, August 05, 2005

Maybe we are all ugly.

Those who are intelligent just do a comparatively better job of hiding it. Thus the apparent coexistence of beauty and intelligence.

---

The Ugliness Problem; Is it irrational to discriminate against the homely? Not entirely.

Dan Seligman
752 words
15 August 2005
Forbes
96
Volume 176 Issue 3
English
(c) 2005 Forbes Inc.

Is it irrational to discriminate against the appearance-challenged? Not entirely.

A sizeable and growing body of literature attests to the fact that homely people confront disadvantages not only in the competition for spouses but in many other areas of life. They have lower incomes than handsome types. When accused of crime, they tend to be dealt with more harshly by judges and juries. One recent report, sorrowfully dwelt upon by New York Times columnist Maureen Dowd, concludes that less attractive children are discriminated against by their own parents. (Parents are alleged to be less mindful of the safety of unattractive tots.)

In most academic venues and popular media the reaction has been to emphasize the irrational thinking that underlies discrimination against the ugly. The alternative perspective, about to be advanced on this page, questions whether the discrimination really is so irrational.

The classic article about the economic effects of physical appearance, published in the December 1994 American Economic Review, was written by Daniel S. Hamermesh (University of Texas, Austin) and Jeff E. Biddle (Michigan State). It relies on three studies (two American, one Canadian) in which interviewers visited people's homes, asked the occupants a lot of questions about their education, training and job histories, and discreetly (one hopes) rated each man or woman on physical attractiveness. The ratings were on a scale of one (best) to five (worst). In the larger of the two American samples 15% of interviewees were rated "quite plain" or "homely"--categories four and five.

Hamermesh and Biddle found that men in the top two categories enjoyed incomes 5% above those of men rated merely average in appearance. The unfortunate fellows in the two bottom categories were paid 9% below the average. The results for women workers were somewhat similar, except that the workplace effects were smaller. The study controlled for differences in education, experience and several other factors affecting pay but did not measure (and thus did not adjust for) intelligence.
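The premium and penalty figures come out of a standard wage regression with indicator variables for the appearance categories. As a minimal sketch of the kind of specification such studies use (the notation here is assumed for illustration, not taken from the published paper):

    \ln w_i = \alpha + \beta_a A_i + \beta_b B_i + \gamma^{\top} X_i + \varepsilon_i

where w_i is worker i's wage, A_i marks the top two appearance categories, B_i the bottom two, and X_i holds the controls (education, experience, and so on). The reported effects for men correspond roughly to \beta_a = 0.05 and \beta_b = -0.09. Because intelligence is absent from X_i, any covariance between looks and smarts gets absorbed into the beauty coefficients -- a loophole the rest of this column leans on.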

Hamermesh and Biddle agree that it's rational to pay more for good looks in some occupations, e.g., salesperson, but deny that this explains much of the pay gap. They leave you thinking that the basic dynamic is pure employer discrimination--a simple preference for good-looking people. Their paper says nothing about the policy implications of this perspective, but in a recent conversation with Hamermesh I discovered that he is sympathetic to ugly people who want laws to bar the discrimination.

But is it entirely irrational to view ugly people as generally less competent than beautiful people? It is hard to accept that employers in a competitive economy would irrationally persist in paying a premium for beauty--while somehow never noticing that all those lookers were in fact no more intelligent and reliable than the ugly characters being turned down. In the standard economic model of discrimination put forward years ago by Gary Becker of the University of Chicago, employers who discriminate irrationally get punished by the market, i.e., by competitors able to hire competence at lower rates.

The mating practices of human beings offer a reason for thinking beauty and intelligence might come in the same package. The logic of this covariance was explained to me years ago by a Harvard psychologist who had been reading a history of the Rothschild family. His mischievous but astute observation: The family founders, in 18th-century Frankfurt, were supremely ugly, but several generations later, after successive marriages to supremely beautiful women, the men in the family were indistinguishable from movie stars. The Rothschild effect, as you could call it, is well established in sociology research: Men everywhere want to marry beautiful women, and women everywhere want socially dominant (i.e., intelligent) husbands. When competent men marry pretty women, the couple tends to have children above average in both competence and looks. Covariance is everywhere. At the other end of the scale, too, there is a connection between looks and smarts. According to Erdal Tekin, a research fellow at the National Bureau of Economic Research, low attractiveness ratings predict lower test scores and a greater likelihood of criminal activity.

Antidiscrimination laws being what they are, it is sometimes difficult for an employer to give intelligence tests or even to ascertain criminal histories. So maybe the managers who subconsciously award a few extra points to the handsome applicants are rational. Or at least not quite as stupid as they look.

Waters of truth.

Has anyone ever thought about shorting a country based on drug consumption?

link to original piece.

From Wired News:

Rivers of Coke

by Stephen Leahy

Drug enforcement officials may soon have an accurate yet secret way to detect drug use -- the toilets of the world.

Italian scientists discovered that nearly 10 pounds of cocaine residues flow into Italy's Po River every day.

How is Italy's biggest river getting all that coke? From urine. Turns out that coke users, like beer drinkers, just rent their substance of choice. Although in the case of cocaine, it's transformed by the liver into benzoylecgonine, or BE, before being excreted. BE can't be produced by any other means, so when it's found in your urine sample, that spells trouble with a big T.

The finding, revealed Friday in the journal Environmental Health, marks the first time the byproducts of illicit drugs like cocaine have been detected in river water.

More surprisingly, the level of residues translates into at least 40,000 daily doses of coke snorted by residents of the Po Valley -- nearly three times more than official estimates of 15,000 doses of cocaine per day.

"We expected our field data on cocaine consumption to give estimates within the range of the official estimates, or perhaps lower, but certainly not higher," wrote Ettore Zuccato, of the Mario Negri Institute for Pharmacological Research in Milan, Italy.

Zuccato and his co-researchers decided they could use standard lab techniques to test entire cities or regions and get a rough idea of the level of cocaine use. Statistics about drug use are notoriously inaccurate, given that drug users don't generally like to fill out surveys. Since chemistry doesn't lie, this method offers a direct way of measuring how much coke is actually being used.

The researchers first developed a method to measure how much BE was in the surface waters of rivers or in wastewater at sewage-treatment plants using liquid chromatographic separation.
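The back-calculation from a measured residue load to a dose count is simple arithmetic. Here is a minimal sketch in Python; every parameter is an illustrative assumption (round molecular weights, a 45% excretion fraction, a 100 mg street dose), not a value from Zuccato's paper, and the article's "nearly 10 pounds" is read as roughly 4.5 kg of cocaine-equivalent residue per day:

    # Sketch: estimate doses/day from a daily benzoylecgonine (BE) load.
    # All parameters are illustrative assumptions, not the paper's values.

    MW_COCAINE = 303.4         # g/mol
    MW_BE = 289.3              # g/mol
    EXCRETION_FRACTION = 0.45  # assumed share of a cocaine dose excreted as BE
    DOSE_G = 0.100             # assumed typical street dose: 100 mg

    def cocaine_equivalent_kg(be_load_kg: float) -> float:
        """Scale a daily BE load back up to the cocaine mass that produced it."""
        return be_load_kg * (MW_COCAINE / MW_BE) / EXCRETION_FRACTION

    def doses_per_day(cocaine_kg: float) -> float:
        """Convert a daily cocaine-equivalent mass (kg) into a dose count."""
        return cocaine_kg * 1000.0 / DOSE_G

    # ~10 lb/day of residue, read here as cocaine equivalent (~4.5 kg):
    print(round(doses_per_day(4.5)))                    # 45000
    # If the ~10 lb were raw BE instead, the chain would be:
    # round(doses_per_day(cocaine_equivalent_kg(4.5)))  # ~105,000 under these assumptions

On these assumptions the numbers hang together: 4.5 kg a day at a tenth of a gram per dose is about 45,000 doses, consistent with the "at least 40,000" figure above.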

Sampling done at other sewage-treatment plants in various Italian cities confirmed the results.

"There is in fact no reasonable mechanism by which cocaine excretion products could accumulate in flowing surface waters," the authors wrote.

"It's a seminal piece of research," said Christian Daughton, chief of the environmental chemistry branch at the Environmental Protection Agency's Las Vegas laboratory.

Daughton, an expert on pharmaceutical products that find their way into rivers and lakes, first suggested that illicit drug use could be measured this way in 2001. The technique is an anonymous, noninvasive method of measuring drug use in a city or community, he said.

Depending on how far up the sewage system you go, it could also be used to measure drug use in a prison or neighborhood, Daughton said.

"I was shocked that so few scientists showed any interest in the idea at the time," he said.

Since nearly all illicit drugs have unique metabolites akin to BE, all types of drug use could be monitored, Daughton said. Measuring metabolites instead of the drug itself also eliminates false readings from dumping large amounts of drugs down the drain.

Daughton said more work is needed to verify that actual drug use corresponds to Zuccato's estimates.

"There's likely more cocaine being used than Zuccato estimates," Daughton said.

Thursday, August 04, 2005

A hierarchy of motivation based on regret avoidance.

Education is the number one source of regret. Leisure trails.

---

Personality and Social Psychology Bulletin, Vol. 31, No. 9, 1273-1285 (2005)
DOI: 10.1177/0146167205274693
© 2005 Society for Personality and Social Psychology, Inc.

What We Regret Most... and Why.

Neal J. Roese
University of Illinois, roese@uiuc.edu

Amy Summerville

University of Illinois

Which domains in life produce the greatest potential for regret, and what features of those life domains explain why? Using archival and laboratory evidence, the authors show that greater perceived opportunity within life domains evokes more intense regret. This pattern is consistent with previous publications demonstrating greater regret stemming from high rather than low opportunity or choice. A meta-analysis of 11 regret ranking studies revealed that the top six biggest regrets in life center on (in descending order) education, career, romance, parenting, the self, and leisure. Study Set 2 provided new laboratory evidence that directly linked the regret ranking to perceived opportunity. Study Set 3 ruled out an alternative interpretation involving framing effects. Overall, these findings show that people's biggest regrets are a reflection of where in life they see their largest opportunities; that is, where they see tangible prospects for change, growth, and renewal.

You can fool all of the people all of the time.

If they really want to be fooled.

link to original article.

Watch my hands deceive you
By Rachael Buchanan
BBC News

Magicians have been using a clever mix of dexterity and deception for centuries to astound and captivate their audiences.

But how do they fool people who know they are going to be duped?

Well, cutting-edge psychology is now being applied to this most ancient of entertainment forms, to understand how these masters of legerdemain trick the complexities of the human brain.

The techniques involved have been discussed this week at the Science Museum's Dana Centre, at an event held to mark the centenary of the Magic Circle.

At the vanguard of this unusual appliance of science is University of Hertfordshire psychologist Professor Richard Wiseman.

As a former conjurer, he is uniquely qualified to understand the social dynamics between a magician and his audience and he argues that there is a lot more happening in a magic show than people realise.

"The really good performers," he said, "the ones who know what they're doing, have an incredible grasp of psychology", and use it to convince you to see their version of events.

One, two, three

Many of their mind tricks of the trade are surprisingly simple and boil down to misdirection - of gaze, attention, suspicion or even memory.

Body language is key. An audience can be misled about the location of an object by tensing or relaxing hands to make them look full or empty.

Another important social cue is direction of gaze and head movement. If a performer looks in a particular direction, the natural reaction is to follow, giving him a brief window to make his move.

It is vital that when the audience realises they have been had, they cannot properly recall the fine details in the series of events which led up to the effect, and so deconstruct it.

"For example," said Professor Wiseman, "a magician might cut some cards and say 'Right, they're mixed up now'. Then he'll do something else and then say 'Now, remember I shuffled the cards at the start'.

"That word - 'shuffled' - has gone in, and people think 'Yeah, that's right, the cards were shuffled'. But they weren't, he just cut them. It's cut to mix to shuffle. Small steps. If you had gone from cut to shuffle, it's too much and people notice."

And magicians are not the only ones playing tricks; your mind is at it, too.

"People think the way their perception works is simple," said University College London neuroscientist Geraint Rees. "They think it's like having a viewing room inside their brain with a little man sitting there monitoring a big bank of video feeds from outside, but it's not so straight forward."

Invisible walk

Vision is not the only sensory data the brain receives at any one time.

Every moment, the mind is bombarded with a cacophony of sights, sounds, smells, tastes and physical sensations. Your consciousness has to continually meld all this data together and make sense of it.


So it takes a few shortcuts, and unless a major change occurs to grab your attention, your brain will not refresh the scene. In short, you see what you expect to see - and that is very helpful to an illusionist.

Scientists have devised a myriad of weird and wonderful ways to test this - to show how much people miss right in front of their eyes.

US scientists Daniel Simons and Christopher Chabris are keen students of what is called "inattention blindness".

They have a test which involves showing subjects a video of two basketball teams, one wearing white and the other black. Subjects are told to count the passes made by the white team, and most become so focused that they fail to notice a woman walk across the court with an umbrella.

"We have a limited processing capacity and can only perceive what we attend to," said UCL psychologist Dr Nilli Lavie.

"If the information we are paying attention to is taking all of our capacity, it doesn't leave capacity to perceive anything else."

On the spot

Dr Lavie specialises in inattention blindness. "The skill of a good magician is to make a very interesting, dramatic act with complex actions and interesting verbal utterances," she said.

"He loads your attention with all this information, but it's irrelevant to the act that he presents. It is so you don't notice the deception."

However, humans, being natural sceptics, go to magic shows knowing they are going to be deceived, so they pay special attention to everything. But being overly focused can also be turned to the illusionist's advantage.

Dr Lavie asked a couple of thousand people at the Science Museum in London to help her in a study.

They were told to spot whether an "X" or an "N" appeared within a briefly displayed ring of random letters.

They were also told to expect the occasional brief appearances of a small circle. They were even shown the circle, yet most failed to see it because their attention was overly focused on the letters.

She sees parallels with magic shows. "With magic you expect something is going to happen but you still don't see it because your attention is being otherwise engaged."

Who's a sucker?

To test this theory, Durham University psychologist Gustav Kuhn employed an eye-tracking device to monitor exactly where people look when watching magic.

In his trick, a cigarette vanishes simply by being dropped. He found that even when viewers were told that the cigarette would disappear, the majority still missed the method.

"It doesn't depend on where you look," he said. "The most important fact is what you are attending to. People can look straight at the cigarette and not see it drop."
So what motivates people to let magicians mess with their minds. After all, there is a fundamental problem with this relationship. At the end of the day, you are paying someone, to deceive you; you are twice the sucker.

Professor Wiseman calls it a "social contract for disaster".

"I show you a trick, you want to know how it's done, I am not going to tell you. Conflict."

'Kings of cool'

Magicians have to manage that. And to do that they use these psychological techniques, but they are also giving something back.

In a renowned essay, Paul Harris, one of David Blaine's technical advisers, has argued it is "astonishment". This, he claims, is our natural state of mind, analogous to a childlike awe that is lost in adulthood. Good magic, Harris claims, returns people to that state.

Today, it is the edgier performers like Derren Brown and David Blaine that people want to see.

They are both charged with wresting magic back from the grip of the likes of David Copperfield, Paul Daniels and Lance Burton - the poster boys for high-camp family entertainment.

Yes, Blaine and Brown will astonish, but they will also unsettle; they take you on a journey to the parts of your childhood inhabited by the scary clown, not Peter Pan and the fairies.

In reality, their tricks are the same as those played out over generations. Blaine's genius was to strip out the outdated paraphernalia of corny catch-phrases, tuxedos and Vegas-scale stunts and work on the effect instead.

His TV show was successful because it was street magic with the safety catch off.

He sprang his tricks on unsuspecting passers-by, and then lingered on their reaction, focussing more on their discomfort at what they had experienced than the trick itself.

Transporting modern cynical audiences back to a childlike state of wonder is a tall order even for these kings of cool. But Wiseman believes Derren Brown is doubly astute, using people's natural scepticism and demand for answers to his advantage.

"If he passed himself off as having psychic abilities, he wouldn't be half as successful," Professor Wiseman insists.

Instead, Brown offers a rational explanation dressed up in science to explain the tricks he does. He presents himself as having amazing powers of memory and psychological manipulation and so offers the audience a "believable" solution, albeit often the wrong one.

"Derren's trick is not the magic, it's creating the illusion that he has these fantastic abilities and getting you to believe him."

There never seems to be a last word on anything.

This is no exception.

link to original article.

Why Truman Dropped the Bomb

From the August 8, 2005 issue: Sixty years after Hiroshima, we now have the secret intercepts that shaped his decision.
by Richard B. Frank
08/08/2005, Volume 010, Issue 44

The sixtieth anniversary of Hiroshima seems to be shaping up as a subdued affair--though not for any lack of significance. A survey of news editors in 1999 ranked the dropping of the atomic bomb on August 6, 1945, first among the top one hundred stories of the twentieth century. And any thoughtful list of controversies in American history would place it near the top again. It was not always so. In 1945, an overwhelming majority of Americans regarded it as a matter of course that the United States had used atomic bombs to end the Pacific war. They further believed that those bombs had actually ended the war and saved countless lives. This set of beliefs is now sometimes labeled by academic historians the "traditionalist" view. One unkindly dubbed it the "patriotic orthodoxy."

But in the 1960s, what were previously modest and scattered challenges to the decision to use the bombs began to crystallize into a rival canon. The challengers were branded "revisionists," but this is inapt. Any historian who gains possession of significant new evidence has a duty to revise his appreciation of the relevant events. These challengers are better termed critics.

The critics share three fundamental premises. The first is that Japan's situation in 1945 was catastrophically hopeless. The second is that Japan's leaders recognized that fact and were seeking to surrender in the summer of 1945. The third is that thanks to decoded Japanese diplomatic messages, American leaders knew that Japan was about to surrender when they unleashed needless nuclear devastation. The critics divide over what prompted the decision to drop the bombs in spite of the impending surrender, with the most provocative arguments focusing on Washington's desire to intimidate the Kremlin. Among an important stratum of American society--and still more perhaps abroad--the critics' interpretation displaced the traditionalist view.

These rival narratives clashed in a major battle over the exhibition of the Enola Gay, the airplane from which the bomb was dropped on Hiroshima, at the Smithsonian Institution in 1995. That confrontation froze many people's understanding of the competing views. Since then, however, a sheaf of new archival discoveries and publications has expanded our understanding of the events of August 1945. This new evidence requires serious revision of the terms of the debate. What is perhaps the most interesting feature of the new findings is that they make a case President Harry S. Truman deliberately chose not to make publicly in defense of his decision to use the bomb.

When scholars began to examine the archival records in the 1960s, some intuited quite correctly that the accounts of their decision-making that Truman and members of his administration had offered in 1945 were at least incomplete. And if Truman had refused to disclose fully his thinking, these scholars reasoned, it must be because the real basis for his choices would undermine or even delegitimize his decisions. It scarcely seemed plausible to such critics--or to almost anyone else--that there could be any legitimate reason that the U.S. government would have concealed at the time, and would continue to conceal, powerful evidence that supported and explained the president's decisions.

But beginning in the 1970s, we have acquired an array of new evidence from Japan and the United States. By far the most important single body of this new evidence consists of secret radio intelligence material, and what it highlights is the painful dilemma faced by Truman and his administration. In explaining their decisions to the public, they deliberately forfeited their best evidence. They did so because under the stringent security restrictions guarding radio intercepts, recipients of this intelligence up to and including the president were barred from retaining copies of briefing documents, from making any public reference to them whatsoever at the time or in their memoirs, and from retaining any record of what they had seen or what they had concluded from it. With a handful of exceptions, they obeyed these rules, both during the war and thereafter.

Collectively, the missing information is known as The Ultra Secret of World War II (after the title of a breakthrough book by Frederick William Winterbotham published in 1974). Ultra was the name given to what became a vast and enormously efficient Allied radio intelligence organization, which secretly unveiled masses of information for senior policymakers. Careful listening posts snatched copies of millions of cryptograms from the air. Code breakers then extracted the true text. The extent of the effort is staggering. By the summer of 1945, Allied radio intelligence was breaking into a million messages a month from the Japanese Imperial Army alone, and many thousands from the Imperial Navy and Japanese diplomats.

All of this effort and expertise would be squandered if the raw intercepts were not properly translated and analyzed and their disclosures distributed to those who needed to know. This is where Pearl Harbor played a role. In the aftermath of that disastrous surprise attack, Secretary of War Henry Stimson recognized that the fruits of radio intelligence were not being properly exploited. He set Alfred McCormack, a top-drawer lawyer with experience in handling complex cases, to the task of formulating a way to manage the distribution of information from Ultra. The system McCormack devised called for funneling all radio intelligence to a handful of extremely bright individuals who would evaluate the flood of messages, correlate them with all other sources, and then write daily summaries for policymakers.

By mid-1942, McCormack's scheme had evolved into a daily ritual that continued to the end of the war--and is in essence the system still in effect today. Every day, analysts prepared three mimeographed newsletters. Official couriers toting locked pouches delivered one copy of each summary to a tiny list of authorized recipients around the Washington area. (They also retrieved the previous day's distribution, which was then destroyed except for a file copy.) Two copies of each summary went to the White House, for the president and his chief of staff. Other copies went to a very select group of officers and civilian officials in the War and Navy Departments, the British Staff Mission, and the State Department. What is almost as interesting is the list of those not entitled to these top-level summaries: the vice president, any cabinet official outside the select few in the War, Navy, and State Departments, anyone in the Office of Strategic Services or the Federal Bureau of Investigation, or anyone in the Manhattan Project building the atomic bomb, from Major General Leslie Groves on down.

The three daily summaries were called the "Magic" Diplomatic Summary, the "Magic" Far East Summary, and the European Summary. ("Magic" was a code word coined by the U.S. Army's chief signal officer, who called his code breakers "magicians" and their product "Magic." The term "Ultra" came from the British and has generally prevailed as the preferred term among historians, but in 1945 "Magic" remained the American designation for radio intelligence, particularly that concerning the Japanese.) The "Magic" Diplomatic Summary covered intercepts from foreign diplomats all over the world. The "Magic" Far East Summary presented information on Japan's military, naval, and air situation. The European Summary paralleled the Far East summary in coverage and need not detain us. Each summary read like a newsmagazine. There were headlines and brief articles usually containing extended quotations from intercepts and commentary. The commentary was critical: Since no recipient retained any back issues, it was up to the editors to explain how each day's developments fitted into the broader picture.

When a complete set of the "Magic" Diplomatic Summary for the war years was first made public in 1978, the text contained a large number of redacted (literally whited out) passages. The critics reasonably asked whether the blanks concealed devastating revelations. Release of a nonredacted complete set in 1995 disclosed that the redacted areas had indeed contained a devastating revelation--but not about the use of the atomic bombs. Instead, the redacted areas concealed the embarrassing fact that Allied radio intelligence was reading the codes not just of the Axis powers, but also of some 30 other governments, including allies like France.

The diplomatic intercepts included, for example, those of neutral diplomats or attachés stationed in Japan. Critics highlighted a few nuggets from this trove in the 1978 releases, but with the complete release, we learned that there were only 3 or 4 messages suggesting the possibility of a compromise peace, while no fewer than 13 affirmed that Japan fully intended to fight to the bitter end. Another page in the critics' canon emphasized a squad of Japanese diplomats in Europe, from Sweden to the Vatican, who attempted to become peace entrepreneurs in their contacts with American officials. As the editors of the "Magic" Diplomatic Summary correctly made clear to American policymakers during the war, however, not a single one of these men (save one we will address shortly) possessed actual authority to act for the Japanese government.

An inner cabinet in Tokyo authorized Japan's only officially sanctioned diplomatic initiative. The Japanese dubbed this inner cabinet the Big Six because it comprised just six men: Prime Minister Kantaro Suzuki, Foreign Minister Shigenori Togo, Army Minister Korechika Anami, Navy Minister Mitsumasa Yonai, and the chiefs of staff of the Imperial Army (General Yoshijiro Umezu) and Imperial Navy (Admiral Soemu Toyoda). In complete secrecy, the Big Six agreed on an approach to the Soviet Union in June 1945. This was not to ask the Soviets to deliver a "We surrender" note; rather, it aimed to enlist the Soviets as mediators to negotiate an end to the war satisfactory to the Big Six--in other words, a peace on terms satisfactory to the dominant militarists. Their minimal goal was not confined to guaranteed retention of the Imperial Institution; they also insisted on preservation of the old militaristic order in Japan, the one in which they ruled.

The conduit for this initiative was Japan's ambassador in Moscow, Naotake Sato. He communicated with Foreign Minister Togo--and, thanks to code breaking, with American policymakers. Ambassador Sato emerges in the intercepts as a devastating cross-examiner ruthlessly unmasking for history the feebleness of the whole enterprise. Sato immediately told Togo that the Soviets would never bestir themselves on behalf of Japan. The foreign minister could only insist that Sato follow his instructions. Sato demanded to know whether the government and the military supported the overture and what its legal basis was--after all, the official Japanese position, adopted in an Imperial Conference in June 1945 with the emperor's sanction, was a fight to the finish. The ambassador also demanded that Japan state concrete terms to end the war, otherwise the effort could not be taken seriously. Togo responded evasively that the "directing powers" and the government had authorized the effort--he did not and could not claim that the military in general supported it or that the fight-to-the-end policy had been replaced. Indeed, Togo added: "Please bear particularly in mind, however, that we are not seeking the Russians' mediation for anything like an unconditional surrender."

This last comment triggered a fateful exchange. Critics have pointed out correctly that both Under Secretary of State Joseph Grew (the former U.S. ambassador to Japan and the leading expert on that nation within the government) and Secretary of War Henry Stimson advised Truman that a guarantee that the Imperial Institution would not be eliminated could prove essential to obtaining Japan's surrender. The critics further have argued that if only the United States had made such a guarantee, Japan would have surrendered. But when Foreign Minister Togo informed Ambassador Sato that Japan was not looking for anything like unconditional surrender, Sato promptly wired back a cable that the editors of the "Magic" Diplomatic Summary made clear to American policymakers "advocate[s] unconditional surrender provided the Imperial House is preserved." Togo's reply, quoted in the "Magic" Diplomatic Summary of July 22, 1945, was adamant: American policymakers could read for themselves Togo's rejection of Sato's proposal--with not even a hint that a guarantee of the Imperial House would be a step in the right direction. Any rational person following this exchange would conclude that modifying the demand for unconditional surrender to include a promise to preserve the Imperial House would not secure Japan's surrender.

Togo's initial messages--indicating that the emperor himself endorsed the effort to secure Soviet mediation and was prepared to send his own special envoy--elicited immediate attention from the editors of the "Magic" Diplomatic Summary, as well as Under Secretary of State Grew. Because of Grew's documented advice to Truman on the importance of the Imperial Institution, critics feature him in the role of the sage counsel. What the intercept evidence discloses is that Grew reviewed the Japanese effort and concurred with the U.S. Army's chief of intelligence, Major General Clayton Bissell, that the effort most likely represented a ploy to play on American war weariness. They deemed the possibility that it manifested a serious effort by the emperor to end the war "remote." Lest there be any doubt about Grew's mindset, as late as August 7, the day after Hiroshima, Grew drafted a memorandum with an oblique reference to radio intelligence again affirming his view that Tokyo still was not close to peace.

Starting with the publication of excerpts from the diaries of James Forrestal in 1951, the contents of a few of the diplomatic intercepts were revealed, and for decades the critics focused on these. But the release of the complete (unredacted) "Magic" Far East Summary, supplementing the Diplomatic Summary, in the 1990s revealed that the diplomatic messages amounted to a mere trickle by comparison with the torrent of military intercepts. The intercepts of Japanese Imperial Army and Navy messages disclosed without exception that Japan's armed forces were determined to fight a final Armageddon battle in the homeland against an Allied invasion. The Japanese called this strategy Ketsu Go (Operation Decisive). It was founded on the premise that American morale was brittle and could be shattered by heavy losses in the initial invasion. American politicians would then gladly negotiate an end to the war far more generous than unconditional surrender.

Ultra was even more alarming in what it revealed about Japanese knowledge of American military plans. Intercepts demonstrated that the Japanese had correctly anticipated precisely where U.S. forces intended to land on Southern Kyushu in November 1945 (Operation Olympic). American planning for the Kyushu assault reflected adherence to the military rule of thumb that the attacker should outnumber the defender at least three to one to assure success at a reasonable cost. American estimates projected that on the date of the landings, the Japanese would have only three of their six field divisions on all of Kyushu in the southern target area where nine American divisions would push ashore. The estimates allowed that the Japanese would possess just 2,500 to 3,000 planes total throughout Japan to face Olympic. American aerial strength would be over four times greater.

From mid-July onwards, Ultra intercepts exposed a huge military buildup on Kyushu. Japanese ground forces exceeded prior estimates by a factor of four. Instead of 3 Japanese field divisions deployed in southern Kyushu to meet the 9 U.S. divisions, there were 10 Imperial Army divisions plus additional brigades. Japanese air forces exceeded prior estimates by a factor of two to four. Instead of 2,500 to 3,000 Japanese aircraft, estimates varied between about 6,000 and 10,000. One intelligence officer commented that the Japanese defenses threatened "to grow to [the] point where we attack on a ratio of one (1) to one (1) which is not the recipe for victory."

Concurrent with the publication of the radio intelligence material, additional papers of the Joint Chiefs of Staff have been released in the last decade. From these, it is clear that there was no true consensus among the Joint Chiefs of Staff about an invasion of Japan. The Army, led by General George C. Marshall, believed that the critical factor in achieving American war aims was time. Thus, Marshall and the Army advocated an invasion of the Home Islands as the fastest way to end the war. But the long-held Navy view was that the critical factor in achieving American war aims was casualties. The Navy was convinced that an invasion would be far too costly to sustain the support of the American people, and hence believed that blockade and bombardment were the sound course.

The picture becomes even more complex than previously understood because it emerged that the Navy chose to postpone a final showdown over these two strategies. The commander in chief of the U.S. fleet, Admiral Ernest King, informed his colleagues on the Joint Chiefs of Staff in April 1945 that he did not agree that Japan should be invaded. He concurred only that the Joint Chiefs must issue an invasion order immediately to create that option for the fall. But King predicted that the Joint Chiefs would revisit the issue of whether an invasion was wise in August or September. Meanwhile, two months of horrendous fighting ashore on Okinawa under skies filled with kamikazes convinced the commander in chief of the Pacific Fleet, Admiral Chester Nimitz, that he should withdraw his prior support for at least the invasion of Kyushu. Nimitz informed King of this change in his views in strict confidence.

In August, the Ultra revelations propelled the Army and Navy towards a showdown over the invasion. On August 7 (the day after Hiroshima, which no one expected to prompt a quick surrender), General Marshall reacted to weeks of gathering gloom in the Ultra evidence by asking General Douglas MacArthur, who was to command what promised to be the greatest invasion in history, whether invading Kyushu in November as planned still looked sensible. MacArthur replied, amazingly, that he did not believe the radio intelligence! He vehemently urged the invasion should go forward as planned. (This, incidentally, demolishes later claims that MacArthur thought the Japanese were about to surrender at the time of Hiroshima.) On August 9 (the day the second bomb was dropped, on Nagasaki), King gathered the two messages in the exchange between Marshall and MacArthur and sent them to Nimitz. King told Nimitz to provide his views on the viability of invading Kyushu, with a copy to MacArthur. Clearly, nothing that had transpired since May would have altered Nimitz's view that Olympic was unwise. Ultra now made the invasion appear foolhardy to everyone but MacArthur. But King had not placed a deadline on Nimitz's response, and the Japanese surrender on August 15 allowed Nimitz to avoid starting what was certain to be one of the most tumultuous interservice battles of the whole war.

What this evidence illuminates is that one central tenet of the traditionalist view is wrong--but with a twist. Even with the full ration of caution that any historian should apply anytime he ventures comments on paths history did not take, in this instance it is now clear that the long-held belief that Operation Olympic loomed as a certainty is mistaken. Truman's reluctant endorsement of the Olympic invasion at a meeting in June 1945 was based in key part on the fact that the Joint Chiefs had presented it as their unanimous recommendation. (King went along with Marshall at the meeting, presumably because he deemed it premature to wage a showdown fight. He did comment to Truman that, of course, any invasion authorized then could be canceled later.) With the Navy's withdrawal of support, the terrible casualties in Okinawa, and the appalling radio-intelligence picture of the Japanese buildup on Kyushu, Olympic was not going forward as planned and authorized--period. But this evidence also shows that the demise of Olympic came not because it was deemed unnecessary, but because it had become unthinkable. It is hard to imagine anyone who could have been president at the time (a spectrum that includes FDR, Henry Wallace, William O. Douglas, Harry Truman, and Thomas Dewey) failing to authorize use of the atomic bombs in this circumstance.

Japanese historians uncovered another key element of the story. After Hiroshima (August 6), Soviet entry into the war against Japan (August 8), and Nagasaki (August 9), the emperor intervened to break a deadlock within the government and decide that Japan must surrender in the early hours of August 10. The Japanese Foreign Ministry dispatched a message to the United States that day stating that Japan would accept the Potsdam Declaration, "with the understanding that the said declaration does not comprise any demand which prejudices the prerogatives of His Majesty as a Sovereign Ruler." This was not, as critics later asserted, merely a humble request that the emperor retain a modest figurehead role. As Japanese historians writing decades after the war emphasized, the demand that there be no compromise of the "prerogatives of His Majesty as a Sovereign Ruler" as a precondition for the surrender was a demand that the United States grant the emperor veto power over occupation reforms and continue the rule of the old order in Japan. Fortunately, Japan specialists in the State Department immediately realized the actual purpose of this language and briefed Secretary of State James Byrnes, who insisted properly that this maneuver must be defeated. The maneuver further underscores the fact that right to the very end, the Japanese pursued twin goals: not only the preservation of the imperial system, but also preservation of the old order in Japan that had launched a war of aggression that killed 17 million.

This brings us to another aspect of history that now very belatedly has entered the controversy. Several American historians led by Robert Newman have insisted vigorously that any assessment of the end of the Pacific war must include the horrifying consequences of each continued day of the war for the Asian populations trapped within Japan's conquests. Newman calculates that between a quarter million and 400,000 Asians, overwhelmingly noncombatants, were dying each month the war continued. Newman et al. challenge whether an assessment of Truman's decision can highlight only the deaths of noncombatant civilians in the aggressor nation while ignoring much larger death tolls among noncombatant civilians in the victim nations.

There are a good many more points that now extend our understanding beyond the debates of 1995. But it is clear that all three of the critics' central premises are wrong. The Japanese did not see their situation as catastrophically hopeless. They were not seeking to surrender, but pursuing a negotiated end to the war that preserved the old order in Japan, not just a figurehead emperor. Finally, thanks to radio intelligence, American leaders, far from knowing that peace was at hand, understood--as one analytical piece in the "Magic" Far East Summary stated in July 1945, after a review of both the military and diplomatic intercepts--that "until the Japanese leaders realize that an invasion can not be repelled, there is little likelihood that they will accept any peace terms satisfactory to the Allies." This cannot be improved upon as a succinct and accurate summary of the military and diplomatic realities of the summer of 1945.

The displacement of the so-called traditionalist view within important segments of American opinion took several decades to accomplish. It will take a similar span of time to displace the critical orthodoxy that arose in the 1960s and prevailed roughly through the 1980s, and replace it with a richer appreciation for the realities of 1945. But the clock is ticking.

Richard B. Frank, a historian of World War II, is the author of Downfall: The End of the Imperial Japanese Empire.

© Copyright 2005, News Corporation, Weekly Standard, All Rights Reserved.

errors of perception

on ray's blog. a must-read for modern consumers of information who want to protect themselves against illegitimate signalling, and for modern producers of information who want to gain an advantage.

http://techbuz.blogspot.com/2005/07/psychology-of-intelligence-analysis.html

http://www.cia.gov/csi/books/19104/

Wednesday, August 03, 2005

Just like everything else right now in China.

Please put away any notions of any other motive operating at this time.

link to original article.

---

August 04, 2005

Barefaced greed has replaced barefoot doctors, says bold report on Chinese healthcare

From Jane Macartney in Beijing

OLD Mrs Wang hardly dares to go to hospital these days. She is deterred not by fear of the doctor’s diagnosis but by the size of his bill.

A report from a think-tank that advises the Chinese Cabinet brands modern healthcare a failure. It says that medical services have been turned into the preserve of the rich in a country that, a generation ago, prided itself on the provision of free medical treatment for all.

The health review, by the Development Research Centre, is unusual not only for its forthright assessment of healthcare inadequacies but also for having been published at all under an authoritarian system that usually keeps its weaknesses out of the public eye.

A senior official at the Ministry of Health said privately that many people who lived in rural areas died at home, having been deterred from going to hospital by the expense.

The numbers are irrefutable. In 1993 about 10 per cent of patients in small towns chose not to seek medical treatment because of the cost. In 1998 that number had more than quadrupled to 42 per cent. In rural areas, where incomes are lower and where more than 700 million of China’s 1.3 billion people live, the figures are even more alarming.

The report points out that in such poverty-stricken areas 80 per cent of people in need of hospital treatment did not seek such care because they could not afford it.

Mrs Wang had joined dozens of people in a waiting room at the Capital Paediatrics Institute, in central Beijing, to pay 40p to register her granddaughter to see a doctor. To register to see a specialist would cost £5 — and this is in a country where the average annual income is £1,130.

“Since medical reforms, we almost don’t dare to go to hospital,” Mrs Wang said. “No matter what disease it is, no matter if it’s necessary, doctors are bound to ask you to undergo a complete check. How can normal people afford that?” Two decades ago, healthcare was universal and free. Chairman Mao’s “barefoot doctors” gained fame throughout the world for fanning out across China’s poorest areas to treat the sick.

Questions may remain over whether they were effective, but they did provide care. Today, China’s public health system has become one of the least efficient and least equitable in the world, the think-tank report says.

Hospitals routinely overprescribe medicines to boost revenue. One writer in an internet chatroom worked in a clinic where, he said, doctors delivered 95 per cent of babies by Caesarean section to make more money.

Medical institutions deprived of government funding for nearly a quarter of a century of market-driven economic reforms have been transformed into cash-hungry businesses. The World Health Organisation, which sponsored the report, ranked China the fourth-worst country in the world for the fairness of its allocation of medical resources.

The report says: “Most of the medical needs of society cannot be met because of economic reasons. Poor people cannot even enjoy the most basic healthcare.”

Cao Hui would agree. His six-year-old daughter was being treated for an ear infection picked up while swimming. He had paid £160 at the Beijing hospital for a check-up and medicines. “I don’t know if the treatment is correct, but the cost is really too much and this is a small illness,” he said. “But I can’t begrudge the money to treat my child.”

It is not uncommon for hospitals to demand a deposit before treating emergency cases. State media expressed shock at the story of a three-year-old boy who was scalded by boiling water in an accident at his home in western Xinjiang and who died of his injuries after his parents were turned away from one hospital because they could not afford a £1,500 deposit demanded by doctors.

The answer is for the Government to step back into the picture and reform the health system, the report says. The consequences of not doing so could be far-reaching, leading to an erosion of public support for reform and exacerbating social instability. “Owing to the overall decline of medical service, the Government will become a target of fierce public criticism,” the report concludes.

Copyright 2005 Times Newspapers Ltd.

Is multitasking counterproductive?

IS MULTITASKING MORE EFFICIENT? SHIFTING MENTAL GEARS COSTS TIME, ESPECIALLY WHEN SHIFTING TO LESS FAMILIAR TASKS

Studying The "Inner CEO" Can Improve Interface Design, Personnel Training And Diagnosis Of Brain Damage

WASHINGTON - New scientific studies reveal the hidden costs of multitasking, key findings as technology increasingly tempts people to do more than one thing (and increasingly, more than one complicated thing) at a time. Joshua Rubinstein, Ph.D., of the Federal Aviation Administration, and David Meyer, Ph.D., and Jeffrey Evans, Ph.D., both at the University of Michigan, describe their research in the August issue of the Journal of Experimental Psychology: Human Perception and Performance, published by the American Psychological Association (APA).

Whether people toggle between browsing the Web and using other computer programs, talk on cell phones while driving, pilot jumbo jets or monitor air traffic, they're using their "executive control" processes -- the mental CEO -- found to be associated with the brain's prefrontal cortex and other key neural regions such as the parietal cortex. These interrelated cognitive processes establish priorities among tasks and allocate the mind's resources to them. "For each aspect of human performance -- perceiving, thinking and acting -- people have specific mental resources whose effective use requires supervision through executive mental control," says Meyer.

To better understand executive control, as well as the human capacity for multitasking and its limitations, Rubinstein, Meyer and Evans studied patterns in the amounts of time lost when people switched repeatedly between two tasks of varying complexity and familiarity. In four experiments, young adult subjects (in turn, 12, 36, 36 and 24 in number) switched between different tasks, such as solving math problems or classifying geometric objects. The researchers measured subjects' speed of performance as a function of whether the successive tasks were familiar or unfamiliar, and whether the rules for performing them were simple or complex.

The measurements revealed that for all types of tasks, subjects lost time when they had to switch from one task to another, and time costs increased with the complexity of the tasks, so it took significantly longer to switch between more complex tasks. Time costs also were greater when subjects switched to tasks that were relatively unfamiliar. They got "up to speed" faster when they switched to tasks they knew better, an observation that may lead to interfaces designed to help overcome people's innate cognitive limitations.

The researchers say their results suggest that executive control involves two distinct, complementary stages: goal shifting ("I want to do this now instead of that") and rule activation ("I'm turning off the rules for that and turning on the rules for this"). Both stages help people unconsciously switch between tasks.

Rule activation itself takes significant amounts of time, several tenths of a second -- which can add up when people switch back and forth repeatedly between tasks. Thus, multitasking may seem more efficient on the surface, but may actually take more time in the end. According to the authors, this insight into executive control may help people choose strategies that maximize their efficiency when multitasking. The insight may also weigh against multitasking. For example, Meyer points out, a mere half second of time lost to task switching can mean the difference between life and death for a driver using a cell phone, because during the time that the car is not totally under control, it can travel far enough to crash into obstacles the driver might have otherwise avoided.
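To make those time costs concrete, here is a minimal back-of-the-envelope sketch in Python. The 0.4-second per-switch cost and the driving speeds are assumed figures chosen to be consistent with the "several tenths of a second" cited above; they are illustrations, not the study's measured data.

# Back-of-the-envelope illustration only. The 0.4 s per-switch cost and the
# driving speeds below are assumptions consistent with the "several tenths
# of a second" figure in the article, not data from the study itself.

MPH_TO_MPS = 0.44704  # miles per hour -> metres per second

def distance_during_lapse(speed_mph, lapse_s=0.5):
    """Metres a car travels while the driver's attention is elsewhere."""
    return speed_mph * MPH_TO_MPS * lapse_s

def time_lost_switching(switches, cost_per_switch_s=0.4):
    """Seconds lost in a session of repeated back-and-forth task switching."""
    return switches * cost_per_switch_s

if __name__ == "__main__":
    for mph in (30, 65):
        print(f"At {mph} mph, a 0.5 s lapse covers {distance_during_lapse(mph):.1f} m")
    # e.g. toggling between email and a report 120 times in a workday:
    print(f"120 switches lose roughly {time_lost_switching(120):.0f} seconds")

Even at city speeds, half a second of inattention carries a car several metres, which is exactly the article's point about cell phones and driving.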

Understanding executive mental control may help solve "fundamental problems," says Meyer, "associated with the design of equipment and human-computer interfaces for vehicle and aircraft operation, air traffic control, and many other activities in which people must monitor and manipulate the environment through technologically advanced devices." The research may also aid in personnel selection (given individual differences in executive control), training, assessment and diagnosis of brain-damaged patients (given advances in brain imaging and mapping), rehabilitation, and formulation of government and industrial regulations and standards. In addition, results from the study of executive control may foster a more general understanding of how the brain and human consciousness normally work.

Article: "Executive Control of Cognitive Processes in Task Switching," Joshua S. Rubinstein, U.S. Federal Aviation Administration, Atlantic City, N.J.; David E. Meyer and Jeffrey E. Evans, University of Michigan, Ann Arbor, Mich., Journal of Experimental Psychology - Human Perception and Performance, Vol 27. No.4

Joshua Rubinstein can be reached by phone at (609) 485-4463. David Meyer can be reached by phone at (734) 763-1477.

Full text of the article is available from the APA Public Affairs Office and at
http://www.apa.org/journals/releases/xhp274763.pdf.

Cost of living the "good life" in the US

The following Yahoo article outlines the cost of living an upper-middle class life in various parts of the US, particularly the popular urban areas. While not entirely scientific, the article's estimates provide a generally idea of the cost of living the "good life" in the US today. As of 2003, only 1.4% of US households made $250k or more (pre-tax household income). According to the article, a household needs to have $250k-$275k of after-tax income to live the good life in many of the popular cities in the US. I don't have any solid data but my sense is that the number of people enjoying the good life in the US has been steadily declining over the last few decades. One reason for the decline appears to be the cost of housing, which is uniformly high in many of the affluent neighborhoods throughout the US.

http://biz.yahoo.com/special/live05.html
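As a rough sanity check on those numbers, here is a minimal sketch in Python. The 35% combined effective tax rate is my own assumption for illustration, not a figure from the article.

# Back-of-the-envelope check of the post's claim. The 35% combined
# effective tax rate is an assumption, not a number from the article.

def pretax_needed(after_tax, effective_tax_rate=0.35):
    """Pre-tax income required to clear a given after-tax target."""
    return after_tax / (1 - effective_tax_rate)

for target in (250_000, 275_000):
    print(f"${target:,} after tax requires about ${pretax_needed(target):,.0f} pre-tax")

# Output: $250,000 -> ~$384,615; $275,000 -> ~$423,077.

Since only about 1.4% of households earned even $250k pre-tax in 2003, the share that could fund this lifestyle out of income alone would be considerably smaller than 1.4% under this assumption.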

Tuesday, August 02, 2005

Trinkets, knick-knacks, gewgaws?

Everyone needs something useless, and plenty of it. Wear the 100% cotton trappings of corporate enslavement with pride, and drink liberally from the handled repository of indentured servitude.

---

The $16 Billion Opportunity in—Tchotchkes?

Former venture capitalist Jerry McLaughlin co-founded Branders.com to explore the online opportunity to sell tchotchkes—those coffee mugs, T-shirts, and other give-aways favored by conference organizers. Could he build a better business model?

From Strategy & Innovation.

by Matthew Q. Christensen

Most people wouldn’t leave a leading Silicon Valley venture capital firm to work at an online start-up that specializes in selling paperweights, mugs, mouse pads, and T-shirts featuring corporate logos.

But that’s exactly what Jerry McLaughlin did in 1999 when he left Altos Ventures, with its prestigious Sand Hill Road address, to co-found Branders.com. What the former venture capitalist saw, and what most people missed, was a compelling opportunity to disrupt the promotional products business.

According to the Promotional Products Association International, promotional products—items that companies hand out as premiums or as marketing mementos at conferences and conventions, and which are affectionately referred to by customers and beneficiaries as tchotchkes—constituted a $16.3 billion industry in 2003. A quick look at the North American promotional products industry shows that it is highly fragmented and made up of a network of small to midsize companies specializing in one piece of the promotional products supply chain.

At the heart of this fragmented network are two tiers of distributors. At one end are 20,150 small-scale distributors with less than $2.5 million in revenue; at the other end are 815 larger distributors with $2.5 million or more in revenue. Most distributors get their products from the approximately 4,000 U.S. wholesalers that import their wares from a variety of manufacturers in Asia.

Historically, promotional products have been sold by individual salespeople who cover a specified territory and function almost like independent contractors. Although the distributors they work for might set them up with an office, a desk, and a phone, the compensation they receive is based largely on commissions. Typically, distributors don’t offer employee benefits or health insurance.

In general, a traditional promotional products salesperson needs approximately eighteen months to build up a respectable roster of customers. When a salesperson has successfully grown sales within an assigned geographic area, he tends to turn around and demand higher commissions from the distributor. If the distributor refuses to ante up, there’s little stopping the salesperson from taking his customer list and going to work for a competitor.

Customers generally don’t develop much loyalty to a specific distributor because the products offered by competing distributors are essentially undifferentiated across the industry. Rather, the charm and charisma of a salesperson is often the critical sales driver.

Higher customer-retention levels, lower compensation costs

McLaughlin believed there was an opportunity within this framework for someone to build a large company that could handle more than just distribution, retain customers better, use its size to leverage economies of scale with wholesalers, and perhaps pass along some of those savings to customers.

He sought to address the challenge of customer retention by building a different type of relationship with customers. With the Internet boom at its height, McLaughlin hit upon the idea of allowing customers to interact with the company via the Internet rather than through a corps of salespeople. This, he reasoned, would solve the problem of salespeople defecting to competitors and taking their customers with them.

The online business model brought added bonuses as well. Sales force compensation eats up 18 percent to 20 percent of traditional distributors’ revenues. McLaughlin figured that, if the Internet were the sales interface, compensation costs would decrease and the company could sell products at lower prices than its competitors, even while operating at slightly better margins.
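A rough sketch of that margin logic, in Python. The 18 to 20 percent sales-compensation share comes from the article; the 8 percent online-channel cost and the 75 percent share for all other costs are invented for illustration.

# Illustrative arithmetic only. The 19% figure is the midpoint of the
# article's 18-20% sales-compensation range; the 8% online-channel cost
# and 75% other-cost share are assumptions, not Branders.com's actual numbers.

def operating_margin(channel_cost_pct, other_costs_pct=0.75):
    """Fraction of revenue left after channel costs and all other costs."""
    return 1.0 - channel_cost_pct - other_costs_pct

traditional = operating_margin(0.19)  # commissioned sales force
online = operating_margin(0.08)       # hypothetical web + call-center channel

print(f"Traditional distributor margin: {traditional:.0%}")  # ~6%
print(f"Online model margin:            {online:.0%}")       # ~17%

Under these assumptions, an online seller could undercut traditional distributors on price by several percentage points and still end up with the "slightly better margins" the article describes.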

What’s more, he reasoned, online operations would speed ordering and delivery.

Another unique advantage was Branders.com’s ability to sell from its own catalog. Products usually moved across the industry through a cumbersome and time-consuming sales process that offered few benefits to distributors. A salesperson would show customers a variety of manufacturers’ catalogs, each of which offered slightly different variations on the same theme—say, a mouse pad or a bag with the customer’s logo on it.

Although this gave customers plenty of options, it meant that distributors could not concentrate their sales with a few key suppliers to negotiate for bulk discounts from manufacturers.

Because Branders.com’s Web site functions as its catalog, the company determines ahead of time which items it wants to feature, and then it concentrates sales with its key suppliers. This increases Branders.com’s ability to guarantee a certain volume of sales to wholesalers while allowing the company to garner bulk discounts. Any savings that result can then be passed along to customers. And customers get their orders more quickly, too.

Navigating rough spots, learning along the way

Operating completely online might seem like an ideal business model, but Branders.com has had to navigate some rough spots since its beginnings.

McLaughlin initially assumed that customers could get what they needed exclusively through the Web site. During the first several months of operation, however, very few registered visitors actually became customers, despite numerous site visits.

Using the small team he already had in place to handle customer service, McLaughlin contacted people who had registered on the site to ask why they hadn’t bought anything from the company. The calls were a revelation. Because potential customers were used to some form of human interaction, most people were turned off by the lack of personal contact.

So McLaughlin added call centers to help take customer orders. Though the cost of doing business increased, the company’s interface with its customers remained tightly controlled. Today, 15 percent of Branders.com’s customers place their orders via the Web site alone. The rest use the phone center sales force.

Having learned some interesting lessons and applied them to enhance the company’s business model, McLaughlin once again believes that Branders.com is poised for new growth. The company currently employs 225 people; has 30,000 customers; is profitable; and is growing at a rate of nearly 50 percent per year.

Going forward, the challenge for Branders.com—and for McLaughlin—will be to continue to execute according to the principles of disruptive innovation. Although the company’s investors want rapid growth, customer-acquisition costs could put Branders.com’s low-price business model in jeopardy. McLaughlin and his staff will need to figure out how to keep customer-acquisition costs low even as they adhere to one of the basic principles of disruption: patience for growth and impatience for profit.

Reproduced with permission from "Transforming Tschotschkes," from Strategy and Innovation, Vol. 3, No. 3, May/June 2005.

From InfoWorld: Google now a hacker's tool.

link to original article.

Everything that has great capacity to be used for good also has great potential to be exploited for evil. Pity those naive souls who would dare to think that they can will it otherwise.

Monday, August 01, 2005

The brief would be nothing without its length.

"The legal-size legal pad has been under attack since as early as 1982, when then Chief Justice Warren Burger banished legal-size documents from federal courts. One informal survey estimated Burger's move saved almost $16 million through more efficient use of storage space."

"AMPAD maintains four factories in different locations, but the Holyoke facility was its first, and it is the site where the first legal pad was ruled, cut, and stitched. (This historical distinction was apparently not enough to save the plant, however: AMPAD will close the Holyoke facility within the year and open one in Matamoros, Mexico.) "

---

From Legal Affairs.

link.

Old Yeller

The illustrious history of the yellow legal pad.

By Suzanne Snider

ON A MONDAY AFTERNOON LAST SUMMER in the town of Hastings, Minn., David Norman Wigen parked his white pickup truck near the intersection of Fourth and Vermillion Streets, walked into the Wells Fargo branch there, and attempted to stage a heist. The 49-year-old construction worker handed a bank teller a curt, crumpled note: "Money now, do not make me hurt someone!" The teller handed him $260. Wigen fled with the money and within the next five hours bought some methamphetamine, confessed his crime to his sister, and was arrested by an off-duty police officer. Other than its clumsiness, Wigen's failed robbery is remarkable for one detail: the medium he selected to convey his instructions to the teller. His note had been scrawled in pencil on a sheet from a yellow legal pad.

Once used only by law students and lawyers, the yellow legal pad is now employed to a degree unrivaled in stationery. "End career as a fighter," President Richard Nixon wrote on a legal pad in August 1974. Five days later, on the top of another one, he scratched, "Resignation Speech." Jeff Tweedy, front man for the rock band Wilco, writes his songs on a legal pad. Jim Harrison, the laureate of the untamed heart, wrote Legends of the Fall on legal pads; Elmore Leonard writes his crime novels on them. Nonfiction criminals, it appears, are fond of them, too. How did they get so popular? And how so yellow?

RECENTLY, MIKE WYSZYNSKI, THE PLANT MANAGER of American Pad & Paper Company's factory in Holyoke, Mass., walked between piles of paper stacked over four feet high. He stopped by several large rolls of yellow paper standing on their ends. A workhorse of a machine was busy feeding a swath of yellow paper from one of these rolls, mechanically ruling the paper with calibrated pins dipped in blue ink. The oversize page would ultimately be made into 25 5-inch-by-8-inch tablets, known in the industry as "Junior" legal pads.

AMPAD (as the company is known) manufactures legal pads under its own name, but it also makes pads that are later stamped with other brand names, like Staples and Wal-Mart. AMPAD maintains four factories in different locations, but the Holyoke facility was its first, and it is the site where the first legal pad was ruled, cut, and stitched. (This historical distinction was apparently not enough to save the plant, however: AMPAD will close the Holyoke facility within the year and open one in Matamoros, Mexico.)

In 1888, Thomas W. Holley, a 24-year-old paper mill worker in Holyoke, had an idea for how to use the paper scraps, known as sortings, discarded by the mill. Sortings were anything trimmed away as scrap or considered of lesser quality than the writing paper eventually packaged and sold. Holley's notion was to bind the scraps into pads that could be sold at a cut rate. Convinced he had a winning idea, he founded his own company to collect the sortings from local mills (Holyoke was then the papermaking capital of the world) and began churning out bargain-price pads.

The legal pad's margins, also called down lines, are drawn 1.25 inches from the left edge of the page. (This is the only requirement for a pad to qualify as a legal pad, though the iconic version has yellow paper, blue lines, and a red gummed top.) Holley added the ruling that defined the legal pad in the early 1900s at the request of a local judge who was looking for space to comment on his own notes.

That, at least, is the story AMPAD tells. Holley never filed a patent for his invention; no other company in the legal pad market has ever come forward with a competing claim. Like many origin myths, AMPAD's answers some essential questions but leaves others unresolved. It doesn't, for instance, explain the emergence of yellow as the standard legal pad color. Holley is thought to have created white pads, not yellow ones. Yellow paper is about 10 to 20 percent more expensive than white paper, due to the cost of dye and the additional cleanup the dyeing process necessitates, an extravagance the thrifty Holley would likely have dismissed.

EXPLAINING THE ORIGINS OF THE YELLOW LEGAL PAD is as difficult as explaining consumers' attraction to it. But the attraction does seem to be there: The yellow-to-white sales ratio can be as high as 2 to 1, as it is at University Stationery in New York City, near New York University and The New School University.

Some believe that writing on a yellow pad is easier to read than writing on a white pad. But Israel Abramov, a professor of psychology at Brooklyn College and a specialist in color vision, dismisses the theory. Readability, he says, is more a matter of contrast—how the color of the ink interacts with the color of the paper—than of the paper color alone. The highest contrast scenario is black ink on white paper, though Abramov concedes that in specific conditions, yellow paper might be preferable in terms of readability. "If the light is too intense, the paper can be glaring, and yellow cuts down the glare," he said.

Abramov prefers a psychological to a physiological explanation for yellow's predominance. "White paper that sits around starts to look yellow and old," he said. "I heard of one professor who used yellow paper for his lecture notes because he didn't want his students to know how old the notes were."

Legal pad enthusiasts do seem to have a psychological connection to their writing tablets. Philip Moustakis, a mid-level associate at the New York firm of Curtis, Mallet-Prevost, Colt & Mosle, uses one legal pad per case, and prefers yellow over white pads and a faint, as opposed to a dark, rule. "The darker lines intrude upon my thinking—they're yelling back at you," he explained. "You want a more subtle line."

Moustakis is a connoisseur. Firms that are big enough to order their pads in significant bulk qualify to have their firm name stamped on the pads' binding. (At AMPAD, a law firm must order a minimum of 790 pads to qualify for the stamped insignia.) Moustakis collects the blank pads of competing firms. (His collection, once larger, is now down to two pristine pads; he doesn't just collect them, he uses them.) He said he picks them up at conferences with other law firms, and at other events where large stacks are left lying around.

Iris Harris, the assistant director of purchasing at Mayer, Brown, Rowe & Maw, says that her firm no longer leaves stacks of pads lying around on conference tables. On average, her firm consumes 1,200 legal-size legal pads, 12,000 letter-size legal pads, and 4,200 Junior-size legal pads a year. Her firm switched from yellow to white pads four years ago. "Yellow wasn't recyclable," she said. Today, the standard pad at Mayer, Brown is a white legal pad, with a blue chipboard binding and silver stamp bearing the firm's name. And the pads are letter-size, not legal-size.

The legal-size legal pad has been under attack since as early as 1982, when then Chief Justice Warren Burger banished legal-size documents from federal courts. One informal survey estimated Burger's move saved almost $16 million through more efficient use of storage space. Several states followed the federal government's lead; in Florida, a group appeared called "Eliminate Legal Files," or ELF.

The movement reveals perhaps the strangest element of the legal pad's popularity: Despite their loyalty to the pad, its enthusiasts seem never to be quite satisfied with its constitution and are forever seeking the modification that will perfect it. One law professor likes his margin on the right; another prefers a margin 4 to 6 inches from the left side of the page. Jonathan Dee is a novelist who has written the first drafts of four novels on legal pads. He said that he dreams of a 14-inch pad that has the spiral binding of an 11-inch variation. "There's something about the inventiveness of the first draft that requires I go fast," Dee said. "The quiet of this arrangement is very important."...

Wired News: Router Flaw Is a Ticking Bomb

The ultimate terrorist event will end not with a bang, but a whimper. If Cisco's IOS really is vulnerable, and the bad guys exploit this, they could bring down the Internet. The thing I didn't appreciate is--how do you fix it? You can't download a patch to a router if the Internet isn't working. Mailing a CD with the patch wouldn't do you any good either--most routers don't have CD drives. Looking at the problem now, the risk seems obvious, but also kind of scary. I'm not sure of the ways that people could die if the entire network went down, but I've got to imagine that hospitals would be in complete chaos, ambulance dispatch would be a mess, etc. Aside from the human tragedy, commerce would be brought to its knees--for an indefinite period of time. It's not like 9/11, where the FAA could just decide a week later that it's OK to fly again. More troublesome is the notion that as time passes, the chances of this risk being realized are only increasing.

virtual laboratory for human behavior

ray talked about this trend a couple of years ago. a great laboratory for human behavior. reality is a virtual representation, so what's the difference between reality and virtual reality anyway?


Economists to explore world of online games
Researchers could assess players' response to change

Tom Abate, Chronicle Staff Writer

Now, some economists and social scientists say these Internet worlds could be a new type of laboratory to study economic behavior, such as how consumers respond to inflation.

"I think there's an incredible opportunity here to run controlled experiments on economic questions," said Edward Castronova, an economist at Indiana University at Bloomington.

Castronova co-founded Terra Nova, a group blog that explores the technological, commercial and social dimensions of these virtual worlds.

Dimitri Williams, a communications professor at the University of Illinois at Urbana-Champaign and fellow Terra Nova contributor, said scientists could subtly alter the software that governs these worlds, tweaking the rules of the games, then measuring how these changes affect behavior.

One of the most sophisticated of these artificial worlds is Second Life. Based in San Francisco, it is not so much a game as it is a cross between an Internet chat room and an unscripted movie set.

The real people who play Second Life create proxy characters, called avatars, that can mimic human behaviors like flirting, gossiping or showing off. Some players, through their avatars, even make imaginary goods that they sell -- sometimes for real money.

Virtual worlds like Second Life, for it is only one of hundreds of such realms, began in the 1990s as places for Internet users to act out dungeon-and-dragon fantasies with other people.

"This is a social science petri dish," Williams said. "You can see everything, and they don't know you're watching."
If this virtual world stuff sounds like science fiction and the scientists like mythic gods toying in human affairs, a little background will help bring this online-play phenomenon into focus.

Online games emerged in the late 1980s and early 1990s as the explosion of computer networks made it possible for many, widely dispersed individuals to take part.

This genre of games is called MMOGs, or massively multiplayer online games. More than 100 MMOGs exist today. The number of players has been estimated at between 2 million and 10 million.

The best known of these fantasy worlds include Lineage, World of Warcraft and EverQuest. Second Life is smaller and newer, more lifelike and less gamelike. But as is the case with these other online realms, Second Life has its own currency, the Linden, which participants can use to buy and sell goods.

It was this trade in virtual goods that caught the attention of economists, who were fascinated to discover several years ago that much of the activity in these online worlds consists of commerce, not fighting.

What's more, players are often willing to spend real dollars to acquire the totems that confer status, power or bragging rights. In the second quarter of 2005, for instance, Second Lifers exchanged goods and services worth about $4.2 million in real money, according to Linden Lab, the San Francisco company that created this virtual world.

Ken Selden, a Los Angeles screenwriter, helped pioneer currency exchanges between the real and virtual worlds in the mid-1990s when he was playing an online game called Gemstone.

That game was more in the sword-and-sorcery tradition. Players advanced by finding treasures, which they used to buy weapons, rising in proficiency to gain rank in that feudal system. But some players wanted to flatten the learning curve by paying real money to buy artifacts that got them ahead. So Selden started buying these tokens of the game and reselling them, using dollars.

"In my top year of operation, I probably made half a million dollars in profit," he said. "I got to be king and J.P. Morgan and Alan Greenspan all at the same time."

Selden has left the virtual trading business and instead helps design games. But the trend he helped pioneer, selling virtual goods for real cash, continues on a huge scale.

MMOG players spent an estimated $880 million for online goods and services in 2004, said Steve Salyer, president of IGE, a Los Angeles firm whose 250 employees help players buy, sell and trade across the real and imaginary worlds.

"This is commerce in places that don't really exist, over items that don't really exist, in tangible currency," Salyer said.

But that doesn't mean the goods aren't real to the players. Salyer recalls how outraged he felt when his castle in the online realm Dark Age of Camelot was ransacked by a hacker who looted all his virtual possessions. "These cyber worlds are extensions of what we really value," he said.

It was Castronova, the Indiana University professor, who focused scientific attention on the economics of online worlds. In 2001, he wrote a scholarly article about the commercial life of Norrath, the online home world of the popular game EverQuest.

Castronova discovered that an unofficial trading network had grown in which gamers traded platinum pieces -- the basic units of Norrathian currency -- for dollars. Moreover, this funny money had a real world exchange rate of a little more than a U.S. penny, which was "higher than the yen and the lira." When players left Norrath and auctioned off their avatars, these used personas fetched prices between $500 and $1,000.
When Castronova wrote his paper, Sony Online Entertainment, which created EverQuest and its graphic-rich sequel EverQuest II, tried to discourage this unsanctioned foreign exchange. But Chris Kramer, spokesman for Sony Online in San Diego, said the company got so many angry calls from gamers who went outside the game environment and got fleeced in exchanges that it decided very recently to allow sanctioned trading, for which it takes a cut.

"If you can't beat it, own it," Kramer said.

As the economic activity of these games continues to grow, Castronova wants to do more than simply observe the commercial actions of gamers, as he did when he first entered Norrath.

He wants to tinker with the economic rules of the game in a way that would allow him to draw cause-and-effect relationships between changes in rules and changes in behavior. This is possible, he said, because of the way these virtual realms are embodied in server computers.

Castronova explained that with most current games each server can accommodate only a finite number of players. As more players join the virtual world, the game host adds new servers and starts filling them with players. The result is that a game with 400,000 players might actually consist of 40 servers, each with 10,000 players -- 40 parallel worlds, each with the same economic rules.

What Castronova would like to do, but so far hasn't accomplished, is gain access to the software that sits on those servers, changing some while leaving others alone.

For instance, he could create a strong central bank to control inflation in one set of servers. A second set could have a central bank controlled by that world's political leader. A third set might have no central bank. All the other rules of the game would be left alone.

Assuming a random distribution of players to the various servers, differences in economic outcomes and behaviors should be traceable back to the changes, he said.
"Instead of theorizing about central banks, we could play out our economic policy scenarios through these games,'' Castronova said.
His notion has elicited cautious interest but no ringing endorsements from other economists. Harvard University's Alvin Roth is noted for lab-based economic experiments, in which subjects are recruited to take part in carefully controlled scenarios designed to answer straightforward questions, such as what kinds of rules encourage or discourage last-minute bidding in online auctions.

Roth said researchers must go to great pains to avoid prejudicing the results of these experiments through the selection of subjects or the design of the tests. He called Castronova's proposal attractive because players come to online games willingly and because the underlying economics could be changed in invisible ways that are less likely to alter behavior.

"But you have to worry about the fact that these are games, and not everybody plays them," Roth said. In short, are the economic behaviors of people who play online games the same as those of people who watch television or engage in other diversions?

Meanwhile, on Second Life's servers in San Francisco, a different sort of open-ended economic experiment is under way. Williams, the communications professor from Illinois, said Second Life's underlying software gives knowledgeable residents of this virtual world the ability to create just about anything they can imagine.

"This is the single best suite of tools that has been given to users to create content that the world has ever seen," he said.

The results are evident as Second Life's founder, Philip Rosedale, shows off some of what the world's 39,000 residents have created since the software started to take shape a little under three years ago. There are flying cars, ray guns that sell for $5 each, clothing that glitters, and avatars who make and sell virtual pets to other avatars who pay for these imaginary playthings.

"We're not going to be a game," Rosedale said. "We're literally going to put people down on the 'land' and tell them they can build things."

He said some Second Lifers have ventured beyond the environment's mainland to form semi-independent colonies with their own rules and social structures. Some colonies have strong leaders, others are based on languages, while others still are organized around lifestyle choices -- such as the so-called furries, who choose animal avatars.

Rosedale said so many social scientists have logged on to Second Life to observe these virtual communities that it has provoked debate. "People say they don't want to live inside a fishbowl, but of course in a sense they do," he said. The compromise has been to set up ethics guidelines, borrowed from the social scientists, to govern what these observers can and can't do.

Where all this is heading, no one can say, but Rosedale says that insofar as people have always learned by observing the behavior of others, we may now have a social feedback system that moves faster than the speed of history.