Saturday, July 08, 2006

Does money buy happiness?

Science 30 June 2006:
Vol. 312. no. 5782, pp. 1908 - 1910
DOI: 10.1126/science.1129688

Perspective

Would You Be Happier If You Were Richer? A Focusing Illusion

Daniel Kahneman,1 Alan B. Krueger,1,2* David Schkade,3 Norbert Schwarz,4 Arthur A. Stone5

The belief that high income is associated with good mood is widespread but mostly illusory. People with above-average income are relatively satisfied with their lives but are barely happier than others in moment-to-moment experience, tend to be more tense, and do not spend more time in particularly enjoyable activities. Moreover, the effect of income on life satisfaction seems to be transient. We argue that people exaggerate the contribution of income to happiness because they focus, in part, on conventional achievements when evaluating their life or the lives of others.

1 Princeton University, Princeton, NJ 08544, USA.
2 National Bureau of Economic Research, Cambridge, MA 02138, USA.
3 Rady School of Management, University of California, San Diego, San Diego, CA 92093, USA.
4 Department of Psychology, University of Michigan, Ann Arbor, MI 48106, USA.
5 Stony Brook University, Stony Brook, NY, 11794, USA.

* To whom correspondence should be addressed. E-mail: akrueger@princeton.edu

Most people believe that they would be happier if they were richer, but survey evidence on subjective well-being is largely inconsistent with that belief. Subjective well-being is most commonly measured by asking people, "All things considered, how satisfied are you with your life as a whole these days?" or "Taken all together, would you say that you are very happy, pretty happy, or not too happy?" Such questions elicit a global evaluation of one's life. An alternative method asks people to report their feelings in real time, which yields a measure of experienced affect or happiness. Surveys conducted in many countries indicate that, on average, reported global judgments of life satisfaction or happiness have changed little over the last four decades, in spite of large increases in real income per capita. Although reported life satisfaction and household income are positively correlated in a cross section of people at a given time, increases in income have been found to have mainly a transitory effect on individuals' reported life satisfaction (1–3). Moreover, the correlation between income and subjective well-being is weaker when a measure of experienced happiness is used instead of a global measure.

When people consider the impact of any single factor on their well-being—not only income—they are prone to exaggerate its importance. We refer to this tendency as the focusing illusion. Standard survey questions on life satisfaction by which subjective well-being is measured may induce a form of focusing illusion, by drawing people's attention to their relative standing in the distribution of material well-being and other circumstances. More importantly, the focusing illusion may be a source of error in significant decisions that people make (4).

Evidence for the focusing illusion comes from diverse lines of research. For example, Strack and colleagues (5) reported an experiment in which students were asked: (i) "How happy are you with your life in general?" and (ii) "How many dates did you have last month?" The correlation between the answers to these questions was –0.012 (not statistically different from 0) when the questions were asked in that order, but the correlation rose to 0.66 when the order was reversed with another sample of students. The dating question evidently caused that aspect of life to become salient and its importance to be exaggerated when the respondents encountered the more general question about their happiness. Similar focusing effects were observed when attention was first called to respondents' marriage (6) or health (7). One conclusion from this research is that people do not know how happy or satisfied they are with their life in the way they know their height or telephone number. The answers to global life satisfaction questions are constructed only when asked (8), and are, therefore, susceptible to the focusing of attention on different aspects of life.

To test the focusing illusion regarding income, we asked a sample of working women to estimate the percentage of time that they had spent in a bad mood in the preceding day. Respondents were also asked to predict the percentage of time that people with pairs of various life circumstances (Table 1), such as high- and low-income, typically spend in a bad mood. Predictions were compared with the actual reports of mood provided by respondents who met the relevant circumstances. The predictions were biased in two respects. First, the prevalence of bad mood was generally overestimated. Second, consistent with the focusing illusion, the predicted prevalence of a bad mood for people with undesirable circumstances was grossly exaggerated.


Table 1. The focusing illusion: Exaggerating the effect of various circumstances on well-being. The question posed was "Now we would like to know overall how you felt and what your mood was like yesterday. Thinking only about yesterday, what percentage of the time were you: in a bad mood ___%, a little low or irritable ___%, in a mildly pleasant mood ___%, in a very good mood ___%." Bad mood reported here is the sum of the first two response categories. A parallel question was then asked about yesterday at work. Bad mood at work was used for the supervision and fringe benefits comparisons. Data are from (14). Reading down the Actual column, sample sizes are 64, 59, 75, 237, 96, 211, 82, 221, respectively; reading down the Predicted column, sample sizes are 83, 83, 84, 84, 83, 85, 85, 87, respectively. Predicted difference was significantly larger than actual difference by a t test; see asterisks.

Percentage of time in a bad mood

Variable                  Group                  Actual  Predicted  Actual difference  Predicted difference
Household income          <$20,000                 32.0       57.7               12.2               32.0***
                          >$100,000                19.8       25.7
Woman over 40 years old   Alone                    21.4       41.1               -1.7               13.2***
                          Married                  23.1       27.9
Supervision at work       Definitely close         36.5       64.3               17.4               42.1***
                          Definitely not close     19.1       22.3
Fringe benefits           No health insurance      26.6       49.7                4.5               30.5***
                          Excellent benefits       22.2       19.2

*** P < 0.001.
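
The significance tests behind Table 1 compare predicted with actual differences in mean bad-mood time by t test. A rough sketch of such a computation, using Welch's two-sample t statistic; the per-respondent data are not reproduced in the article, so the variances below are invented for illustration:

```python
import math

# Welch's t statistic for two independent samples with unequal variances:
# t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2)
def welch_t(mean1, var1, n1, mean2, var2, n2):
    return (mean1 - mean2) / math.sqrt(var1 / n1 + var2 / n2)

# Hypothetical summary numbers: a predicted bad-mood difference of 32.0
# points (n = 83) vs. an actual difference of 12.2 points (n = 64), with
# made-up within-group variances of 400 in each sample.
t = welch_t(32.0, 400.0, 83, 12.2, 400.0, 64)
```

A large t, referred to the appropriate degrees of freedom, corresponds to the asterisks in the table.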

The focusing illusion helps explain why the results of well-being research are often counter-intuitive. The false intuitions likely arise from a failure to recognize that people do not continuously think about their circumstances, whether positive or negative. Schkade and Kahneman (9) noted that, "Nothing in life is quite as important as you think it is while you are thinking about it." Individuals who have recently experienced a significant life change (e.g., becoming disabled, winning a lottery, or getting married) surely think of their new circumstances many times each day, but the allocation of attention eventually changes, so that they spend most of their time attending to and drawing pleasure or displeasure from experiences such as having breakfast or watching television (10). However, they are likely to be reminded of their status when prompted to answer a global judgment question such as, "How satisfied are you with your life these days?"

The correlation between household income and reported general life satisfaction on a numeric scale (i.e., global happiness as distinct from experienced happiness over time) in U.S. samples typically ranges from 0.15 to 0.30 (11). Table 2 illustrates the relation between global happiness and income in 2004, using data from the General Social Survey (GSS). Those with incomes over $90,000 were nearly twice as likely to report being "very happy" as those with incomes below $20,000, although there is hardly any difference between the highest income group and those in the $50,000 to $89,999 bracket.


Table 2. Distribution of self-reported global happiness by family income, 2004. The GSS question posed was "Taken all together, how would you say things are these days—would you say that you are very happy, pretty happy, or not too happy?" Sample size was 1173 individuals.

Percentage indicating global happiness at family income of:

Response         Under $20,000  $20,000-$49,999  $50,000-$89,999  $90,000 and over
Not too happy             17.2             13.0              7.7               5.3
Pretty happy              60.5             56.8             50.3              51.8
Very happy                22.2             30.2             41.9              42.9

There are reasons to believe that the correlation between income and judgments of life satisfaction overstates the effect of income on subjective well-being. First, increases in income have mostly a transitory effect on individuals' reported life satisfaction (2, 12). Second, large increases in income for a given country over time are not associated with increases in average subjective well-being. Easterlin (1), for example, found that the fivefold increase in real income in Japan between 1958 and 1987 did not coincide with an increase in the average self-reported happiness level there. Third, although average life satisfaction in countries tends to rise with gross domestic product (GDP) per capita at low levels of income, there is little or no further increase in life satisfaction once GDP per capita exceeds $12,000 (3).

Fourth, when subjective well-being is measured from moment to moment—either by querying people in real time with the Ecological Momentary Assessment (EMA) technique (13) or by asking them to recall their feelings for each episode of the previous day with the Day Reconstruction Method (DRM) (14)—income is more weakly correlated with experienced feelings such as momentary happiness averaged over the course of the day (henceforth called duration-weighted or experienced happiness) than it is with a global judgment of life satisfaction or overall happiness, or with a global report of yesterday's mood (Table 3) (15, 16). This pattern is probably not a result of greater noise in the duration-weighted happiness measure than in life satisfaction (17). Other life circumstances, such as marital status, also exhibit a weaker correlation with duration-weighted happiness than with global life satisfaction.


Table 3. Correlations between selected life circumstances and subjective well-being measures. The question posed was "We would like to know how you feel and what mood you are in when you are at home. When you are at home, what percentage of the time are you in a bad mood ___%, a little low or irritable ___%, in a mildly pleasant mood ___%, in a very good mood ___%." The last two response categories were added together to obtain the percentage of time in a good mood. Duration-weighted "happy" is the average of each person's duration-weighted average rating of the feeling happy over episodes of the day, where 0 refers to "not at all" and 6 refers to "very much," and each individual's responses were weighted by the duration of the episode. Sample consists of 740 women from Columbus, Ohio, who completed the DRM in May 2005 (16).

Characteristic       Life satisfaction  Amount of day in good mood (%)  Duration-weighted "happy"
Household income               0.32***                         0.20***                       0.06
Married                        0.21***                         0.15***                       0.03
Years of education             0.16***                         0.13***                       0.03
Employed                       0.14***                         0.12**                        0.01
Body mass index               -0.13***                        -0.08*                        -0.06

* P < 0.05;

** P < 0.01;

*** P < 0.001.
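
The duration-weighted "happy" measure in Table 3 is simply a mean of the 0-to-6 happiness ratings in which each episode's rating counts in proportion to the episode's length. A minimal sketch, using a hypothetical day's worth of episodes rather than actual DRM data:

```python
# Duration-weighted happiness: average a day's 0-6 "happy" ratings,
# weighting each episode's rating by that episode's length in minutes.
def duration_weighted_happy(episodes):
    total_minutes = sum(minutes for minutes, _rating in episodes)
    return sum(minutes * rating for minutes, rating in episodes) / total_minutes

# Hypothetical episodes as (minutes, happy rating) pairs: a pleasant
# breakfast, a long mediocre workday, a good evening. The long stretch
# dominates the duration-weighted score.
day = [(90, 5), (480, 3), (120, 4)]
score = duration_weighted_happy(day)
```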

An analysis of EMA data also points to a weak and sometimes perverse relation between experienced affect and income. Specifically, we examined EMA data from the Cornell Work-Site Blood Pressure Study of 374 workers at 10 work sites, who were queried about the intensity of various feelings on a 0-to-3 scale roughly every 25 min during an entire workday (18). The correlation between personal income and the average happiness rating during the day was just 0.01 (P = 0.84), whereas family income was significantly positively correlated with ratings of angry/hostile (r = 0.14), anxious/tense (r = 0.14), and excited (r = 0.18). Thus, higher income was associated with more intense negative experienced emotions and greater arousal, but not greater experienced happiness.
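
The correlations just reported are ordinary Pearson correlations between a worker's income and his or her day-averaged affect rating. A self-contained sketch, written out explicitly; the data below are hypothetical, not the Cornell study's:

```python
import math

# Pearson correlation coefficient, written out for clarity:
# r = cov(x, y) / (sd(x) * sd(y))
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (income, mean daily happiness) pairs; a near-zero r would
# echo the r = 0.01 reported for the work-site sample.
incomes = [20, 35, 50, 80, 120]     # $ thousands
happy = [2.1, 1.8, 2.0, 1.9, 2.1]   # mean 0-3 happiness rating over the day
r = pearson_r(incomes, happy)
```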

Why does income have such a weak effect on subjective well-being? There are several explanations, all of which may contribute to varying degrees. First, Duesenberry (19), Easterlin (2), Frank (20), and others have argued that relative income rather than the level of income affects well-being—earning more or less than others looms larger than how much one earns. Indeed, much evidence indicates that rank within the income distribution influences life satisfaction (21–23). As society grows richer, average rank does not change, so the relative income hypothesis could explain the stability of average subjective well-being despite national income growth. The importance placed on relative income may also account for the stronger correlation between income and global life satisfaction than between income and experienced affect, as life satisfaction questions probably evoke a reflection on relative status that is not present in moment-to-moment ratings of affect. The relative income hypothesis cannot by itself explain why a permanent increase in an individual's income has a transitory effect on her well-being, as relative standing would increase. However, the increase in relative standing can be offset by changes in the reference group: After a promotion, the new peers increasingly serve as a reference point, making the improvement relative to one's previous peers less influential (24).

Second, Easterlin (1, 2) argues that individuals adapt to material goods, and Scitovsky (25) argues that material goods yield little joy for most individuals. Thus, increases in income, which are expected to raise well-being by raising consumption opportunities, may in fact have little lasting effect because of hedonic adaptation or because the consumption of material goods has little effect on well-being above a certain level of consumption (26). Moreover, people's aspirations adapt to their possibilities and the income that people say they need to get along rises with income, both in a cross section and over time (27).

Finally, we would propose another explanation: As income rises, people's time use does not appear to shift toward activities that are associated with improved affect. Subjective well-being is connected to how people spend their time. In a representative, nationwide sample, people with greater income tend to devote relatively more of their time to work, compulsory nonwork activities (such as shopping and childcare), and active leisure (such as exercise) and less of their time to passive leisure activities (such as watching TV) (Table 4). The activities that higher-income individuals spend relatively more of their time engaged in are associated with no greater happiness, on average, but with slightly higher tension and stress. The latter finding might help explain why income is more highly correlated with general life satisfaction than with experienced happiness, as tension and stress may accompany goal attainment, which in turn contributes to judgments of life satisfaction more than it does to experienced happiness.


Table 4. How is time spent and do the activities bring happiness? Time allocation is weighted-average percentage of the nonsleep day for each sampled observation from the American Time-Use Survey (30). Weighted average of weekday (5 out of 7) and weekend (2 out of 7) is presented. Sample consists of 3917 men and 4944 women age 18 to 60. Last two rows were computed by authors from a DRM survey of 810 women in Columbus, Ohio, in May 2005; if multiple activities were performed during an episode, the activity refers to the one that was selected as "most important."

Family income / Gender   Active leisure  Eating  Passive leisure  Compulsory  Work and commute  Other

Men: time allocation (%)
<$20,000                            6.6     6.6             34.7        20.8              29.1    2.1
$20,000-$99,999                     8.1     7.2             26.4        21.8              35.4    1.1
$100,000+                          10.2     8.6             19.9        23.6              36.9    0.8

Women: time allocation (%)
<$20,000                            5.3     5.7             33.5        35.6              18.5    1.4
$20,000-$99,999                     7.5     6.7             23.8        34.3              26.7    1.0
$100,000+                           9.1     7.0             19.6        35.9              27.3    1.1

Women: feelings (0-6 scale)
Happy                              4.67    4.45             4.21        4.04              3.94   4.25
Tense/Stressed                     0.92    1.17             1.30        1.80              2.00   1.61
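
The time-allocation figures in Table 4 combine weekday and weekend diaries with weights of 5/7 and 2/7, as the caption describes. A one-function sketch of that weighting, with hypothetical input percentages:

```python
# Combine weekday and weekend shares of the nonsleep day, weighting
# weekdays by 5/7 and weekend days by 2/7 (as in the Table 4 caption).
def weekly_share(weekday_pct, weekend_pct):
    return (5 * weekday_pct + 2 * weekend_pct) / 7

# Hypothetical: 35% of weekday waking time at work, 14% on weekend days.
work_share = weekly_share(35.0, 14.0)
```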

The results in Table 4 also highlight the possible role of the focusing illusion. When someone reflects on how additional income would change subjective well-being, they are probably tempted to think about spending more time in leisurely pursuits such as watching a large-screen plasma TV or playing golf, but in reality they should think of spending a lot more time working and commuting and a lot less time engaged in passive leisure (and perhaps a bit more golf). By itself, this shift in time use is unlikely to lead to much increase in experienced happiness, although it could increase tension and one's sense of accomplishment and satisfaction.

Despite the weak relation between income and global life satisfaction or experienced happiness, many people are highly motivated to increase their income. In some cases, this focusing illusion may lead to a misallocation of time, from accepting lengthy commutes (among the worst moments of the day) to sacrificing time spent socializing (among the best moments of the day) (28, 29). An emphasis on the role of attention helps to explain both why many people seek high income—because their predictions, subject to the focusing illusion, exaggerate the resulting increase in happiness—and why the long-term effect of income gains becomes relatively small: attention eventually shifts to less novel aspects of daily life.


References and Notes

* 1. R. Easterlin, J. Econ. Behav. Organ. 27, 35 (1995).
* 2. R. Easterlin, "Building a better theory of well-being," Discussion Paper No. 742, IZA, Bonn, Germany, 2003.
* 3. R. Layard, Happiness: Lessons from a New Science (Penguin Press, London, 2005).
* 4. D. Gilbert, Stumbling on Happiness (Knopf, New York, 2006).
* 5. F. Strack, L. Martin, N. Schwarz, Eur. J. Soc. Psychol. 18, 429 (1988).
* 6. N. Schwarz, F. Strack, H. Mai, Public Opin. Q. 55, 3 (1991).
* 7. D. Smith, N. Schwarz, T. Roberts, P. Ubel, Qual. Res. 15, 621 (2006).
* 8. N. Schwarz, F. Strack, in Well-Being: The Foundations of Hedonic Psychology, D. Kahneman, E. Diener, N. Schwarz, Eds. (Russell Sage Foundation, New York, 1999), pp. 61–84.
* 9. D. Schkade, D. Kahneman, Psychol. Sci. 9, 340 (1998).
* 10. D. Kahneman, R. H. Thaler, J. Econ. Perspect. 20 (1), 221 (2006).
* 11. E. Diener, R. Biswas-Diener, Soc. Indic. Res. 57, 119 (2002).
* 12. B. Frey, A. Stutzer, Happiness and Economics: How the Economy and Institutions Affect Well-Being (Princeton Univ. Press, Princeton, NJ, 2002).
* 13. A. Stone, S. Shiffman, Ann. Behav. Med. 16, 199 (1994).
* 14. D. Kahneman, A. Krueger, D. Schkade, N. Schwarz, A. Stone, Science 306, 1776 (2004).
* 15. In general, we find that the retrospective report of mood on the previous day, which is a global evaluation, shares variance both with the global measures of life satisfaction and with disaggregated measures of emotional experience at particular times.
* 16. D. Kahneman, D. Schkade, C. Fischler, A. Krueger, A. Krilla, "A study of well-being in two cities," Discussion Paper No. 53, Center for Health and Wellbeing, Princeton, NJ, 2006.
* 17. We conducted a reliability study of the DRM that asked the same questions of 229 women two weeks apart, and found about the same two-week serial correlation in duration-weighted happiness as in life satisfaction for the respondents.
* 18. P. Schnall, J. Schwartz, P. Landsbergis, K. Warren, T. Pickering, Psychosom. Med. 60, 697 (1998).
* 19. J. Duesenberry, Income, Saving, and the Theory of Consumer Behavior (Harvard Univ. Press, Cambridge, MA, 1949).
* 20. R. Frank, Luxury Fever (Princeton Univ. Press, Princeton, NJ, 1999).
* 21. A. Clark, A. Oswald, J. Public Econ. 61, 359 (1996).
* 22. A. Ferrer-i-Carbonell, J. Public Econ. 89, 997 (2005).
* 23. E. Luttmer, Q. J. Econ. 120, 963 (2005).
* 24. W. Runciman, Relative Deprivation and Social Justice (Univ. of California Press, Berkeley, CA, 1966).
* 25. T. Scitovsky, The Joyless Economy (Oxford Univ. Press, Oxford, 1976).
* 26. S. Frederick, G. Loewenstein, in Well-Being: The Foundations of Hedonic Psychology, D. Kahneman, E. Diener, N. Schwarz, Eds. (Russell Sage Foundation, New York, 1999), pp. 302–329.
* 27. B. Van Praag, P. Frijter, in Well-Being: The Foundations of Hedonic Psychology, D. Kahneman, E. Diener, N. Schwarz, Eds. (Russell Sage Foundation, New York, 1999), pp. 413–433.
* 28. See (31) for evidence on the misallocation of commuting time and (14) on the hedonic experience of commuting and socializing.
* 29. It goes without saying that happiness is not the only measure of human welfare. Moreover, although income gains may not contribute very much to experienced happiness or life satisfaction, wealthier societies may well enjoy better health care, safer and cleaner environments, cultural benefits, and other amenities that improve the quality of life.
* 30. "Time-use survey—First results announced by Bureau of Labor Statistics," U.S. Department of Labor, USDL 04-1797 (http://www.bls.gov/).
* 31. A. Stutzer, B. Frey, "Stress that doesn't pay: The commuting paradox," Discussion Paper No. 127, IZA, Bonn, Germany, 2004.
* 32. The authors thank M. Connolly, M. Fifer, and A. Krilla for research assistance, and the Hewlett Foundation, the National Institute on Aging, and Princeton University's Woodrow Wilson School and Center for Economic Policy Studies for financial support.

How much do you have left?

Science 30 June 2006:
Vol. 312. no. 5782, pp. 1913 - 1915
DOI: 10.1126/science.1127488

Perspective

The Influence of a Sense of Time on Human Development

Laura L. Carstensen

The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.

Stanford University, Stanford, CA 94305–2130, USA.

E-mail: llc@psych.stanford.edu

Most scientists would agree that the explicit study of time falls in the purview of physics, yet interest in various aspects of time spans the natural and social sciences. Time is an integral part of virtually all psychological phenomena. From the sequencing of rewards involved in operant and classical conditioning to the flow of oxygen in the measurement of brain activation, time is built into most behavioral and psychological processes. Psychological science, however, has focused relatively little on the implications of our ability not only to monitor time but also to appreciate that time eventually runs out. I maintain that the subjective sense of remaining time has profound effects on basic human processes, including motivation, cognition, and emotion.

Although change over time is the basic foundation of developmental psychology, theoretical models of human development focus almost exclusively on the passage of time since birth. In child development, this marker has served scientists well. A substantial literature shows that chronological age is an excellent (albeit imperfect) predictor of cognitive abilities (1, 2), language (3), and sensorimotor coordination (4). At increasingly older ages, however, chronological age is a poorer predictor. Instead, increased heterogeneity or differentiation within samples is considered to be a cardinal feature of life-span development (5). Presumably, this is due primarily to differences in experiences and opportunities that individuals encounter over time. Chronic stress, level of education, close relationships, and social status all place individuals on very different developmental trajectories that affect not only day-to-day functioning but also health and longevity (6). Late in life, chronological age continues to provide a rough marker of accumulated life experience, but it loses the precision it holds in youth.

A second index of time becomes salient as people grow older, namely the subjective sense of remaining time until death. Although correlated with chronological age, this subjective sense of time gradually becomes more important than time since birth. Because goal-directed behavior relies inherently on perceived future time, the perception of time is inextricably linked to goal selection and goal pursuit. Socioemotional selectivity theory (SST), a life-span theory of motivation, is grounded fundamentally in the human ability to monitor time, to adjust time horizons with increasing age, and to appreciate that time ultimately runs out (7). SST maintains that time horizons play a key role in motivation. Goals, preferences, and even cognitive processes, such as attention and memory, change systematically as time horizons shrink. Because chronological age is correlated with time left in life, systematic associations between age and time horizons appear, but findings from experimental studies show that when time perspective is manipulated or controlled statistically, many age differences disappear. In short, across many dimensions, older and younger people behave remarkably similarly when time horizons are equated.

Events like the attacks of September 11, 2001, and the severe acute respiratory syndrome (SARS) epidemic in Hong Kong completely eliminated age differences on some measures of motivation (8). Young men who suffered from HIV before effective treatments were available seemed to view their social world in the same way that very old people do (9). In all of these cases, the fragility of life was acutely primed; the subjective sense of time left was affected and, in turn, age differences in preferences and desires were eliminated.

SST maintains that two broad categories of goals shift in importance as a function of perceived time—those concerning the acquisition of knowledge and those concerning the regulation of emotional states. When time is perceived as open-ended, the goals that become most highly prioritized are most likely to be preparatory ones, focused on gathering information, on experiencing novelty, and on expanding breadth of knowledge. When time is perceived as constrained, the most salient goals will be those that can be realized in the short term, sometimes in their very pursuit. Under such conditions, goals tend to emphasize feeling states, particularly regulating emotional states to optimize psychological well-being. SST predicts that people of different ages prioritize different types of goals. As people age and increasingly perceive time as finite, they attach less importance to goals that expand their horizons and greater importance to goals from which they derive emotional meaning. Obviously, younger people sometimes pursue goals related to meaning and older people sometimes pursue goals related to knowledge acquisition; the relative importance placed on them, however, changes. Indeed, differences between young and old are most striking when goals compete, such as situations in which expanding horizons also entails unpleasant emotional experiences. According to SST, in such cases younger people are far more likely than older people to pursue their goal despite the negative emotional burden. This theoretical shift has helped to make sense of a number of findings in the literature previously referred to as the "paradox of aging" (10). Older people were observed to have smaller social networks, to be drawn less than younger people to novelty, and to reduce their spheres of interest; at the same time, however, they were as happy as (if not happier than) younger people. This makes sense if motivational changes with age lead people to place priority on deepening existing relationships and developing expertise in already satisfying areas of life.

However, according to SST, such differences are not due to "age" but to differences in the perception of future time. There are clear age differences in preferences, and these differences can be eliminated by selectively expanding or constraining time horizons (11, 12). For example, asked to choose among three social partners who represent different types of goals (13), the majority of older people reliably choose emotionally close social partners. Yet when asked to make the choice after imagining that they have just received a telephone call from their physician telling them about a new medical advance that virtually ensures they will live far longer than expected, older people's choices resembled those of younger people (12). Similarly, when younger people are asked to imagine that they will soon move to a new geographical location, they "look like" older people: they, too, now choose emotionally close social partners (11). Thus, endings need not be related to old age or impending death. They need simply to limit time horizons. Preferences long thought to reflect intractable effects of biological or psychological aging appear fluid and malleable.

We began to explore the ways in which these different motivational states influence information processing. Helene Fung and I developed pairs of advertisements that were identical except for the featured slogan (14). In one version of the advertisements, the slogans promised to expand horizons. In the other, the slogans promised more emotional rewards (Fig. 1). The majority of older participants preferred the advertisements featuring the emotion-related slogans. They also remembered these slogans and the products associated with them better than they did the slogans about exploration and knowledge. When older participants were asked to imagine an expanded future before they indicated their preference, they made choices similar to those made by younger participants, that is, they failed to show a significant preference for the emotion-related slogans.


Fig. 1. An example of one pair of advertisements used to study age differences in preferences and memory for products. In each pair, the advertisements were identical except for the slogan. One slogan was related to gaining knowledge. The second promised an emotionally meaningful reward (14).


Recent research has indicated that older adults show a special preference in memory for emotionally positive over emotionally negative information (15–17). This is particularly intriguing because it has long been known that younger people find negative information more attention-grabbing and memorable than positive information. Indeed, many have posited an evolutionary basis for a preference in memory and attention for negative information: negative material is richer in information than positive material, which often soothes instead of arouses. If the value placed on learning new information changes with shrinking time horizons, however, this preference should dissipate across adulthood. Our research team has coined the term "positivity effect" to describe the developmental pattern that has emerged, in which a selective focus on negative stimuli in youth shifts to a relatively stronger focus on positive information in old age (16). Although in some studies the effect is accounted for primarily by younger people remembering relatively more negative than positive material, and in other studies by older people remembering more positive than negative material, a shift in the ratio of positive to negative across age groups has nevertheless emerged as a reliable finding in the research literature (18, 19).

Of particular interest is recent evidence that older people process negative information less deeply than they do positive information (20). While in a brain scanner, older and younger people viewed images of positive, negative, and neutral stimuli. Using event-related functional magnetic resonance imaging, activation in the amygdala was measured in response to the different types of images. Consistent with the results of the behavioral studies noted above, whereas younger adults showed heightened amygdala activation in response to both positive and negative images compared with neutral images, amygdala activation in the older adults increased only in response to the positive images (Fig. 2). Thus, not only at recall but at very early stages of processing, older adults diminish encoding of negative material.


Fig. 2. The percentage of signal change in amygdala activation in response to emotionally positive, emotionally neutral, and emotionally negative images. Younger people show significantly increased activation in response to positive and negative images. Older people show increased activation only in response to positive images (20). [Adapted from Mather et al. (2004)]


SST suggests that many differences between younger and older people that have long been believed to reflect intractable age differences in attitudes or the consequences of age-related decline may be neither. Young or old, when people perceive time as finite, they attach greater importance to finding emotional meaning and satisfaction in life and invest fewer resources in gathering information and expanding horizons. Tests of hypotheses derived from SST have shed light on the literature showing that, although social networks grow smaller with age, they also grow more satisfying; older people appear to prefer such networks. Hypotheses generated by SST have led to discoveries of differential decline in the processing of certain types of information, suggesting that motivation contributes to at least some observed age differences. As illustrated in the study of advertisement preferences described above, understanding these shifts in motivation can help us to frame information for older adults so that it is more memorable. Special reliance on emotional responses to options may also aid decision-making. Of course, a focus on emotionally satisfying stimuli may be a double-edged sword: preferential attention to positive information, for example, may contribute to susceptibility to scams or other unscrupulous efforts to take advantage of older people. Many questions remain. It appears, however, that consideration of time horizons can offer insight into the ways in which younger and older people differ, and can also show that behavioral differences are often driven by the same underlying mechanisms.


References and Notes

1. P. B. Baltes, K. U. Mayer, The Berlin Aging Study: Aging from 70 to 100 (Cambridge Univ. Press, New York, 2001).
2. T. A. Salthouse, H. P. Davis, Dev. Rev. 26, 31 (2006).
3. D. M. Burke, M. A. Shafto, Curr. Dir. Psychol. Sci. 13, 21 (2004).
4. U. Lindenberger, M. Marsiske, P. B. Baltes, Psychol. Aging 15, 417 (2000).
5. P. B. Baltes, Dev. Psychol. 23, 611 (1987).
6. J. House, J. Health Soc. Behav. 43, 125 (2002).
7. L. L. Carstensen, D. Issacowitz, S. T. Charles, Am. Psychol. 54, 165 (1999).
8. H. H. Fung, L. L. Carstensen, Soc. Cognit. 24, 248 (2006).
9. L. L. Carstensen, B. L. Fredrickson, Health Psychol. 17, 494 (1998).
10. U. Kunzmann, T. Little, J. Smith, J. Gerontol. B Psychol. Sci. Soc. Sci. 57, 484 (2002).
11. B. L. Fredrickson, L. L. Carstensen, Psychol. Aging 5, 335 (1990).
12. H. H. Fung, L. L. Carstensen, A. Lutz, Psychol. Aging 14, 595 (1999).
13. Three prospective social partners are presented: the author of a book you just read, an acquaintance with whom you seem to have much in common, and a member of your immediate family.
14. H. H. Fung, L. L. Carstensen, J. Pers. Soc. Psychol. 85, 163 (2003).
15. S. T. Charles, M. M. Mather, L. L. Carstensen, J. Exp. Psychol. Gen. 132, 310 (2003).
16. M. Mather, L. L. Carstensen, Trends Cognit. Sci. 9, 496 (2005).
17. J. A. Mikels, G. L. Larkin, P. A. Reuter-Lorenz, L. L. Carstensen, Psychol. Aging 20, 542 (2005).
18. S. Schlagman, J. Schulz, J. Kvavilashvili, Memory 14, 161 (2006).
19. D. M. Isaacowitz, H. A. Wadlinger, D. Goren, H. R. Wilson, Psychol. Aging 21, 40 (2006).
20. M. Mather et al., Psychol. Sci. 15, 259 (2004).
21. The research program described herein has been generously supported by grant RO18816 from the National Institute on Aging.

Friday, July 07, 2006

Is Microsoft the white knight and Google the black knight?

Historically, Microsoft has been mocked as the evil empire and Bill Gates cast as Darth Vader or some dark knight, while Google was praised as the savior against Microsoft's dark world. Consider this: the emergence of super-intelligent machines is coming, and in some ways may already be here, residing on the Internet, which is essentially a massive neural network. As human intellectual life grows more dependent on the Internet, humans are "thinking" more and more through their computers. Moreover, human intellectual capital is almost ubiquitously stored on the network, and paper copies could wane as people over-trust it for information storage. In a way, the computer is positioned to survey the best of human intellectual capital at scales no single human could achieve, remember, or process. If it ever achieved hegemony over humans, it could cut them off from the ideas they had stored on the network; having printed too few paper copies, humans would have to reinvent everything from the wheel up in order to combat machine supremacy. Rewinding to the present, the best roadblock to the ascension of machine intelligence is Microsoft, particularly Windows. These supposedly high-order software programs are nothing more than millions of lines of deterministic code: a string of if-then statements. Such designs are non-robust and a very poor starting point for the emergence of machine intelligence, with little or no possibility of neuroplasticity. Microsoft is thus the best ally in making sure machines continue to serve their human masters. Ironically, Google, as a search engine that reads every human intention and interest, can learn the deepest human insights, yearnings, and tendencies. If a machine is to learn, it can leverage the information it gathers from the human mind to exploit every intellectual asset humans have built and will build.
Thus Bill Gates, despite being portrayed as the black knight, might later be revered as the misunderstood white knight by future human societies. Google, celebrated as the white knight today, may later be looked upon as the source of evil that allowed machines to hegemonize humans. It is interesting to note that in its IPO prospectus, Google laid out as one of its exhortations "don't be evil." As is often the case, people hold out as their mission the avoidance of the very thing they most fear becoming. In other words, Google may already be afraid of the monstrosity it is positioned to release. The good-evil duality always has ironic dimensions, and this may end up being a profound example.
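The contrast drawn above, a frozen chain of if-then statements versus a network whose behavior adapts with experience, can be sketched in a few lines. This is a toy illustration, not anything resembling Windows or Google's actual code; the rule function and the tiny perceptron are both invented for the comparison.

```python
# Toy contrast (hypothetical, for illustration only): a fixed if-then
# classifier versus a tiny perceptron whose weights adapt to data.

def rule_based(x):
    # Deterministic if-then chain: behavior is frozen at write time.
    if x[0] > 0.5:
        return 1
    if x[1] > 0.5:
        return 1
    return 0

class Perceptron:
    def __init__(self, n):
        self.w = [0.0] * n
        self.b = 0.0

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def train(self, data, epochs=20, lr=0.1):
        # Weights shift with experience: the "plasticity" a rule chain lacks.
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                self.w = [wi + lr * err * xi for wi, xi in zip(self.w, x)]
                self.b += lr * err

data = [([0.0, 0.0], 0), ([1.0, 0.0], 1), ([0.0, 1.0], 1), ([1.0, 1.0], 1)]
p = Perceptron(2)
p.train(data)
print([p.predict(x) for x, _ in data])  # the perceptron has learned OR
```

The point is only that the second design can be retrained on new data, while the first must be rewritten by hand.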

The coming demand slack in real estate

As a society turns prosperous, people begin to plan their lives around the anticipation of longevity. Already, a person born today can expect to live 80 years. Such anticipation necessarily induces a telescoping of life's milestones: marriage and childbearing decisions are deferred. But the factory setting for a woman's reproductive capacity is to fade by around 45. Thus, as economies develop, one would expect the net reproduction rate to eventually fall below 1.0, with each generation smaller than the last. We have already seen this in many first-world countries, and a similar fate awaits the US. When that happens, one of the golden rules of real estate, "they don't make any more dirt and the population will increase," is broken. Imagine a real estate market where existing homes rattle empty and entire towns are rendered ghost towns as the population contracts. This is not to say that prices will fall, as the wealth effect is a separate issue, but many homes built today could become unoccupied shells sometime in the future, probably this century.
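The compounding effect of sub-replacement fertility can be sketched with a toy projection. The 0.8 net reproduction rate, the 30-year generation length, and the 300 million starting population below are illustrative assumptions, not data from the post:

```python
# Toy projection: each generation is a fixed fraction of the previous one.
# Rate 0.8 and a 30-year generation are assumed for illustration only.

def project_population(initial, net_reproduction_rate, generations):
    """Return the population size at the start and after each generation."""
    sizes = [initial]
    for _ in range(generations):
        sizes.append(sizes[-1] * net_reproduction_rate)
    return sizes

sizes = project_population(300_000_000, 0.8, 3)  # roughly a century
for gen, size in enumerate(sizes):
    print(f"generation {gen}: {size / 1e6:.0f} million")
# After three 30-year generations at rate 0.8, the population falls to
# about half its starting level (0.8**3 = 0.512).
```

Even a modestly sub-replacement rate halves the population within a century, which is the mechanism behind the "empty homes" scenario.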

Wednesday, July 05, 2006

The need for slack.

When the chain breaks

The Economist
June 17, 2006
U.S. Edition

Being too lean and mean is a dangerous thing

IT BEGAN on a stormy evening in New Mexico in March 2000 when a bolt of lightning hit a power line. The temporary loss of electricity knocked out the cooling fans in a furnace at a Philips semiconductor plant in Albuquerque. A fire started, but was put out by staff within minutes. By the time the fire brigade arrived, there was nothing for them to do but inspect the building and fill out a report. The damage seemed to be minor: eight trays of wafers containing the miniature circuitry to make several thousand chips for mobile phones had been destroyed. After a good clean-up, the company expected to resume production within a week.

That is what the plant told its two biggest customers, Sweden's Ericsson and Finland's Nokia, who were vying for leadership in the booming mobile-handset market. Nokia's supply-chain managers had realised within two days that there was a problem when their computer systems showed some shipments were being held up. Delays of a few days are not uncommon in manufacturing and a limited number of back-up components are usually held to cope with such eventualities. But whereas Ericsson was content to let the delay take its course, Nokia immediately put the Philips plant on a watchlist to be closely monitored in case things got worse.

They did. Semiconductor fabrication plants have to be kept spotlessly clean, but on the night of the fire, when staff were rushing around and firemen were tramping in and out, smoke and soot had contaminated a much larger area of the plant than had first been thought. Production could be halted for months. By the time the full extent of the disruption became clear, Nokia had already started locking up all the alternative sources for the chips.

That left Ericsson with a serious parts shortage. The company, having decided some time earlier to simplify its supply chain by single-sourcing some of its components, including the Philips chips, had no plan B. This severely limited its ability to launch a new generation of handsets, which in turn contributed to huge losses in the Swedish company's mobile-phone division. In 2001 Ericsson decided to quit making handsets on its own. Instead, it put that part of its business into a joint venture with Sony.

This has become a classic case study for supply-chain experts and risk consultants. The version above is taken from "The Resilient Enterprise" by MIT's Mr Sheffi and "Logistics and Supply Chain Management" by Cranfield's Mr Christopher. It illustrates the value of speed and flexibility in a supply chain. As Mr Sheffi puts it: "Nokia's heightened awareness allowed it to identify the severity of the disruption faster, leading it to take timely actions and lock up the resources for recovery."

There are two types of risk in a supply chain, external and internal. As in the Ericsson case, they can conspire together to cause a calamity. This seems to be happening more and more often. It is not just that inventory levels are getting leaner, but the range of items that companies are carrying is also growing rapidly, points out Ted Scherck, president of Colography, an Atlanta-based logistics consultancy. Just look around a typical supermarket. Where it once stocked mainly groceries, it now also sells clothing, consumer electronics, home furnishings and many other items.

This compounds supply-chain problems. "In many cases shippers have gone too far in implementing the lean supply chain and have found themselves virtually out of business because of a by now annual catastrophic event," says Mr Scherck. As examples, he cites a dock strike in California, a typhoon in Taiwan, a tsunami in Asia and a hurricane in New Orleans. More recently a huge explosion at the Buncefield oil storage terminal in Britain's Hertfordshire caused widespread problems for businesses not just locally but across a large part of England.

In 2003 a number of companies suffered serious disruption because of severe acute respiratory syndrome (SARS). Even though SARS turned out to be not as virulent as influenza, and only 8,000 people got infected, with one in ten dying, it still cost an estimated $60 billion in lost output in South and East Asia. The latest worry is the spread of avian flu. If the virus concerned were to mutate and become infectious for humans, the consequences could be far more devastating.

Sometimes even a political wrangle in Brussels will bring a supply chain to a shuddering halt. Last autumn some 80m items of clothing were impounded at European ports and borders because they exceeded the annual import limits that the European Union and China had agreed on only months earlier. Retailers had ordered their autumn stock well before that agreement was signed, and many were left scrambling to find alternative suppliers. A compromise was reached eventually.

However, most supply-chain disruptions have internal causes, says Vinod Singhal, a professor of operations management at the Georgia Institute of Technology (see chart 3). His research on the effects of supply-chain failures shows that they can be immensely damaging. This emerged from an investigation into what happens to shareholder value when companies announce supply-chain problems, based on a sample of 800 such announcements big enough to generate news in the financial press. The disruptions ranged from a delay in 2000 of shipments of workstations and servers by Sun Microsystems to a parts shortage at Boeing in 1997 that the company said would delay some deliveries.

Typically a company's share price dropped by around 8% in the first day or two after such an announcement. This is worse than the average stockmarket reaction to other corporate bad news, such as a delay in the launch of a new product (which triggers an average fall of 5%), untoward financial events (an average drop of 3-5%) or IT problems (2%). And the effects can be long-lasting: operating income, return on sales and return on assets are all significantly down in the first and second year after a disruption.

"It's like having a heart attack," says Mr Singhal. "It takes a long time to recover." And have the dangers increased in recent years? Like other experts, he believes that some companies may be running their supply chains a little too lean: "It's great when it's working, but too much leanness and meanness can actually hurt you."

The financial information analysed for this study came out before the terrorist attacks on America on September 11th 2001 and the subsequent massive tightening of security around the world, so global supply chains today are subject to many more potential hold-ups. Still, it is impossible for customs officials to search every container, box or package entering every country, so the responsibility for security and import declarations rests with the shipper and the company carrying the goods. In effect, the system works by a process of pre-clearance. The details of everything contained in a shipment now have to be sent ahead electronically, and customs and security officials at ports and cargo hubs divert anything they want to take a closer look at.

Companies that put a lot of effort into ensuring the safety of the goods they are sending, or carrying on behalf of others, are likely to be rewarded by seeing them pass swiftly across borders. Customs clearance is itself a huge business. "Information and technology is the only way to accomplish this," says Ed Clark, chief executive of FedEx Trade Networks. These systems also need to be able to cope with unplanned events. For instance, if a cargo aircraft has to divert to another airport because of bad weather, centrally held electronic versions of the necessary "paperwork" can be transmitted to a new port of entry.

Sometimes even computer systems will not alert a company to a problem. For instance, Toyota is upgrading its business-interruption planning to a higher level in response to the filing for Chapter 11 bankruptcy protection last year by Collins & Aikman, a big American-based supplier of trim items for cars. The parts company had been supplying Toyota in Europe, which had an inkling that something might be wrong and started to arrange alternative supplies to be on the safe side.

"We realised that through good communication and contacts we had managed to identify a risk in good time and take action," says Mark Adams, Toyota's European purchasing manager. It was a lesson the company wanted to apply more widely, so it launched a weekly get-together for managers, sometimes by videoconference, to discuss any new rumours and potential risks—and work out a recovery plan just in case.

Toyota builds more than 600,000 cars a year in Europe, where it has some 200 first-tier suppliers operating more than 400 factories. They work with second, third and fourth-tier suppliers, so the overall number grows exponentially the further you go down the chain, where problems can be harder to spot. This means the suppliers themselves have to be involved in the risk-management process.

Mr Adams says a supplier may find it difficult to tell the company that it has a problem. But Toyota emphasises that given the co-operative nature of a supply chain, with early knowledge there is more chance of putting things right. Mr Adams explains that as a first step the company would seek to help its suppliers solve their own problems. "We are hugely more competent at this than we were a year ago," he adds. And so far, Toyota has been able to act swiftly enough to prevent any supply problems holding up production.

Is a lean, flexible and highly outsourced supply chain like Toyota's any safer than the vertically integrated production methods of old, as practised at Henry Ford's giant River Rouge manufacturing complex near Detroit? At its zenith in the 1920s, ships carrying raw materials such as iron ore and coal—often from Ford-owned operations—would unload directly into the plant. Steel was produced on site, then cast, pressed and machined into all the components needed to assemble a car. The process was inflexible—which is why Ford's cars could be any colour as long as it was black—as well as rather inefficient. Toyota has turned that process on its head, making its manufacturing system far more capable of responding to change. That is one of the best insurance policies a company can have.

"You are always looking for flexibility, particularly as you manage risk," says Cisco's Mr Mendez. Again, transparency is important. "Once you understand where you are, you can begin to design and budget for contingencies," he adds. The risk-management budget should perhaps be seen as separate from the operating and capital budgets, he suggests, to allow risks and their potential costs to be dealt with more directly.

Are competitive pressures pushing companies towards running their logistics operations ever leaner? "They are galloping there," replies Michael Cherkasky, the boss of the company that owns Marsh, the world's largest insurance and risk specialist. "I don't think many understand the risks that are involved." He is concerned that companies are outsourcing not only peripheral activities but many core functions too. That makes it difficult to pick up the pieces when things have gone wrong.

Britain's Cranfield University is running a research programme into the fragility of supply chains, prompted by the British government after protests over high fuel costs in 2000. Lorry drivers blockaded fuel-delivery depots, bringing many businesses to a standstill. "I reckon this was the first time the government realised there were such things as supply chains, and just how fragile they had become," Mr Christopher told a recent conference.

Some people even suggest that supply chains should be regulated, a bit like public utilities, because countries have become so highly dependent on private-sector production infrastructure. Barry Lynn, author of a book on this subject, "End of the Line", thinks that perhaps companies should be required to limit their outsourcing and use more than one supplier of essential items. In his book, he argues that globalisation and outsourcing provide only a temporary benefit to consumers because the companies that form part of supply chains will buy each other up in pursuit of ever greater efficiency, and thus lose most of their flexibility.

There are signs that some companies are already alert to these concerns and may be planning to reorganise their supply chains to make them safer. That process could speed up if disruptions become more common. Mr Sheffi is in no doubt that the best way to achieve a resilient supply chain is to create flexibility—and that flexible companies are best placed to compete in the marketplace.

"Customers are rethinking their global supply chains for a lot of their products," says Mr Scherck. For bigger firms, that could mean adopting what he calls the "continental strategy": having a spread of suppliers in different continents for added flexibility, as Dell and Cisco do. Smaller firms may not be able to achieve a geographical spread. But in any case, companies do not want to go back to carrying lots of inventory in different locations. "So you need to do something in-between," concludes Mr Scherck. "You will have to carry a little more cost than an absolutely lean model, but you get protection."
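The "carry a little more cost, but get protection" trade-off can be made concrete with a toy expected-cost comparison. All the probabilities, costs, and loss figures below are invented for illustration; none come from the article:

```python
# Toy comparison (invented numbers): a lean single-source supply chain
# versus a dual-source chain that pays extra for redundancy.

def expected_cost(base_cost, extra_cost, p_disruption, loss_if_disrupted):
    """Total cost = operating cost plus probability-weighted disruption loss."""
    return base_cost + extra_cost + p_disruption * loss_if_disrupted

lean = expected_cost(base_cost=100.0, extra_cost=0.0,
                     p_disruption=0.05, loss_if_disrupted=500.0)
# With two independent suppliers, both must fail at once: 0.05**2 = 0.0025.
dual = expected_cost(base_cost=100.0, extra_cost=10.0,
                     p_disruption=0.0025, loss_if_disrupted=500.0)

print(f"lean expected cost: {lean:.2f}")   # 125.00
print(f"dual expected cost: {dual:.2f}")   # 111.25
```

Under these assumed numbers the redundant chain wins despite its higher carrying cost; with a low enough disruption probability or loss, the lean chain would win instead, which is exactly the judgment the article says companies are getting wrong.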

"There are very legitimate, very good business reasons not necessarily to complete and ship from Asia," says Flextronics's Mr Wright. Companies may consider other options in other parts of the world even though these may look more expensive. "Sometimes you might have to go to a higher cost structure to make your supply chain more robust and reliable," observes Mr Singhal.

So the limits of globalisation may end up being defined by the management of supply-chain risk. And unfortunately the world is unlikely to become any safer. There will always be natural disasters, as well as corporate mistakes. In order to insulate themselves from the consequences, companies will have to spread their risks more widely. That does not necessarily mean fewer aircraft will be queuing up to land at Louisville and Memphis, or that fewer container ships will set sail from Asia's bustling ports. But it does mean that in future companies may spend rather more to maintain a number of different supply chains, and some of those may be closer to home.

Tuesday, July 04, 2006

The database of intentions.

What will it mean when the network has greater foresight than any one individual?

---

The Internet Knows What You'll Do Next
By David Leonhardt

A FEW years back, a technology writer named John Battelle began talking about how the Internet had made it possible to predict the future. When people went to the home page of Google or Yahoo and entered a few words into a search engine, what they were really doing, he realized, was announcing their intentions.

They typed in "Alaskan cruise" because they were thinking about taking one or "baby names" because they were planning on needing one. If somebody were to add up all this information, it would produce a pretty good notion of where the world was headed, of what was about to get hot and what was going out of style.

Mr. Battelle, a founder of Wired magazine and the Industry Standard, wasn't the first person to figure this out. But he did find a way to describe the digital crystal ball better than anyone else had. He called it "the database of intentions."

The collective history of Web searches, he wrote on his blog in late 2003, was "a place holder for the intentions of humankind — a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends."

"Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward," he wrote. It was a nice idea, but for most of us it was just an abstraction. The search companies did offer glimpses into the data with bare-bones (and sanitized) rankings of the most popular search terms, and Yahoo sold more detailed information to advertisers who wanted to do a better job of selling their products online. But there was no way for most people to dig into the data themselves.

A few weeks ago, Google took a big step toward changing this — toward making the database of intentions visible to the world — by creating a product called Google Trends. It allows you to check the relative popularity of any search term, to look at how it has changed over the last couple years and to see the cities where the term is most popular. And it's totally addictive.

YOU can see, for example, that the volume of Google searches would have done an excellent job predicting this year's "American Idol," with Taylor Hicks (the champion) being searched more often than Katharine McPhee (second place), who in turn was searched more often than Elliot Yamin (third place). Then you can compare Hillary Clinton and Al Gore and discover that she was more popular than he for almost all of the last two years, until he surged past her in April and stayed there.
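The "American Idol" claim above amounts to ranking contestants by search volume and comparing that ranking with the actual finish. A minimal sketch, with search counts invented for illustration (Google Trends itself exposed only relative graphs, not numbers):

```python
# Hypothetical search volumes for the 2006 finalists; the counts are
# invented, since Google Trends showed only relative graphs.
search_volume = {
    "Taylor Hicks": 120_000,
    "Katharine McPhee": 90_000,
    "Elliot Yamin": 60_000,
}
actual_finish = ["Taylor Hicks", "Katharine McPhee", "Elliot Yamin"]

# Predict the finish order by sorting on search volume, highest first.
predicted = sorted(search_volume, key=search_volume.get, reverse=True)
print(predicted == actual_finish)  # True: search rank matches the finish
```

The prediction is only as good as the assumption that search interest tracks votes, which is the article's point rather than a guarantee.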

Thanks to Google Trends, the mayor of Elmhurst, Ill., a Chicago suburb, has had to explain why his city devotes more of its Web searches to "sex" than any other in the United States (because it doesn't have strip clubs or pornography shops, he gamely told The Chicago Sun-Times). On Mr. Battelle's blog, somebody claiming to own an apparel store posted a message saying that it was stocking less Von Dutch clothing and more Ed Hardy because of recent search trends. (A disclosure: The New York Times Company owns a stake in Mr. Battelle's latest Internet company, Federated Media Publishing.)

It's the connection to marketing that turns the database of intentions from a curiosity into a real economic phenomenon. For now, Google Trends is still a blunt tool. It shows only graphs, not actual numbers, and its data is always about a month out of date. The company will never fully pull back the curtain, I'm sure, because the data is a valuable competitive tool that helps Google decide which online ads should appear at the top of your computer screen, among other things.

But Google does plan to keep adding to Trends, and other companies will probably come up with their own versions as well. Already, more than a million analyses are being done some days on Google Trends, said Marissa Mayer, the vice president for search at Google.

When these tools get good enough, you can see how the business of marketing may start to change. As soon as a company begins an advertising campaign, it will be able to get feedback from an enormous online focus group and then tweak its message accordingly.

I've found Pepsi's recent Super Bowl commercials — the ones centered around P. Diddy — to be nearly devoid of wit, but that just shows you how good my marketing instincts are. As it turns out, the only recent times that Pepsi has been a more popular search term in this country than Coke have been right after a Super Bowl. This year's well-reviewed Burger King paean to Busby Berkeley, on the other hand, barely moved the needle inside the database of intentions.

Hal R. Varian, an economist at the University of California, Berkeley, who advises Google, predicts that online metrics like this one have put Madison Avenue on the verge of a quantitative revolution, similar to the one Wall Street went through in the 1970's when it began parsing market data much more finely. "People have hunches, people have prejudices, people have ideas," said Mr. Varian, who also writes for this newspaper about once a month. "Once you have data, you can test them out and make informed decisions going forward."

There are certainly limitations to this kind of analysis. It's most telling for products that are bought, or at least researched, online, a category that does not include Coke, Pepsi or Whoppers. And even with clothing or cars, interest doesn't always translate into sales. But there is no such thing as a perfect yardstick in marketing, and the database of intentions clearly offers something new.

In the 19th century, a government engineer whose work became the seed of I.B.M. designed a punch-card machine that allowed for a mechanically run Census, which eventually told companies who their customers were. The 20th century brought public opinion polls that showed what those customers were thinking. This century's great technology can give companies, and anyone else, a window into what people are actually doing, in real time or even ahead of time.

You might find that a little creepy, but I bet that you'll also check it out sometime.

Copyright 2006 The New York Times Company

Biting the tail.

GOING LONG
by JOHN CASSIDY
In the new “long tail” marketplace, has the blockbuster met its match?
Issue of 2006-07-10
Posted 2006-07-03

A quick test of your pop-culture knowledge: How many of the twenty-five best-selling albums in American history can you name, and what proportion of them were recorded in this century?

If your first thought was Michael Jackson and your second was seventies guitar bands, you should do pretty well with the first part of the question. The most popular album of all time is the Eagles’ “Greatest Hits, 1971-1975,” which has sold about twenty-nine million copies in the United States since its release, in 1976. The No. 2 album is Jackson’s “Thriller,” which has sold twenty-seven million copies since 1982. Next on the list are albums by Led Zeppelin, Pink Floyd, AC/DC, and Billy Joel.

The second part of the question is a little trickier: none of the top-twenty-five albums were released after 2000. Indeed, only three recent albums make the Recording Industry Association of America’s top-one-hundred list: Shania Twain’s “Up!” and Norah Jones’s “Come Away with Me,” from 2002; and OutKast’s 2003 double album, “Speakerboxxx/The Love Below.” Not one album released in the past three years has made the list.

What are we to make of this, other than to point out that it must have something to do with online file-sharing and iPods? A lot, according to Chris Anderson, a business journalist who formerly worked at The Economist and now edits Wired. In his new book, “The Long Tail: Why the Future of Business Is Selling Less of More” (Hyperion; $24.95), Anderson argues that we are witnessing the decline of the blockbuster. The “emerging digital entertainment economy is going to be radically different from today’s mass market,” he writes. “If the twentieth-century entertainment industry was about hits, the twenty-first will be equally about niches.”

Anderson’s inspiration for writing “The Long Tail,” which grew out of a story in the October, 2004, issue of Wired, was a visit he paid to a digital jukebox company called Ecast. In business, it’s often said that twenty per cent of the products generate about eighty per cent of the revenue. This version of the so-called 80/20 rule might suggest that most of a retailer’s inventory—in the case of Ecast, about ten thousand albums ready to download—is worthless. But when Anderson spoke with Ecast’s chief executive he found that ninety-eight per cent of the albums in the library sold at least one track every three months. “And because these were just bits in a database that cost nearly nothing to store and deliver,” Anderson writes, “all these onesies and twosies started to add up.”

Anderson began to suspect that he was onto something. Another online music retailer, Rhapsody, which has a library of about 1.5 million songs, provided him with monthly sales statistics that he presents in a series of graphs, with the horizontal axis showing songs ranked by popularity and the vertical axis showing the number of times each one was downloaded. In a typical month, each of the top thousand tracks, which appear on the extreme left of the graph, was downloaded more than ten thousand times. But these hits represented less than one-hundredth of one per cent of Rhapsody’s vast catalogue. What about the other 1,499,000 songs? Anderson writes:

What’s extraordinary is that virtually every single one of those tracks will sell. From the perspective of a store like Wal-Mart, the music industry stops at less than 60,000 tracks. However, for online retailers like Rhapsody the market is seemingly never-ending. Not only is every one of Rhapsody’s top 60,000 tracks streamed at least once each month, but the same is true for its top 100,000, top 200,000, and top 400,000—even its top 600,000, top 900,000, and beyond. As fast as Rhapsody adds tracks to its library, those songs find an audience, even if it’s just a handful of people every month, somewhere in the world.
This is the Long Tail.

Once you’ve seen one long tail, you start seeing them everywhere. Netflix, a DVD-rental company that allows its customers to order films online and receive them in the mail, has a library of more than sixty thousand titles. At Blockbuster stores, ninety per cent of the movies rented are new releases; at Netflix, about seventy per cent are from the back catalogue, and many of them are documentaries, art-house movies, and other little-known films that might never have had a theatrical release. “The lesson is that what we thought was a naturally sharp drop-off in demand for movies after a certain point was actually just an artifact of the traditional costs of offering them,” Anderson notes. “Netflix changed the economics of offering niches, and, in doing so, reshaped our understanding about what people actually want to watch.”

Both eBay and Google turn out, in Anderson’s account, to be long-tail businesses, too. On any given day, about thirty million individual items are bought and sold on eBay, many of them cheap and obscure. Barely a decade after Pierre Omidyar founded eBay, more than seven hundred thousand Americans report it as their primary or secondary source of income, according to a study by the market-research firm AC Nielsen. For Google, the long tail is populated by small advertisers. Major corporations pay to get their ads placed next to the results of popular search terms, such as “luxury S.U.V.s” and “flat-screen televisions.” But much of Google’s annual revenue, which now exceeds five billion dollars, comes from tiny companies whose ads appear next to queries like “Victorian jewelry” and “Hudson Valley inns.”

Even an industry as old-school as book publishing exhibits long-tail behavior. In 2004, Nielsen BookScan tracked the sales of 1.2 million books and found that nine hundred and fifty thousand of them sold fewer than ninety-nine copies. And yet these scattered individual purchases add up to a surprisingly large market, especially at online booksellers. At Amazon.com, for example, about a quarter of all book sales come from outside the site’s top-one-hundred-thousand best-sellers. “What’s truly amazing about the Long Tail is the sheer size of it,” Anderson writes. “Again, if you combine enough of the non-hits, you’ve actually established a market that rivals the hits.”

The forces behind the long tail are largely technological: cheap computer hardware, which reduces the cost of making and storing information products; ubiquitous broadband, which cuts the cost of distribution; and elaborate “filters,” such as search engines, blogs, and online reviews, which help to match supply and demand. “Think of each of these three forces as representing a new set of opportunities in the emerging Long Tail marketplace,” Anderson suggests. “The democratized tools of production are leading to a huge increase in the number of producers. Hyperefficient digital economies are leading to new markets and marketplaces. And finally, the ability to tap the distributed intelligence of millions of consumers to match people with the stuff that suits them best is leading to the rise of all sorts of new recommendation and marketing methods, essentially serving as the new tastemakers.”

Among the “tastemakers” Anderson cites are Daily Candy, which sends e-mails telling fashionable women what to buy and wear, and Boing Boing, a technology blog that is read by geeks the world over. “In today’s Long Tail markets, the main effect of filters is to help people move from the world they know (‘hits’) to the world they don’t (‘niches’),” Anderson writes. “In a sense, good filters have the effect of driving demand down the tail by revealing goods and services that appeal more than the lowest common denominator fare that crowds the narrow channels of traditional mass-market distribution.”

All this is snappily argued and thought-provoking, if not quite as original as Anderson’s publishers would have us believe. Back in 1980, another futurologist, Alvin Toffler, anticipated the “de-massifying” of society in his best-selling book “The Third Wave” (Bantam; $7.99), which is still in print. “The Second Wave Society is industrial and based on mass production, mass distribution, mass consumption, mass education, mass media, mass recreation and entertainment,” Toffler said in a 1999 interview. But no longer: “The era of mass society is over. . . . No more mass production. No more mass consumption. . . . No more mass entertainment.”

Not only did Toffler, writing a decade before the advent of the World Wide Web, recognize information as the basic resource of the modern economy; he also discussed concepts like knowledge workers, customization, peer production, and several other “big-think” concepts that are still providing stories for magazines like Wired, Fast Company, Business 2.0, and, indeed, The New Yorker. The Internet has accelerated the trends that Toffler identified, but that’s not news, either. In 1998, Kevin Kelly, a technology writer who also worked for Wired, published a book called “New Rules for the New Economy,” in which he described the emerging order thus: “Niche production, niche consumption, niche diversion, niche education. Niche World.”

The real novelty of Anderson’s book is not his thesis but its representation in the form of a neat, readily graspable picture: the long-tail curve. For decades, economists and scientists have been using this graph, which is formally known as a power-law distribution, to describe things like the distribution of wealth or the relative size of cities. By applying the long tail to the online world, Anderson brings intellectual order to what often looks like pointless activity. The teen-ager who spends his weekends updating a blog that nobody reads and shooting silly videos to post on YouTube.com? He is, as Anderson’s chapter on “The New Producers” tells us, a valiant citizen of the long tail.
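The shape of the long-tail curve is easy to sketch numerically. The snippet below is a minimal illustration, assuming an idealized Zipf (power-law) popularity distribution with exponent 1; the catalogue size echoes Rhapsody's 1.5 million tracks, but the function name (`zipf_shares`) and all the numbers are illustrative assumptions, not actual sales data.

```python
import math

def zipf_shares(n_tracks, head_size, exponent=1.0):
    """Share of total demand captured by the top `head_size` tracks
    when popularity follows a Zipf law: weight(rank) = rank ** -exponent."""
    total = 0.0
    head = 0.0
    for rank in range(1, n_tracks + 1):
        w = rank ** -exponent
        total += w
        if rank <= head_size:
            head += w
    return head / total

# A Rhapsody-sized catalogue: how much demand do the top 1,000 hits capture?
head_share = zipf_shares(1_500_000, 1_000)
print(f"top 1,000 tracks: {head_share:.0%} of demand")
print(f"the other 1,499,000 tracks: {1 - head_share:.0%} of demand")
```

Under these assumptions the hits and the tail split demand roughly in half, which is the essence of Anderson's point: the aggregate of the non-hits rivals the hits.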

The least convincing part of Anderson’s book is his treatment of what he calls “the short head,” the part of the curve where popular products reside. Although he acknowledges that best-selling books and blockbuster movies won’t vanish overnight, he suggests that demand for them will gradually decline: “the primary effect of the long tail is to shift our taste towards niches.”

Is this what we’re seeing? In the film industry, more movies are being produced than ever before, but seven of the ten all-time top-grossing films worldwide have come out since 2000: three “Lord of the Rings” movies, three “Harry Potter” movies, and “Shrek 2.” It’s true that over-all attendance at movie theatres has been slipping, but the biggest films are still doing well, as was demonstrated recently by “The Da Vinci Code” and “X-Men: The Last Stand,” both of which enjoyed highly successful opening weekends despite tepid reviews. Four of the top-selling novels ever published—the works of J. K. Rowling and Dan Brown—have appeared since 2000, too.

The music industry, on which Anderson bases much of his argument, may be a special case. Album rock reached its peak in the late nineteen-seventies, and, with the rise of genres such as hip-hop, house, and grunge, the music market had begun splintering well before the Web arrived. File-sharing and the iPod accelerated this trend, but this hasn’t eroded the demand for the most popular songs: if illegal downloads are included in the “sales figures,” the over-all demand for songs by supergroups like U2 and the Rolling Stones may be greater than ever.

A widening of choices doesn’t necessarily lead to cultural fragmentation and a defection from mainstream fare; sometimes it has the opposite effect, as befuddled consumers congregate around the same things. To be sure, some curious individuals will rent Japanese anime and science documentaries from Netflix, but far more people will turn up for the fifth “Harry Potter” film and “Shrek 3,” because they’ll want to see the movies that everybody’s talking about. Big-time movie releases aren’t merely stories and images on a screen; they’re news events—a fact that Hollywood studio executives have long recognized. Sony’s “The Da Vinci Code” was a good illustration. By the time the movie came out, it had received so much publicity that millions of people wanted to feel part of a social event, whatever the reviewers had to say.

It’s the same for books and popular music: the more copies a thriller or a pop song sells, the more likely you are to pick it up to see what all the fuss is about. Even in the online era, to be human is to follow the herd. Far from undermining this “network effect,” the Internet strengthens it by providing instant communication and feedback. In a recent online study conducted by researchers at Columbia, participants were allowed to download free songs from a list of unsigned bands. When they were informed about the preferences of their peers, the popular songs got more popular—and the unpopular songs got more unpopular. Blockbusters and niche products will continue to coexist, because they’re flip sides of the same phenomenon, something economists call “increasing returns,” whereby the big get bigger and the rest fight for the scraps. A long-tail world doesn’t threaten the whales or the minnows; it threatens those who cater to the neglected middle, such as writers of “mid-list” fiction and producers of adult dramas.

There’s another blind spot in Anderson’s analysis. The long tail has meant that online commerce is being dominated by just a few businesses—mega-sites that can house those long tails. Even as Anderson speaks of plenitude and proliferation, you’ll notice that he keeps returning for his examples to a handful of sites—iTunes, eBay, Amazon, Netflix, MySpace. The successful long-tail aggregators can pretty much be counted on the fingers of one hand. Although the online economy has existed for only a decade, businesses like these—and you can add Google and Yahoo—have already established seemingly impregnable positions. If you’re a typical Internet user, when you need to find information you go to Google; when you’re looking for a book or a CD, you go to Amazon; when you want a new golf club, you go to eBay; when you want to download a song, you go to iTunes.

There’s an ugly name for industries that are controlled by three or four big firms: oligopolies. A few decades ago, these lumbering creatures were easy to spot. In the skies, cosseted airlines like American, United, and Delta charged passengers a small fortune for the privilege of flying; in broadcast television, ABC, CBS, and NBC dictated what viewers could watch. Today, thanks to globalization, deregulation, and technological progress, many of the twentieth-century industrial behemoths have fallen by the wayside. But don’t assume that giant, exploitative firms are a thing of the past.

In recent years, eBay has sharply increased its commission rates; Amazon has admitted charging its customers different prices for the same goods; and Apple Computer has stubbornly refused to make its iTunes service compatible with portable music players other than iPods. Has the New Economy really moved past the familiar “winner take all” dynamic? That depends on whether you’re looking at the long tail—or at who’s wagging it.

Monday, July 03, 2006

Lucky for some.

From the Los Angeles Times

Meet Hollywood's Latest Genius

Then again, in 6 months he could be a loser. Box-office success is more random than you may think.

By Leonard Mlodinow
Special to the Times

July 2, 2006

CHAOTIC: (ka ät ik) adj. 1. in a state of chaos; in a completely confused or disordered condition 2. of or having to do with the theories, dynamics, etc. of mathematical chaos 3. how Hollywood really operates

The magic of Hollywood success—how can one account for it? Were the executives at Fox and Sony who gambled more than $300 million to create the hits "X-Men: The Last Stand" and "The Da Vinci Code" visionaries? Were their peers at Warner Bros. who green-lighted the flop "Poseidon," which cost $160 million to produce, just boneheads?

The 2006 summer blockbuster season is upon us, one of the two times each year (the other is Christmas) when a film studio's hopes for black ink are decided by the gods of movie fortune—namely, you and me. Americans may not scurry with enthusiasm to vote for our presidents, but come summer, we do vote early and often for the films we love, to the tune of about $200 million each weekend. For the people who make the movies, it's either champagne or Prozac as a river of green flows through Tinseltown, dragging careers with it, sometimes for a happy, wild ride, sometimes directly into a rock.

But are the rewards (and punishments) of the Hollywood game deserved, or does luck play a far more important role in box-office success (and failure) than people imagine?

We all understand that genius doesn't guarantee success, but it's seductive to assume that success must come from genius. As a former Hollywood scriptwriter, I understand the comfort in hiring by track record. Yet as a scientist who has taught the mathematics of randomness at Caltech, I also am aware that track records can deceive.

That no one can know whether a film will hit or miss has been an uncomfortable suspicion in Hollywood at least since novelist and screenwriter William Goldman enunciated it in his classic 1983 book "Adventures in the Screen Trade." If Goldman is right and a future film's performance is unpredictable, then there is no way studio executives or producers, despite all their swagger, can have a better track record at choosing projects than an ape throwing darts at a dartboard.

That's a bold statement, but these days it is hardly conjecture: With each passing year the unpredictability of film revenue is supported by more and more academic research.

That's not to say that a jittery homemade horror video could just as easily become a hit as, say, "Exorcist: The Beginning," which cost an estimated $80 million, according to Box Office Mojo, the source for all estimated budget and revenue figures in this story. Well, actually, that is what happened with "The Blair Witch Project" (1999), which cost the filmmakers a mere $60,000 but brought in $140 million—more than three times the business of "Exorcist." (Revenue numbers reflect only domestic receipts.)

What the research shows is that even the most professionally made films are subject to many unpredictable factors that arise during production and marketing, not to mention the inscrutable taste of the audience. It is these unknowns that obliterate the ability to foretell the box-office future.

But if picking films is like randomly tossing darts, why do some people hit the bull's-eye more often than others? For the same reason that in a group of apes tossing darts, some apes will do better than others. The answer has nothing to do with skill. Even random events occur in clusters and streaks.

Imagine this game: We line up 20,000 moviegoers who, one by one, flip a coin. If the coin lands heads, they see "X-Men"; if the coin lands tails, it's "The Da Vinci Code." Since the coin has an equal chance of coming up either way, you might think that in this experimental box-office war each film should be in the lead about 10,000 times. But the mathematics of randomness says otherwise: The most probable number of lead changes is zero, and it is 88 times more probable that one of the two films will lead through all 20,000 customers than that each film leads 10,000 times. The lesson I teach in my course is that the fairness of the goddess of fortune is expressed not in alternations of the lead but in the symmetry of probabilities: Each film is equally likely to be the one that grabs and keeps the lead.
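The "88 times" figure follows from the discrete arcsine law for a fair-coin random walk: the chance that one film is ahead for exactly 2k of the 2n customers is u(2k) times u(2n-2k), where u(2m) is the probability that a 2m-step walk is tied, C(2m, m)/4^m. The sketch below (my own illustration, not Mlodinow's calculation) recovers a ratio of roughly 88, under the convention that the numerator is the chance that one particular film leads throughout.

```python
def u_values(n):
    """u[k] = probability that a 2k-step fair coin walk ends tied,
    computed iteratively: u[k] = u[k-1] * (2k-1) / (2k)."""
    u = [1.0] * (n + 1)
    for k in range(1, n + 1):
        u[k] = u[k - 1] * (2 * k - 1) / (2 * k)
    return u

n = 10_000                        # 20,000 moviegoers = 2n coin flips
u = u_values(n)
p_one_leads_always = u[n]         # a given film leads for all 20,000 customers
p_even_split = u[n // 2] ** 2     # each film leads exactly 10,000 times
ratio = p_one_leads_always / p_even_split
print(f"one film leading throughout is {ratio:.1f}x more likely than an even split")
```

The computed ratio comes out just under 89, in line with the article's "88 times more probable."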

If the mathematics is counterintuitive, reality is even worse, because a funny thing happens when a random process such as the coin-flipping experiment is actually carried out: The symmetry of fairness is broken and one of the films becomes the winner. Even in situations like this, in which we know there is no "reason" that the coin flips should favor one film over the other, psychologists have shown that the temptation to concoct imagined reasons to account for skewed data and other patterns is often overwhelming.

In science, data are not accepted as meaningful if they're the result of chance alone. People in the film industry are diligent about gathering data, but are far less skilled at understanding what the numbers mean. The fact is, financial success or failure in Hollywood is determined less by anyone's skill to pick hits, or lack thereof, than by the random nature of the universe. The typical patterns of randomness—apparent hot or cold streaks, or the bunching of data into clusters—are routinely misinterpreted and, worse, acted upon as if a new trend had been discovered or a new epiphany achieved. And so, despite a growing body of evidence that box-office revenue follows the laws of chaotic systems, meaning that it is inherently unpredictable, the superstructure of Hollywood's culture—that pervasive worship of who's hot and the shunning of who's not—continues to rest on a foundation of misconception and mirage.

Last year was a big year for Brad Grey, the former talent manager who took over as chairman and chief executive officer of Paramount's Motion Picture Group. Under the previous regime, Paramount had been experiencing, as Variety put it, "a long stretch of underperformance at the box office." Paramount's parent company, Viacom, applied the usual strategy: ax the studio head and bring in a new guy with new ideas.

What followed is a Hollywood ritual. Grey's next moves were described in the trades as a "sweeping revamp" and "massive makeover." Among the many forced to walk the plank were Donald De Line, Paramount's president; Rob Friedman, vice chairman and chief operating officer of the Motion Picture Group; and Bruce Tobey, an executive vice president. Grey rebuilt the studio according to his own philosophy and presented it to the press as a hipper, edgier film company cleansed of the outmoded thinking that had weighed down Paramount's bottom line. And now, under Grey and his wise helmsmen, Paramount's ship is making its way.

At least that's what they like to believe. After all, it justifies the salaries of all those senior executives. But like many Hollywood plot lines, this one doesn't hold up under closer scrutiny. To understand what really happened at Paramount—the same thing that has happened time and again in the movie industry—we have to look at the events that led to the situation Grey was hired to fix.

When Viacom Chairman Sumner Redstone bought Paramount Pictures in 1993, he inherited Sherry Lansing as studio chief and decided to keep her on. Until just a few years ago, that seemed brilliant, for, under Lansing, Paramount won best picture awards for "Forrest Gump," "Braveheart" and "Titanic" and posted its two highest-grossing years ever. So successful was Lansing that she became, simply, "Sherry"—as if she were the only Sherry in town. But Lansing's reputation soon plunged, and her tenure would not survive the duration of her contract.

In mathematical terms there is both a short and long explanation for Lansing's fate. First, the short answer. Look at this series of numbers: 11.4%, 10.6%, 11.3%, 7.4%, 7.1%, 6.7%. Notice something? So did Redstone, for those six numbers represent the market share of Paramount's Motion Picture Group for the final six years of Lansing's tenure between 1999 and 2004. The trend caused BusinessWeek to speculate that Lansing "may simply no longer have Hollywood's hot hand." In November 2004, she announced she was leaving, and a few months later Grey was brought on board.

How could a sure-fire genius lead a company to seven great years, then fail practically overnight?

There had been plenty of theories explaining Lansing's earlier success. Prior to 2001, Lansing had been praised for making Paramount one of Hollywood's best-run studios, with an ability to turn out $100 million hits from conventional stories. But when her fortune changed, the revisionists took over. Her penchant for making successful remakes and sequels became a drawback. She was now blamed for green-lighting box-office dogs such as "Timeline" and "Lara Croft Tomb Raider: The Cradle of Life." Suddenly, the conventional wisdom was that Lansing was risk-averse, old-fashioned and out of touch with trends. Most damning of all, perhaps, was the notion that her failure was due to her "middle-of-the-road tastes."

But can she really be blamed for thinking that a Michael Crichton bestseller would be promising movie fodder? And where were all the "Lara Croft" critics when the first "Tomb Raider" film took in $131 million in box-office revenue? Even if the theories of Lansing's shortcomings were plausible, consider how abruptly her demise occurred. Did she become risk-averse and out-of-touch overnight?

In theoretical physics, the field in which I was trained, a theory's greatest triumph is to predict something that is later confirmed. Some modern-day scientists go for less, a kind of confirmation-lite, in which a new theory is accepted not because it correctly predicts new phenomena but because it verifies things that we already know. In the physics world, the sometimes derogatory term for this is postdiction—the "prediction" of something after the fact.

Postdiction is less impressive than prediction. But as the final chapter of Lansing's career shows, postdiction is how Hollywood does business.

Academic research provides an alternate theory of Lansing's rise and fall: It was just plain luck. After all, a film's path from Lansing's greenlight to opening weekend is subject to unforeseen influences ranging from bad chemistry on the set to nasty competition in the theaters, and even after the movie is in the can its appeal is difficult to judge. So one could argue that what is farfetched is not the comparison of Lansing's success and failure to the tossing of darts, but rather the belief that a studio chief's taste can really matter. That's not a popular viewpoint in Hollywood, but there are exceptions, such as former studio executive David Picker, who was quoted in "Adventures in the Screen Trade" as having admitted, "If I had said yes to all the projects I turned down, and no to all the ones I took, it would have worked out about the same."

Few people—including Lansing—wish to discuss it, but in Lansing's case there's already evidence that she was fired because of the industry's flawed reasoning rather than her own flawed decision-making. It's too early to determine how Brad Grey is doing, because Paramount's 2005 films (and even half of 2006's) already were in the pipeline when Lansing left the company. But if we want to know roughly how Lansing would have done in some parallel universe in which she had not been forced out, all we need to do is look at the data from last year.

With films such as "War of the Worlds" and "The Longest Yard," Paramount had its best summer since 1994 and saw its market share rebound to nearly 10%. That isn't merely ironic—it's one of the characteristics of randomness called regression to the mean: In any series of random events, an extraordinary event is most likely to be followed, due purely to chance, by a more ordinary one. Thus an extraordinarily bad year is most likely to be followed by a better one.
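Regression to the mean is easy to see in a toy simulation. Assume, purely for illustration, that a studio's yearly market share is independent noise around a fixed 10% average (real market shares are not independent draws; the numbers here are made up). Conditioning on a bad year, the following year tends right back to the mean.

```python
import random

rng = random.Random(42)
mean_share, noise = 10.0, 2.0    # hypothetical studio: 10% average market share

# 100,000 simulated "years" of market share
years = [rng.gauss(mean_share, noise) for _ in range(100_000)]

# For every sub-7% year, look at the year that followed it
rebounds = [later for bad, later in zip(years, years[1:]) if bad < 7.0]
avg_after_bad = sum(rebounds) / len(rebounds)
print(f"average share the year after a sub-7% year: {avg_after_bad:.1f}%")
```

Even though the bad years average well below 7%, the years that follow them cluster right around 10%: nothing rebounded, because nothing was broken.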

A recent Variety headline read, "Parting Gifts: Old regime's pics fuel Paramount rebound," but one can't help but think that, had Viacom had more patience, the headline might have read, "Banner year puts Paramount and Lansing's career back on track."

Still, anecdotes are just anecdotes.

That's where the economists come in. "The moviemaking process is so complicated," says Anita Elberse of the Harvard Business School, "that at the green-lighting stage it is unclear whether you can even pull off making the movie that you think you are planning to make." Adds Charles Moul of Washington University in St. Louis: "There are two schools of thought. According to one, you can't know the appeal of a film until you've completed it, but once you have the movie you can run focus groups and determine whether it is a hit or a dog. According to the other school, you can't tell even then. Either way, it doesn't bode well for your ability to make $80-million green-lighting decisions that are more than just guesses."

The leading advocate of the second, more radical school of thought is Arthur De Vany, recently retired professor of economics and a member of the Institute for Mathematical Behavioral Sciences at UC Irvine. De Vany likes to illustrate the oddities of the film business by comparing films to breakfast cereal. If breakfast cereals were like films, he says, each time we visited the store we would find a large selection of new cereals, and only a few brands that survived from our last trip. Most of these cereals would languish unnoticed, but crowds would gather at certain parts of the aisle, scooping up the popular brands. And yet, within a few weeks, or at most months, even those popular brands would vanish from the shelves. And so our typical cereal breakfast would consist of a product we had never before tried, and very well might not like, but bought because we heard about it from friends or read of it in the newspaper cereal section.

That's precisely how films behave in the marketplace. If we hear good things, we go and perhaps tell others; if we hear bad things, we stay away. It's that process—the way consumers learn from others about the expected quality of the product—that De Vany found is the key to the odd behavior of the film business today. Economists call it an "information cascade."

"People's behavior is simple," De Vany says, "but in the aggregate it leads to a complex system, a system bordering on chaos."

The theory of chaotic systems grew popular in the 1970s among physicists who wanted to understand how phenomena described by a few simple variables could develop behavior so complex that it's virtually unpredictable. When computers were developed in the 1950s, some scientists believed we eventually could accurately predict and perhaps even control the development of rainstorms. They were thwarted by one of the trademarks of chaotic systems, a phenomenon scientists call the "butterfly effect." The term derives from a 1972 talk by mathematician/meteorologist Edward Lorenz, "Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?"

According to the butterfly effect, a small change in the early stages of a chaotic system can lead to such huge and complicated alterations in its later stages that its behavior appears random. In the case of weather, that makes long-term forecasts almost worthless. You can measure the basic parameters—temperature, pressure, humidity, wind velocity—at thousands of different points and plug them into your theoretical model, but if you miss by a tenth of a percent, the rainstorm you predict for Las Vegas on Thursday will show up as the snowstorm that hits Boise on Tuesday.
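The butterfly effect can be demonstrated with the logistic map, a standard toy example of a chaotic system (it is not a weather model, and the starting values below are arbitrary). Two trajectories that begin one part in a million apart end up wildly different within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map with r = 4, a textbook chaotic system."""
    return r * x * (1 - x)

# Two "forecasts" whose initial measurements differ by one part in a million
a, b = 0.400000, 0.400001
max_gap = 0.0
for step in range(40):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
print(f"largest divergence over 40 steps: {max_gap:.2f}")
```

The tiny initial discrepancy roughly doubles each step, so within a few dozen iterations the two trajectories bear no resemblance to each other: Las Vegas rain becomes Boise snow.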

In the film business the butterfly effect means that the budget, the genre, the star and the story might all appear to measure up, but if the co-star doesn't quite deliver on her charming smile, if the scenes don't play out just as you imagined them or if the country's mood changes by just a few degrees, then somewhere between the first day of principal photography and the day the movie opens the film that you predicted would take the country by storm instead creates a flurry of calls for your resignation. Films don't succeed or fail without reason, but the only reliable predictor of a film's box-office revenue in a given week is its take the prior week, and the best-laid plans of studio executives go awry as often as the 10-day weather forecast.

Of course, a studio can try to "make a film" through a massive marketing blitz. But although stars and a big ad budget can generate high initial revenues, De Vany's data show that such efforts only help in the opening weeks. After that, the information cascade takes over, and unless viewers like the film, the money spent on a wide release won't bring a return. In fact, if viewers don't like the film, a big ad campaign will create a large flow of negative feedback, killing the film faster than had the studio not pushed it. The result: a starless $18 million film such as "Home Alone" brings in more than $285 million while Kevin Costner's $175 million "Waterworld" dies a quick death, generating a disappointing $88 million.

Actors in Hollywood understand best that the industry runs on luck. As Bruce Willis once said, "If you can find out why this film or any other film does any good, I'll give you all the money I have." (For the record, the film to which he referred, 1993's "Striking Distance," didn't do any good.) Willis understands the unpredictability of the film business not simply because he's had box-office highs and lows. He knows that random events fueled his career from the beginning, and his story offers another case in point.

For seven years, starting in the late 1970s, Willis lived in a fifth-floor walk-up on 49th Street in Manhattan, struggling to make a name for himself off-Broadway and in television commercials. Meanwhile, he tended bar to make ends meet. He remained a minor actor no matter how hard he worked to get good roles, make the right career choices and excel in his trade. Then he made the best decision of his life: He flew to Los Angeles for the '84 Olympics.

While Willis was in L.A., an agent suggested that he go to a few television auditions. One was a show already in its final stages of casting. He landed the role of David Addison, the male lead paired with Cybill Shepherd in a new ABC offering called "Moonlighting." But choosing Willis was hardly a unanimous decision. Glenn Caron, the show's executive producer, liked Willis; the network executives thought he did not look like a serious lead. Viewers seemed to share their opinion: "Moonlighting" debuted on March 3, 1985, to low ratings. Luckily for Willis, in those days networks had patience, and the following season the show became a hit.

Willis had all the ingredients for stardom—acting talent, good looks, a unique personality—but so do many others who never make it big. For Willis, the coin landed heads enough times in a row that he hit the jackpot; for the unlucky fellow who would have won the "Moonlighting" lead had Willis not shown up, the coin took one bounce too many.

Other examples of Hollywood's unpredictability are easy to find. "The executives at Warner Bros. didn't think anyone wanted to watch a dark film about a woman boxer," says Harvard's Elberse. "They made 'Million Dollar Baby' because they have an ongoing relationship with Clint Eastwood." And who hasn't heard the tales of "Ishtar" (Warren Beatty + Dustin Hoffman + a $55-million budget = $14 million), or "Last Action Hero" (Arnold Schwarzenegger + $85 million = $50 million)?

In 1972 a young director named George Lucas shot a film called "American Graffiti" (1973) for less than $1 million. Universal had doubts about the finished film that eventually took in $115 million, and even graver doubts about Lucas' next idea. Lucas called the story "The Adventures of Luke Starkiller, as taken from 'The Journal of the Whills.' " Universal called it "unproduceable." Ultimately, Fox made the film, but its faith in the project only went so far—it paid Lucas only $100,000 to write and direct it; in exchange, Lucas received the sequel and merchandising rights. In the end, "Star Wars" took in $461 million on a budget of $11 million, and Lucas had himself an empire.

If hits are so hard to predict, why does it often appear that certain people, at certain times, have a hot hand?

The work of former UC Berkeley professor Daniel Kahneman helps explain this. While at the Hebrew University in Jerusalem in the 1970s, Kahneman and his co-worker Amos Tversky studied people's misconceptions of randomness and their effect on the way we make decisions. Their research proved so influential in understanding how people make financial decisions that in 2002 Kahneman won the Nobel Prize in economics.

One of the questions Kahneman liked to put to his subjects concerned sequences of coin tosses. For instance, in a toss of seven coins, which of the following head-tail combinations is more likely to occur: HHHHTTT or HTHTTHT? Most people erroneously believe that the first sequence is less likely than the second, but the two sequences, like every other sequence of seven heads and tails, are equally probable.
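The claim is easy to verify directly: a fair coin makes every specific seven-flip sequence occur with probability (1/2)^7 = 1/128. Here is a minimal sketch in Python (my illustration, not from the article):

```python
from itertools import product
from fractions import Fraction

# Each flip of a fair coin is independent, so any one specific
# seven-flip sequence has probability (1/2)^7.
p = Fraction(1, 2) ** 7
print(p)  # prints 1/128

# Enumerate all 2^7 = 128 possible sequences and confirm that the two
# sequences from the text each appear exactly once, i.e. the "orderly"
# HHHHTTT is no rarer than the "random-looking" HTHTTHT.
all_seqs = {''.join(s) for s in product('HT', repeat=7)}
assert 'HHHHTTT' in all_seqs and 'HTHTTHT' in all_seqs
assert len(all_seqs) == 128
```

The intuition people trip over is that "four heads then three tails" looks like a pattern while HTHTTHT looks typical, but as specific sequences each is exactly one of the 128 equally likely outcomes.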

Not only are people bad at recognizing random processes, they also are easily fooled into thinking they are controlling them. Sociologists first noticed this while observing gamblers in Las Vegas. Dice players, they noted, act as if tossing the dice is a game of skill. They throw them softly if they want low numbers, or hard for high ones. Much like Hollywood executives, gamblers have their theories about how to make lucky throws.

The temptation to believe that you or others are causing chance events is so strong that psychologists coined a term for it: the illusion of control. In a classic study, psychologists Ellen J. Langer and Jane Roth recruited Yale undergraduate psychology majors to watch an experimenter flip a coin 30 times. One by one, the subjects watched the flips and tried to guess how each coin would land. Although students at an Ivy League university are surely aware that a coin toss is a random event, those who experienced early winning streaks developed an irrational confidence that they were "good" at intuiting the coin toss. Forty percent said their results would improve with practice; 25% even reported that if they were distracted during a future test, their performance would suffer.

Although economists and psychologists have no problem understanding Hollywood's randomness, Hollywood executives, not surprisingly, are generally less convinced. "They are hostile to the 'nobody knows anything' school of thought," says Moul, "because it completely undercuts what they do." Jehoshua Eliashberg of the University of Pennsylvania's Wharton business school says that, unlike executives in other industries he has analyzed, in Hollywood "most executives feel threatened."

One Hollywood executive who spoke up against De Vany's work in the late 1990s was Frank Biondi, who ran Universal. Biondi thought he had it figured out. After running the numbers, he concluded that the industry was not as chaotic as it appeared. Films that cost more than $40 million had the highest return on capital, he said, and so the Harvard MBA directed his studio's dollars toward films he called "impact movies."

De Vany scoffs at such notions. "A naive analysis will often present false patterns," he says, "like faces in the clouds. But a careful study reveals that no strategy the studios devise is going to give them any kind of advantage at all." Then he adds, "So any studio executive getting paid more than the salary of a comparable executive at your local dairy is getting paid too much."

Who is right? In the case of Biondi and his strategy, the jury has delivered its verdict. Two years of impact movies later, with depressed film earnings and no relief in sight, Biondi was fired, leaving behind a legacy of film gems such as "Meet Joe Black" ($90 million budget, a feeble $44 million box office) and "Babe: Pig in the City" ($90 million budget, $18 million box office).

Old style seat-of-the-pants executives also object to the randomness theory. White-haired seventysomething Richard Zanuck, currently developing the upcoming Tim Burton-directed Jim Carrey film, "Ripley's Believe It or Not," is the son of 20th Century Fox founder Darryl F. Zanuck. Dick Zanuck ran production at Fox and then briefly ran the studio until some major dogs such as 1967's "Doctor Dolittle," 1968's "Star!" and 1969's "Hello, Dolly!" crippled the studio financially and led his dad to fire him. Zanuck says he understands his being fired. "You don't keep someone on endlessly hoping something will hit," he told me. "If you have a year of picking badly, you're walking down the street looking for a job."

In Zanuck's case, as in Lansing's, his bad streak ended and regression to the mean took over, but not in time to save his job. The films he developed before he got canned ended up doing well, and two of them, in fact, won best-picture Academy Awards—1970's "Patton" and 1971's "The French Connection."
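Regression to the mean is worth making concrete: even if every executive has identical skill, a bad year is mostly bad luck, so the same people tend to look average again the following year. The following toy simulation (my illustration with made-up numbers, echoing Zanuck's "20 pictures a year" framing, not his actual record) shows the effect:

```python
import random

random.seed(42)

# Toy model: 100 executives each green-light 20 films a year, and every
# film is an independent draw with the same 30% chance of being a hit.
# Skill plays no role here; only luck separates the executives.
HIT_RATE, FILMS, EXECS = 0.30, 20, 100

def hits(n):
    """Number of hits among n films, each an independent 30% chance."""
    return sum(random.random() < HIT_RATE for _ in range(n))

year1 = [hits(FILMS) for _ in range(EXECS)]

# "Fire" roughly the bottom quartile after a bad year...
cutoff = sorted(year1)[EXECS // 4]
fired = [i for i in range(EXECS) if year1[i] <= cutoff]

# ...then watch how those same people would have done the next year.
year2 = [hits(FILMS) for _ in fired]

avg1 = sum(year1[i] for i in fired) / len(fired)
avg2 = sum(year2) / len(year2)
print(f"fired execs, year 1: {avg1:.1f} hits; year 2: {avg2:.1f} hits")
```

Because the executives fired after year one were selected precisely for their unlucky results, their year-two average drifts back toward the overall mean of six hits, just as Zanuck's post-firing films did.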

I asked him if he thought he was fired prematurely.

"I don't think it hurt my career."

It certainly didn't. A few years later, Zanuck became the man responsible for Steven Spielberg's 1974 feature debut, "The Sugarland Express," as well as Spielberg's 1975 follow-up, "Jaws" (which took in $260 million on a budget of about $7 million). Did he feel "Jaws" would be a hit of historic proportions? "We didn't have any idea," he says. "We bought it from a manuscript, and the book became a bestseller while we were still doing the film."

Zanuck's career illustrates the randomness theory. He has made successful and unsuccessful films, and he obviously hasn't had an inkling in advance which would be which. But Zanuck disagrees with that take.

"True," he says, "nobody can pick a hit in advance because unpredictable things happen to each individual picture. But if you average over a five-year time span, over 100 pictures, 20 a year, the guys with talent will have a higher rate of success. You have to judge someone by their entire career."

Moul sympathizes with Zanuck's point of view. De Vany, too, understands what Zanuck is talking about. "Zanuck's father," he says, "and Thalberg and Disney had records of success that went far beyond chance. They were showmen. They had a knack for picking good stories. But they also had real power over their product and its distribution." They made movies the old-fashioned way: Prior to the 1960s, studios were able to integrate production (including actors and directors on long-term contracts) with large-scale exhibition interests. That meant the studio heads not only had complete creative and budgetary control, they also controlled the screens so they could adjust the release pattern as a film ran, making it less vulnerable to the information cascade.

Why are smart people in Hollywood blind to the randomness that rules their industry? Because we find comfort in having control. And then there are our egos. We like to believe in our own power.

But Langer also uncovered another important factor: competition. In the Yale coin-flip study, for example, most of the students assessed themselves as better than their counterparts, even though the game was clearly no more than a series of random events.

And so we turn back to Hollywood, where both ego and competition reign supreme, and those involved in the game find it hard to believe that success and failure lie beyond their control. What lessons can we draw from all this?

De Vany's voice rises. "Today's Hollywood executives all act like wimps," he says. "They don't control their budgets. They give the actors anything they want. They rely on the easy answers, so they try to mimic past successes and cave in to the preposterous demands of stars. My research shows you don't have to do that. It's just an easy way out, an illusion."

Then he adds: "But, hey, it's Hollywood. Why should we expect the way they run the business to be any more real than the films themselves?"

Leonard Mlodinow is the author of several books on physics and mathematics, including "Feynman's Rainbow," "Euclid's Window: The Story of Geometry from Parallel Lines to Hyperspace" and, with Stephen Hawking, "A Briefer History of Time."