Saturday, July 02, 2005

What does it mean when we can listen to something instead of reading it?

And how will the role of the written word change?

---

iPod to the Rescue: Can Digital Audio Save Publishing?
June 29, 2005
By Rachel Deahl

At the South Huntington Public Library in South Huntington, N.Y., one of the most popular programs doesn’t involve books (in the strictest sense), or even reading (in the strictest sense). The big hit? Books on iPod. Library director Ken Weil says the branch purchased 14 iPod Shuffles in March that members can check out with pre-downloaded audiobooks. (And no, they don’t play the chapters in shuffle mode.) The iPods, Weil says, are “always out.”

That folks can pick up a gadget approximately the size of a cigarette lighter at their local library, programmed with a current bestseller for their listening pleasure, is the realization of countless sci-fi movies and Philip K. Dick novels. The future has clearly arrived: Apple’s immensely popular iPod—the company shipped 5.3 million of the variously priced and sized devices in its second fiscal quarter of 2005 alone—is making consumers more comfortable with the idea of downloading audiobooks and listening on-the-go. So could digital audiobooks (DABs)—which are more accessible, hip and cost-effective than traditional formats like cassettes and CDs—be the next big thing?

“[Digital audio] is the fastest-growing area of publishing,” says Lori Bell, head librarian at Mid-Illinois Talking Book Center in Peoria, Ill. “Books have stayed the same, but the audio publishing industry is the only one that’s really growing quickly.” For an industry constantly confronting the fear that it is thoroughly invested in a dying product, the growing popularity of DABs may point to salvation, promising to bring in younger, and more, consumers.

“Our download sales have gone up steadily over the course of the past year or so,” concurs Chris Lynch, executive vice president and publisher of Simon & Schuster Audio. “There’s definitely been an increase in [digital downloading of audiobooks], whether it’s to iPods or other devices like smart phones.” Such devices, says Steve Potash, CEO of OverDrive, one of the major distributors of DABs to libraries, are “changing the landscape [of the audiobook market] dramatically.”

One of the companies basking particularly comfortably in the success of DABs is Audible. The leading provider of spoken-word audio, Audible partnered with Apple in September 2004 to sell digital audio titles through the computer maker’s popular online music store, iTunes. According to David Joseph, Audible’s vice president of communications and strategy, 14 percent of the company’s revenue in the first quarter of 2005 came from purchases through iTunes.

Audible, which saw revenue of $34.3 million last year and attracted 72,000 new customers over its last fiscal quarter, is watching as the audiobook market expands in dramatic fashion. “We’ve been providing digital audio since 1997,” Joseph says, “but we hadn’t seen accelerated growth until the last several years.” The way Joseph sees it, these devices are “freeing” spoken-word audio in as dramatic a way as “printers freed text from computers” a generation ago.

Although iPods remain a top choice for downloadable audio (Lynch says he has seen a “noticeable” increase in the sale of digital audio since Audible partnered with iTunes), other gadgets are coming into the fold. Audible, for instance, allows users to download titles to 135 different devices including PDAs, pocket PCs and smart phones.

While cassettes were eclipsed years ago by CDs as the popular format for audiobooks (Lynch says publishing on cassette has become “more the exception than the rule”), the digital format offers even more advantages than CDs (which often, like cassettes, require consumers to deal with multiple discs to listen to a single book). Currently, one of the most attractive aspects is their price: often nearly 40% less than traditional audiobooks. With the lower price may come younger, first-time buyers. “In general, I think price is not a huge deal for our regular customers,” says Lynch. “But I think it is for people who are new to the format.” Lynch says that although Simon & Schuster Audio isn’t approaching its list yet with an eye toward titles skewing to younger listeners, his team is “paying close attention” to how titles geared to younger demographics fare.

Hilary Rubin, an audio rights agent at Trident Media Group, says that she’s seen a shift in the sale of audio rights for YA titles over the past few years. “I think there’s a direct connection to iPods,” she says. “Publishers are starting to see there is a market for this.”

In addition to YA titles, Rubin says, there is also a market for DABs geared to 20- and 30-somethings. Citing successes like Jon Stewart’s America (The Book) (Rubin’s agency reps the comedian/author), which sold very well in digital audio, Rubin thinks the audiobook market is accepting the idea that younger listeners are out there. “Warner did a lot of youth-targeted promotions for [America] and younger generations were buying that book more than 60-year-olds. I think this proves that if you market to younger readers, they’ll buy.”

Audible is also recognizing the potential market for DABs among younger consumers. The company is launching Audible Education in the third quarter of this year, through a partnership with the textbook publisher Pearson, to make inroads to the college market. As part of the deal, Audible will produce 100 audio study guides to be sold alongside the print editions of Pearson’s textbooks. The guides, which are to include chapter reviews and other information, will essentially allow students to listen to their homework while doing laundry or working out.

In an additional effort to bring digital audio to another audience—romance readers—Audible will be rolling out a 72-title program with romance-category leader Harlequin. The titles, 35 of which will be new releases (Audible will be producing the audio tracks for these editions), will all be available in digital audio for the first time.

Of course, younger readers aren’t the only ones flocking to DABs. Older readers, who were always considered the main demographic for books on tape, are also warming to the idea of downloading titles. And one place they’re finding DABs is at their local libraries. Along with South Huntington’s Books-on-iPod program, similar initiatives have become mainstays at libraries across the country. In one of the more publicized DAB programs, the New York Public Library announced on June 6 that it would be offering downloadable books through its website. (To address piracy concerns, all the files are copy-protected and designed to “expire” in 21 days.)

Potash, whose OverDrive launched last year and is now one of the major companies providing libraries with digital audio, sees thousands of OverDrive users download audiobooks each month. Because the library market typically skews to an older audience, Potash believes that the core group of listeners who once enjoyed the traditional, clunkier versions of audiobooks has gone digital. (A recent poll by the Denver Public Library, for instance, showed that the largest group of members checking out its DABs was age 44 or older, with a third of them listening to the files on their computers.) “You can’t really pigeonhole the listening group,” Potash says, “but a mainstay of it is 40-plus and people looking for drive-time [books].”

As more consumers start downloading digital audiobooks, and more retailers start featuring them, the sky seems to be the limit for this new market. (One indicator: Amazon, which currently sells DABs through a partnership with Audible, recently announced plans to launch its own digital audio store.) This kind of vast potential and unbridled enthusiasm is a rarity in the book business, and a welcome one. “We’re all at the cusp of this technology,” says rights agent Rubin. “Everyone is trying to figure out the best method for taking advantage of it.”

© 2005 VNU eMedia Inc. All rights reserved.

Time to iterate.

Mathematics Is Biology's Next Microscope, Only Better; Biology Is Mathematics' Next Physics, Only Better

Joel E. Cohen

Joel E. Cohen is at the Laboratory of Populations, Rockefeller and Columbia Universities, New York, New York, United States of America. E-mail: cohen@rockefeller.edu

Published: December 14, 2004

DOI: 10.1371/journal.pbio.0020439

Copyright: © 2004 Joel E. Cohen. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Citation: Cohen JE (2004) Mathematics Is Biology's Next Microscope, Only Better; Biology Is Mathematics' Next Physics, Only Better. PLoS Biol 2(12): e439


---

Although mathematics has long been intertwined with the biological sciences, an explosive synergy between biology and mathematics seems poised to enrich and extend both fields greatly in the coming decades (Levin 1992; Murray 1993; Jungck 1997; Hastings et al. 2003; Palmer et al. 2003; Hastings and Palmer 2003). Biology will increasingly stimulate the creation of qualitatively new realms of mathematics. Why? In biology, ensemble properties emerge at each level of organization from the interactions of heterogeneous biological units at that level and at lower and higher levels of organization (larger and smaller physical scales, faster and slower temporal scales). New mathematics will be required to cope with these ensemble properties and with the heterogeneity of the biological units that compose ensembles at each level.

The discovery of the microscope in the late 17th century caused a revolution in biology by revealing otherwise invisible and previously unsuspected worlds. Western cosmology from classical times through the end of the Renaissance envisioned a system with three types of spheres: the sphere of man, exemplified by his imperfectly round head; the sphere of the world, exemplified by the imperfectly spherical earth; and the eight perfect spheres of the universe, in which the seven (then known) planets moved and the outer stars were fixed (Nicolson 1960). The discovery of a microbial world too small to be seen by the naked eye challenged the completeness of this cosmology and unequivocally demonstrated the existence of living creatures unknown to the Scriptures of Old World religions.

Mathematics broadly interpreted is a more general microscope. It can reveal otherwise invisible worlds in all kinds of data, not only optical. For example, computed tomography can reveal a cross-section of a human head from the density of X-ray beams without ever opening the head, by using the Radon transform to infer the densities of materials at each location within the head (Hsieh 2003). Charles Darwin was right when he wrote that people with an understanding “of the great leading principles of mathematics… seem to have an extra sense” (F. Darwin 1905). Today's biologists increasingly recognize that appropriate mathematics can help interpret any kind of data. In this sense, mathematics is biology's next microscope, only better.
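
To make the idea concrete, here is a minimal sketch (not the clinical algorithms Hsieh 2003 describes) that builds a toy two-density "head" with NumPy, simulates X-ray projections with scikit-image's Radon transform, and infers the interior densities from the projections alone; the phantom, sizes, and angles are invented purely for illustration.

    # A rough illustration: build a toy cross-section, simulate X-ray projections,
    # and infer the interior densities from the projections alone.
    # Assumes NumPy and scikit-image; the phantom and parameters are invented.
    import numpy as np
    from skimage.transform import radon, iradon

    size = 128
    y, x = np.mgrid[:size, :size]
    phantom = np.zeros((size, size))
    phantom[(x - 64) ** 2 + (y - 64) ** 2 < 50 ** 2] = 1.0   # the "head"
    phantom[(x - 80) ** 2 + (y - 60) ** 2 < 15 ** 2] = 2.0   # a denser region inside it

    # Simulate beam measurements from many angles (the sinogram)...
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(phantom, theta=angles, circle=True)

    # ...and reconstruct the cross-section without ever "opening the head".
    reconstruction = iradon(sinogram, theta=angles, circle=True)
    print("mean absolute reconstruction error:",
          float(np.abs(reconstruction - phantom).mean()))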

Conversely, mathematics will benefit increasingly from its involvement with biology, just as mathematics has already benefited and will continue to benefit from its historic involvement with physical problems. From classical times onward, physics, first as an applied and then as a basic science, stimulated enormous advances in mathematics. For example, geometry reveals by its very etymology (literally, "earth measurement") its origin in the need to survey the lands and waters of Earth. Geometry was used to lay out fields in Egypt after the flooding of the Nile, to aid navigation, and to plan cities. The inventions of the calculus by Isaac Newton and Gottfried Leibniz in the later 17th century were stimulated by physical problems such as planetary orbits and optical calculations.

In the coming century, biology will stimulate the creation of entirely new realms of mathematics. In this sense, biology is mathematics' next physics, only better. Biology will stimulate fundamentally new mathematics because living nature is qualitatively more heterogeneous than non-living nature. For example, it is estimated that there are 2,000–5,000 species of rocks and minerals in the earth's crust, generated from the hundred or so naturally occurring elements (Shipman et al. 2003; chapter 21 estimates 2,000 minerals in Earth's crust). By contrast, there are probably between 3 million and 100 million biological species on Earth, generated from a small fraction of the naturally occurring elements. If species of rocks and minerals may validly be compared with species of living organisms, the living world has at least a thousand times the diversity of the non-living. This comparison omits the enormous evolutionary importance of individual variability within species. Coping with the hyper-diversity of life at every scale of spatial and temporal organization will require fundamental conceptual advances in mathematics.

The Past

The interactions between mathematics and biology at present follow from their interactions over the last half millennium. The discovery of the New World by Europeans approximately 500 years ago—and of its many biological species not described in religious Scriptures—gave impetus to major conceptual progress in biology.

The outstanding milestone in the early history of biological quantitation was the work of William Harvey, Exercitatio Anatomica De Motu Cordis et Sanguinis In Animalibus (An Anatomical Disquisition on the Motion of the Heart and Blood in Animals) (Harvey 1847), first published in 1628. Harvey's demonstration that the blood circulates was the pivotal founding event of the modern interaction between mathematics and biology. His elegant reasoning is worth understanding.

From the time of the ancient Greek physician Galen (131–201 C.E.) until William Harvey studied medicine in Padua (1600–1602, while Galileo was active there), it was believed that there were two kinds of blood, arterial blood and venous blood. Both kinds of blood were believed to ebb and flow under the motive power of the liver, just as the tides of the earth ebbed and flowed under the motive power of the moon. Harvey became physician to the king of England. He used his position of privilege to dissect deer from the king's deer park as well as executed criminals. Harvey observed that the veins in the human arm have one-way valves that permit blood to flow from the periphery toward the heart but not in the reverse direction. Hence the theory that the blood ebbs and flows in both veins and arteries could not be correct.

Harvey also observed that the heart was a contractile muscle with one-way valves between the chambers on each side. He measured the volume of the left ventricle of dead human hearts and found that it held about two ounces (about 60 ml), varying from 1.5 to three ounces in different individuals. He estimated that at least one-eighth and perhaps as much as one-quarter of the blood in the left ventricle was expelled with each stroke of the heart. He measured that the heart beat 60–100 times per minute. Therefore, the volume of blood expelled from the left ventricle per hour was about 60 ml × 1/8 × 60 beats/minute × 60 minutes/hour, or 27 liters/hour. However, the average human has only 5.5 liters of blood (a quantity that could be estimated by draining a cadaver). Therefore, the blood must be like a stage army that marches off one side of the stage, returns behind the scenes, and reenters from the other side of the stage, again and again. The large volume of blood pumped per hour could not possibly be accounted for by the then-prevalent theory that the blood originated from the consumption of food. Harvey inferred that there must be some small vessels that conveyed the blood from the outgoing arteries to the returning veins, but he was not able to see those small vessels. His theoretical prediction, based on his meticulous anatomical observations and his mathematical calculations, was spectacularly confirmed more than half a century later when Marcello Malpighi (1628–1694) saw the capillaries under a microscope. Harvey's discovery illustrates the enormous power of simple, off-the-shelf mathematics combined with careful observation and clear reasoning. It set a high standard for all later uses of mathematics in biology.
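
Harvey's back-of-the-envelope arithmetic is easy to reproduce; the short sketch below simply multiplies out the figures quoted above, taking the conservative end of each estimate.

    # Harvey's cardiac-output estimate, multiplying out the figures quoted above.
    ventricle_ml = 60.0          # left ventricle holds about two ounces (~60 ml)
    fraction_expelled = 1 / 8    # conservative fraction expelled per beat
    beats_per_minute = 60        # lower end of the measured 60-100 beats per minute

    ml_per_hour = ventricle_ml * fraction_expelled * beats_per_minute * 60
    print(ml_per_hour / 1000, "liters per hour")   # 27.0, versus about 5.5 liters of blood in the body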

Mathematics was crucial in the discovery of genes by Mendel (Orel 1984) and in the theory of evolution. Mathematics was and continues to be the principal means of integrating evolution and genetics since the classic work of R. A. Fisher, J. B. S. Haldane, and S. Wright in the first half of the 20th century (Provine 2001).

Over the last 500 years, mathematics has made amazing progress in each of its three major fields: geometry and topology, algebra, and analysis. This progress has enriched all the biological sciences.

In 1637, René Descartes linked the featureless plane of Greek geometry to the symbols and formulas of Arabic algebra by imposing a coordinate system (conventionally, a horizontal x-axis and a vertical y-axis) on the geometric plane and using numbers to measure distances between points. If every biologist who plotted data on x–y coordinates acknowledged the contribution of Descartes to biological understanding, the key role of mathematics in biology would be uncontested.

Another highlight of the last five centuries of geometry was the invention of non-Euclidean geometries (1823–1830). Shocking at first, these geometries unshackled the possibilities of mathematical reasoning from the intuitive perception of space. These non-Euclidean geometries have made significant contributions to biology in facilitating, for example, mapping the brain onto a flat surface (Hurdal et al. 1999; Bowers and Hurdal 2003).

In algebra, efforts to find the roots of equations led to the discovery of the symmetries of roots of equations and thence to the invention of group theory, which finds routine application in the study of crystallographic groups by structural biologists today. Generalizations of single linear equations to families of simultaneous multi-variable linear equations stimulated the development of linear algebra and the European re-invention and naming of matrices in the mid-19th century. The use of a matrix of numbers to solve simultaneous systems of linear equations can be traced back in Chinese mathematics to the period from 300 B.C.E. to 200 C.E. (in the Chiu Chang Suan Shu, or Nine Chapters on the Mathematical Art; Smoller 2001). In the 19th century, matrices were considered the epitome of useless mathematical abstraction. Then, in the 20th century, it was discovered, for example, that the numerical processes required for the cohort-component method of population projection can be conveniently summarized and executed using matrices (Keyfitz 1968). Today the use of matrices is routine in agencies responsible for making official population projections as well as in population-biological research on human and nonhuman populations (Caswell 2001).
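
As a hedged illustration of the cohort-component idea, and not a reconstruction of Keyfitz (1968) or Caswell (2001), the toy projection below repeatedly multiplies an age-structured population vector by an invented projection (Leslie) matrix.

    # Toy cohort-component (Leslie matrix) projection with three age classes.
    # The fertility and survival rates are invented for illustration only.
    import numpy as np

    L = np.array([
        [0.0, 1.2, 0.8],   # row 0: births contributed by each age class
        [0.7, 0.0, 0.0],   # survival from class 0 to class 1
        [0.0, 0.5, 0.0],   # survival from class 1 to class 2
    ])

    n = np.array([100.0, 80.0, 60.0])   # current numbers in each age class
    for step in range(5):               # project five time steps ahead
        n = L @ n
        print(f"step {step + 1}: {np.round(n, 1)}")

    # The dominant eigenvalue of L gives the asymptotic growth rate per time step.
    print("asymptotic growth rate:", round(max(abs(np.linalg.eigvals(L))), 3))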

Finally, analysis, including the calculus of Newton and Leibniz and probability theory, is the line between ancient thought and modern thought. Without an understanding of the concepts of analysis, especially the concept of a limit, it is not possible to grasp much of modern science, technology, or economic theory. Those who understand the calculus, ordinary and partial differential equations, and probability theory have a way of seeing and understanding the world, including the biological world, that is unavailable to those who do not.

Conceptual and scientific challenges from biology have enriched mathematics by leading to innovative thought about new kinds of mathematics. Table 1 lists examples of new and useful mathematics arising from problems in the life sciences broadly construed, including biology and some social sciences. Many of these developments blend smoothly into their antecedents and later elaborations. For example, game theory has a history before the work of John von Neumann (von Neumann 1959; von Neumann and Morgenstern 1953), and Karl Pearson's development of the correlation coefficient (Pearson and Lee 1903) rested on earlier work by Francis Galton (1889).

Table 1. Mathematics Arising from Biological Problems



The Present

To see how the interactions of biology and mathematics may proceed in the future, it is helpful to map the present landscapes of biology and applied mathematics.

The biological landscape may be mapped as a rectangular table with different rows for different questions and different columns for different biological domains. Biology asks six kinds of questions. How is it built? How does it work? What goes wrong? How is it fixed? How did it begin? What is it for? These are questions, respectively, about structures, mechanisms, pathologies, repairs, origins, and functions or purposes. The former teleological interpretation of purpose has been replaced by an evolutionary perspective. Biological domains, or levels of organization, include molecules, cells, tissues, organs, individuals, populations, communities, ecosystems or landscapes, and the biosphere. Many biological research problems can be classified as the combination of one or more questions directed to one or more domains.

In addition, biological research questions have important dimensions of time and space. Timescales of importance to biology range from the extremely fast processes of photosynthesis to the billions of years of living evolution on Earth. Relevant spatial scales range from the molecular to the cosmic (cosmic rays may have played a role in evolution on Earth). The questions and the domains of biology behave differently on different temporal and spatial scales. The opportunities and the challenges that biology offers mathematics arise because the units at any given level of biological organization are heterogeneous, and the outcomes of their interactions (sometimes called “emergent phenomena” or “ensemble properties”) on any selected temporal and spatial scale may be substantially affected by the heterogeneity and interactions of biological components at lower and higher levels of biological organization and at smaller and larger temporal and spatial scales (Anderson 1972, 1995).

The landscape of applied mathematics is better visualized as a tetrahedron (a pyramid with a triangular base) than as a matrix with temporal and spatial dimensions. (Mathematical imagery, such as a tetrahedron for applied mathematics and a matrix for biology, is useful even in trying to visualize the landscapes of biology and mathematics.) The four main points of the applied mathematical landscape are data structures, algorithms, theories and models (including all pure mathematics), and computers and software. Data structures are ways to organize data, such as the matrix used above to describe the biological landscape. Algorithms are procedures for manipulating symbols. Some algorithms are used to analyze data, others to analyze models. Theories and models, including the theories of pure mathematics, are used to analyze both data and ideas. Mathematics and mathematical theories provide a testing ground for ideas in which the strength of competing theories can be measured. Computers and software are an important, and frequently the most visible, vertex of the applied mathematical landscape. However, cheap, easy computing increases the importance of theoretical understanding of the results of computation. Theoretical understanding is required as a check on the great risk of error in software, and to bridge the enormous gap between computational results and insight or understanding.

The landscape of research in mathematics and biology contains all combinations of one or more biological questions, domains, time scales, and spatial scales with one or more data structures, algorithms, theories or models, and means of computation (typically software and hardware). The following example from cancer biology illustrates such a combination: the question, “how does it work?” is approached in the domain of cells (specifically, human cancer cells) with algorithms for correlation and hierarchical clustering.

Gene expression and drug activity in human cancer.

Suppose a person has a cancer. Could information about the activities of the genes in the cells of the person's cancer guide the use of cancer-treatment drugs so that more effective drugs are used and less effective drugs are avoided? To suggest answers to this question, Scherf et al. (2000) ingeniously applied off-the-shelf mathematics, specifically, correlation—invented nearly a century earlier by Karl Pearson (Pearson and Lee 1903) in a study of human inheritance—and clustering algorithms, which apparently had multiple sources of invention, including psychometrics (Johnson 1967). They applied these simple tools to extract useful information from, and to combine for the first time, enormous databases on molecular pharmacology and gene expression (http://discover.nci.nih.gov/arraytools/). They used two kinds of information from the drug discovery program of the National Cancer Institute. The first kind of information described gene expression in 1,375 genes of each of 60 human cancer cell lines. A target matrix T had, as the numerical entry in row g and column c, the relative abundance of the mRNA transcript of gene g in cell line c. The drug activity matrix A summarized the pharmacology of 1,400 drugs acting on each of the same 60 human cancer cell lines, including 118 drugs with “known mechanism of action.” The number in row d and column c of the drug activity matrix A was the activity of drug d in suppressing the growth of cell line c, or, equivalently, the sensitivity of cell line c to drug d. The target matrix T for gene expression contained 82,500 numbers, while the drug activity matrix A had 84,000 numbers.

These two matrices have the same set of column headings but have different row labels. Given the two matrices, precisely five sets of possible correlations could be calculated, and Scherf et al. calculated all five. (1) The correlation between two different columns of the activity matrix A led to a clustering of cell lines according to their similarity of response to different drugs. (2) The correlation between two different columns of the target matrix T led to a clustering of the cell lines according to their similarity of gene expression. This clustering differed very substantially from the clustering of cell lines by drug sensitivity. (3) The correlation between different rows of the activity matrix A led to a clustering of drugs according to their activity patterns across all cell lines. (4) The correlation between different rows of the target matrix T led to a clustering of genes according to the pattern of mRNA expressed across the 60 cell lines. (5) Finally, the correlation between a row of the activity matrix A and a row of the target matrix T described the positive or negative covariation of drug activity with gene expression. A positive correlation meant that the higher the level of gene expression across the 60 cancer cell lines, the higher the effectiveness of the drug in suppressing the growth of those cell lines. The result of analyzing several hundred thousand experiments is summarized in a single picture called a clustered image map (Figure 1). This clustered image map plots gene expression–drug activity correlations as a function of clustered genes (horizontal axis) and clustered drugs (showing only the 118 drugs with “known function”) on the vertical axis (Weinstein et al. 1997).
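
A minimal sketch of the fifth calculation, and of the clustering that orders the map, follows; the matrices here are small random stand-ins for the 1,375 × 60 and 1,400 × 60 matrices of Scherf et al., and the code illustrates the technique rather than reproducing their pipeline.

    # Stand-in data: T is genes x cell lines (expression), A is drugs x cell lines (activity).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, leaves_list

    rng = np.random.default_rng(0)
    n_genes, n_drugs, n_lines = 50, 30, 60           # small stand-ins, not 1,375 and 1,400
    T = rng.normal(size=(n_genes, n_lines))          # expression of gene g in cell line c
    A = rng.normal(size=(n_drugs, n_lines))          # activity of drug d against cell line c

    # (5) Correlate each drug's activity profile with each gene's expression profile.
    Tz = (T - T.mean(axis=1, keepdims=True)) / T.std(axis=1, keepdims=True)
    Az = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
    corr = Az @ Tz.T / n_lines                       # drugs x genes matrix of Pearson correlations

    # Cluster drugs and genes by their correlation patterns and reorder the map,
    # which is essentially how a clustered image map is laid out.
    drug_order = leaves_list(linkage(corr, method="average"))
    gene_order = leaves_list(linkage(corr.T, method="average"))
    clustered_map = corr[np.ix_(drug_order, gene_order)]
    print(clustered_map.shape)                       # (30, 50): drugs as rows, genes as columns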

Figure 1. Clustered Image Map of Gene Expression–Drug Activity Correlations

Plotted as a function of 1,376 clustered genes (x-axis) and 118 clustered drugs (y-axis). This image is more recent than the published image (Scherf et al. 2000). Used by permission of John N. Weinstein.

What use is this? If a person's cancer cells have high expression for a particular gene, and the correlation of that gene with drug activity is highly positive, then that gene may serve as a marker for tumor cells likely to be inhibited effectively by that drug. If the correlation with drug activity is negative, then the marker gene may indicate when use of that drug is contraindicated.

While important scientific questions about this approach remain open, its usefulness in generating hypotheses to be tested by further experiments is obvious. It is a very insightful way of organizing and extracting meaning from many individual observations. Without the microscope of mathematical methods and computational power, the insight given by the clustered image map could not be achieved.

The Future

To realize the possibilities of effective synergy between biology and mathematics will require both avoiding potential problems and seizing potential opportunities.

Potential problems.

The productive interaction of biology and mathematics will face problems that concern education, intellectual property, and national security.

Educating the next generation of scientists will require early emphasis on quantitative skills in primary and secondary schools and more opportunities for training in both biology and mathematics at undergraduate, graduate, and postdoctoral levels (CUBE 2003).

Intellectual property rights may both stimulate and obstruct the potential synergy of biology and mathematics. Science is a potlatch culture. The bigger one's gift to the common pool of knowledge and techniques, the higher one's status, just as in the potlatch culture of the Native Americans of the northwest coast of North America. In the case of research in mathematics and biology, intellectual property rights to algorithms and databases need to balance the concerns of inventors, developers, and future researchers (Rai and Eisenberg 2003).

A third area of potential problems as well as opportunities is national security. Scientists and national defenders can collaborate by supporting and doing open research on the optimal design of monitoring networks and mitigation strategies for all kinds of biological attacks (Wein et al. 2003). But openness of scientific methods or biological reagents in microbiology may pose security risks in the hands of terrorists. Problems of conserving privacy may arise when disparate databases are connected, such as physician payment databases with disease diagnosis databases, or health databases with law enforcement databases.

Opportunities.

Mathematical models can circumvent ethical dilemmas. For example, in a study of the household transmission of Chagas disease in northwest Argentina, Cohen and Gürtler (2001) wanted to know—since dogs are a reservoir of infection—what would happen if dogs were removed from bedroom areas, without spraying households with insecticides against the insect that transmits infection. Because neither the householders nor the state public health apparatus can afford to spray the households in some areas, the realistic experiment would be to ask householders to remove the dogs without spraying. But a researcher who goes to a household and observes an insect infestation is morally obliged to spray and eliminate the infestation. In a detailed mathematical model, it was easy to set a variable representing the number of dogs in the bedroom areas to zero. All components of the model were based on measurements made in real villages. The calculation showed that banishing dogs from bedroom areas would substantially reduce the intensity of infection in the absence of spraying, though spraying would contribute to additional reductions in the intensity of infection. The model was used to do an experiment conceptually that could not be done ethically in a real village. The conceptual experiment suggested the value of educating villagers about the important health benefits of removing dogs from the bedroom areas.
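
The actual model in Cohen and Gürtler (2001) is far more detailed; the deliberately toy sketch below, with invented parameters and a made-up infection proxy, only illustrates how "removing the dogs" becomes a one-line change in a conceptual experiment.

    # Toy conceptual experiment with invented parameters (not the published model):
    # compare bug infection intensity with and without dogs in the bedroom areas.
    def bug_infection_intensity(dogs_in_bedroom, humans=5, spraying=False):
        """Crude proxy for the fraction of bedroom bugs that become infected."""
        infectivity = 0.49 * dogs_in_bedroom + 0.03 * humans   # dogs assumed far more infectious
        exposure = infectivity / (dogs_in_bedroom + humans)
        if spraying:
            exposure *= 0.1   # insecticide knocks the bug population down
        return min(exposure, 1.0)

    print("dogs present, no spraying:", round(bug_infection_intensity(2), 3))
    print("dogs removed, no spraying:", round(bug_infection_intensity(0), 3))
    print("dogs present, spraying   :", round(bug_infection_intensity(2, spraying=True), 3))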

The future of a scientific field is probably less predictable than the future in general. Doubtless, though, there will be exciting opportunities for the collaboration of mathematics and biology. Mathematics can help biologists grasp problems that are otherwise too big (the biosphere) or too small (molecular structure); too slow (macroevolution) or too fast (photosynthesis); too remote in time (early extinctions) or too remote in space (life at extremes on the earth and in space); too complex (the human brain) or too dangerous or unethical (epidemiology of infectious agents). Box 1 summarizes five biological and five mathematical challenges where interactions between biology and mathematics may prove particularly fruitful.

Acknowledgments

This paper is based on a talk given on February 12, 2003, as the keynote address at the National Science Foundation (NSF)–National Institutes of Health (NIH) Joint Symposium on Accelerating Mathematical–Biological Linkages, Bethesda, Maryland; on June 12, 2003, as the first presentation in the 21st Century Biology Lecture Series, National Science Foundation, Arlington, Virginia; and on July 10, 2003, at a Congressional Lunch Briefing, co-sponsored by the American Mathematical Society and Congressman Vernon J. Ehlers, Washington, D.C. I thank Margaret Palmer, Sam Scheiner, Michael Steuerwalt, James Cassatt, Mike Marron, John Whitmarsh, and directors of NSF and NIH for organizing the NSF–NIH meeting, Mary Clutter and Joann P. Roskoski for organizing my presentation at the NSF, Samuel M. Rankin III for organizing the American Mathematical Society Congressional Lunch Briefing, and Congressman Bob Filner for attending and participating. I am grateful for constructive editing by Philip Bernstein, helpful suggestions on earlier versions from Mary Clutter, Charles Delwiche, Bruce A. Fuchs, Yonatan Grad, Alan Hastings, Kevin Lauderdale, Zaida Luthey-Schulten, Daniel C. Reuman, Noah Rosenberg, Michael Pearson, and Samuel Scheiner, support from U.S. NSF grant DEB 9981552, the help of Kathe Rogerson, and the hospitality of Mr. and Mrs. William T. Golden during this work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.

---

Box 1. Challenges

---

Here are five biological challenges that could stimulate, and benefit from, major innovations in mathematics.

Understand cells, their diversity within and between organisms, and their interactions with the biotic and abiotic environments. The complex networks of gene interactions, proteins, and signaling between the cell and other cells and the abiotic environment are probably incomprehensible without some mathematical structure perhaps yet to be invented.

Understand the brain, behavior, and emotion. This, too, is a system problem. A practical test of the depth of our understanding is this simple question: Can we understand why people choose to have children or choose not to have children (assuming they are physiologically able to do so)?

Replace the tree of life with a network or tapestry to represent lateral transfers of heritable features such as genes, genomes, and prions (Delwiche and Palmer 1996; Delwiche 1999, 2000a, 2000b; Li and Lindquist 2000; Margulis and Sagan 2002; Liu et al. 2002; http://www.life.umd.edu/labs/Delwiche/pubs/endosymbiosis.gif).

Couple atmospheric, terrestrial, and aquatic biospheres with global physicochemical processes.

Monitor living systems to detect large deviations such as natural or induced epidemics or physiological or ecological pathologies.

---

Here are five mathematical challenges that would contribute to the progress of biology.

Understand computation. Find more effective ways to gain insight and prove theorems from numerical or symbolic computations and agent-based models. We recall Hamming: “The purpose of computing is insight, not numbers” (Hamming 1971, p. 31).

Find better ways to model multi-level systems, for example, cells within organs within people in human communities in physical, chemical, and biotic ecologies.

Understand probability, risk, and uncertainty. Despite three centuries of great progress, we are still at the very beginning of a true understanding. Can we understand uncertainty and risk better by integrating frequentist, Bayesian, subjective, fuzzy, and other theories of probability, or is an entirely new approach required?

Understand data mining, simultaneous inference, and statistical de-identification (Miller 1981). Are practical users of simultaneous statistical inference doomed to numerical simulations in each case, or can general theory be improved? What are the complementary limits of data mining and statistical de-identification in large linked databases with personal information?

Set standards for clarity, performance, publication and permanence of software and computational results.

---

References
Anderson PW (1972) More is different. Science 177: 393–396.
Anderson PW (1995) Physics: The opening to complexity. Proc Natl Acad Sci U S A 92: 6653–6654.
Bell ET (1937) Men of mathematics. New York: Simon and Schuster. 592 p.
Benzer S (1959) On the topology of the genetic fine structure. Proc Natl Acad Sci U S A 45: 1607–1620.
Bowers PL, Hurdal MK (2003) Planar conformal mappings of piecewise flat surfaces. In: Hege HC, Polthier K, editors. Visualization and mathematics III. Berlin: Springer. pp 3–34.
Caswell H (2001) Matrix population models: Construction, analysis and interpretation, 2nd ed. Sunderland (Massachusetts): Sinauer Associates. 722 p.
Cohen JE, Gürtler RE (2001) Modeling household transmission of American trypanosomiasis [supplementary material]. Science 293: 694–698. Available: http://www.sciencemag.org/cgi/content/full/293/5530/694/DC1 via the Internet. Accessed 20 October 2004.
[CUBE] Committee on Undergraduate Biology Education to Prepare Research Scientists for the 21st Century, Board on Life Sciences, Division on Earth and Life Studies, National Research Council of the National Academies (2003) BIO 2010: Transforming undergraduate education for future research biologists. Washington (D.C.): National Academies Press. 191 p.
Darwin F, editor (1905) The life and letters of Charles Darwin. New York: Appleton.
Delwiche CF (1999) Tracing the web of plastid diversity through the tapestry of life. Am Nat 154: S164–S177.
Delwiche CF (2000a) Gene transfer between organisms. In: McGraw-Hill 2001 yearbook of science and technology. New York: McGraw-Hill. pp 193–197.
Delwiche CF (2000b) Griffins and chimeras: Evolution and horizontal gene transfer. Bioscience 50: 85–87.
Delwiche CF, Palmer JD (1996) Rampant horizontal transfer and duplication of rubisco genes in eubacteria and plastids. Mol Biol Evol 13: 873–882.
Erdös P, Rényi A (1960) On the evolution of random graphs. Publ Math Inst Hung Acad Sci 5: 17–61.
Euler L (1760) Recherches générales sur la mortalité et la multiplication. Mémoires de l'Académie Royale des Sciences et Belles Lettres 16: 144–164.
Ewens WJ (1972) The sampling theory of selectively neutral alleles. Theor Popul Biol 3: 87–112.
Fisher RA (1937) The wave of advance of advantageous genes. Ann Eugenics 7: 353–369.
Fisher RA (1950) Contributions to mathematical statistics. Tukey J, indexer. New York: Wiley. 1 v.
Galton F (1889) Natural inheritance. London: Macmillan. 259 p.
Hamming RW (1971) Introduction to applied numerical analysis. New York: McGraw-Hill. 331 p.
Hardy GH (1908) Mendelian proportions in a mixed population. Science 28: 49–50.
Harvey W (1847) The works of William Harvey, M.D., physician to the king, professor of anatomy and surgery to the College of Physicians. Willis R, translator. London: Printed for the Sydenham Society. 624 p.
Hastings A, Palmer MA (2003) A bright future for biologists and mathematicians? Science 299: 2003–2004.
Hastings A, Arzberger P, Bolker B, Ives T, Johnson N, et al. (2003) Quantitative biology for the 21st century. Available: http://www.sdsc.edu/QEIB/QEIB_final.html via the Internet. Accessed 20 October 2004.
Hsieh J (2003) Computed tomography: Principles, design, artifacts, and recent advances. Bellingham (Washington): SPIE Optical Engineering Press. 387 p.
Hurdal MK, Bowers PL, Stephenson K, Sumners DWL, Rehm K, et al. (1999) Quasi-conformally flat mapping the human cerebellum. In: Taylor C, Colchester A, editors. Medical image computing and computer-assisted intervention—MICCAI '99. Berlin: Springer. pp 279–286.
Johnson SC (1967) Hierarchical clustering schemes. Psychometrika 32: 241–254.
Jungck JR (1997) Ten equations that changed biology: Mathematics in problem-solving biology curricula. Bioscene 23: 11–36.
Kendall DG (1948) On the generalized birth-and-death process. Ann Math Stat 19: 1–15.
Kendall DG (1949) Stochastic processes and population growth. J R Stat Soc [Ser B] 11: 230–264.
Keyfitz N (1968) Introduction to the mathematics of population. Reading (Massachusetts): Addison-Wesley. 450 p.
Kimura M (1994) Population genetics, molecular evolution, and the neutral theory: Selected papers. Takahata N, editor. Chicago: University of Chicago Press. 686 p.
Kingman JFC (1982a) On the genealogy of large populations. J Appl Prob 19A: 27–43.
Kingman JFC (1982b) The coalescent. Stoch Proc Appl 13: 235–248.
Kolmogorov A, Petrovsky I, Piscounov N (1937) Étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique. Moscow University Bull Math 1: 1–25.
Levin S, editor (1992) Mathematics and biology: The interface. Challenges and opportunities. Lawrence Berkeley Laboratory Pub-701. Berkeley (California): University of California. Available: http://www.bio.vu.nl/nvtb/Contents.html via the Internet. Accessed 20 October 2004.
Li L, Lindquist S (2000) Creating a protein-based element of inheritance. Science 287: 661–664.
Liu JJ, Sondheimer N, Lindquist S (2002) Changes in the middle region of Sup35 profoundly alter the nature of epigenetic inheritance for the yeast prion [PSI+]. Proc Natl Acad Sci U S A 99: 16446–16453.
Lotka AJ (1925) Elements of physical biology. Baltimore: Williams and Wilkins. 460 p.
Luria SE, Delbrück M (1943) Mutations of bacteria from virus sensitivity to virus resistance. Genetics 28: 491–511.
Margulis L, Sagan D (2002) Acquiring genomes: A theory of the origins of species. New York: Basic Books. 240 p.
Markov AA (1906) Extension of the law of large numbers to dependent variables [Russian]. Izv Fiz-Matem Obsch Kazan Univ (2nd Ser) 15: 135–156.
Miller RG Jr (1981) Simultaneous statistical inference, 2nd ed. New York: Springer-Verlag. 299 p.
Murray JD (1993) Mathematical biology, 2nd ed. Berlin: Springer-Verlag. 767 p.
Nicolson MH (1960) The breaking of the circle: Studies in the effect of the "new science" upon seventeenth-century poetry, revised ed. New York: Columbia University Press. 216 p.
Orel V (1984) Mendel. Finn S, translator. Oxford: Oxford University Press. 111 p.
Palmer MA, Arzberger P, Cohen JE, Hastings A, Holt RD, et al. (2003) Accelerating mathematical-biological linkages. Report of a joint National Science Foundation–National Institutes of Health workshop; 2003 February 12–13; Bethesda, Maryland: National Institutes of Health. Available: http://www.palmerlab.umd.edu/report.pdf. Accessed 23 October 2004.
Pearson K, Lee A (1903) On the laws of inheritance in man. Biometrika 2: 357–462.
Provine WB (2001) The origins of theoretical population genetics, 2nd ed. Chicago: University of Chicago Press. 211 p.
Rai AK, Eisenberg RS (2003) Bayh-Dole reform and the progress of biomedicine. Am Sci 91: 52–59.
Scherf U, Ross DT, Waltham M, Smith LH, Lee JK, et al. (2000) A gene expression database for the molecular pharmacology of cancer. Nat Genet 24: 236–244.
Shipman JT, Wilson JD, Todd AW (2003) An introduction to physical science, 10th ed. Boston: Houghton Mifflin. 1 v.
Smoller L (2001) Applications: Web-based precalculus. Did you know…? Little Rock: University of Arkansas at Little Rock College of Information Science and Systems Engineering. Available: http://www.ualr.edu/lasmoller/matrices.html via the Internet. Accessed 26 December 2003.
Turing AM (1952) The chemical basis of morphogenesis. Phil Trans R Soc Lond B Biol Sci 237: 37–72.
Verhulst PF (1838) Notice sur la loi que la population suit dans son accroissement. Correspondance mathématique et physique publiée par A. Quételet (Brussels) 10: 113–121.
Volterra V (1931) Variations and fluctuations of the number of individuals in animal species living together. In: Chapman RN, editor. Animal ecology. New York: McGraw-Hill. pp 409–448.
von Neumann J (1959) On the theory of games of strategy. Bargmann S, translator. In: Tucker AW, Luce RD, editors. Contributions to the theory of games, Volume 4. Princeton: Princeton University Press. pp 13–42.
von Neumann J, Morgenstern O (1953) Theory of games and economic behavior, 3rd ed. New York: John Wiley and Sons. 641 p.
Wein LM, Craft DL, Kaplan EH (2003) Emergency response to an anthrax attack. Proc Natl Acad Sci U S A 100: 4346–4351.
Weinberg W (1908) Ueber den Nachweis der Vererbung beim Menschen. Jahresh Verein f vaterl Naturk Württemb 64: 368–382.
Weinstein JN, Myers TG, O'Connor PM, Friend SH, Fornace AJ, et al. (1997) An information-intensive approach to the molecular pharmacology of cancer. Science 275: 343–349.
Yule GU (1925) The growth of population and the factors which control it. J R Stat Soc 88 (Part I): 1–58.

Does it even matter?

Wanting to evolve and being able to evolve are two separate matters entirely.
---
July 3, 2005
Blockbuster Drugs Are So Last Century
By ALEX BERENSON
INDIANAPOLIS

DRUG companies do an awful job of finding new medicines. They rely too much on billion-dollar blockbuster drugs that are both overmarketed and overprescribed. And they have been too slow to disclose side effects of popular medicines.

Typical complaints from drug industry critics, right? Well, yes. Only this time they come from executives at Eli Lilly, the sixth-largest American drug maker and the company that invented Prozac.

From this placid Midwestern city, well removed from the Boston-to-Washington corridor that is the core of the pharmaceutical industry, Lilly is ambitiously rethinking the way drugs are discovered and sold. In a speech to shareholders in April, Sidney Taurel, Lilly's chief executive, presented the company's new strategy in a pithy phrase: "the right dose of the right drug to the right patient at the right time."

In other words, Lilly sees its future not in blockbuster medicines like Prozac that are meant for tens of millions of patients, but rather in drugs that are aimed at smaller groups and can be developed more quickly and cheaply, possibly with fewer side effects.

There is no guarantee, of course, that Lilly will succeed. And some Wall Street analysts complain about the recent track record of the company, saying that it has habitually overpromised the potential of its drugs and taken one-time charges that distort its reported profits. In the last year, Lilly's stock has fallen 21 percent, while shares in the average big drug maker have been flat.

Still, since late 2001, Lilly's labs have produced five truly new drugs, including treatments for osteoporosis, depression and lung cancer. The total exceeds that of many of its much-larger competitors. And at a time when the drug industry seems adrift, that Lilly has any vision at all for the future is striking.

"The challenge for us as an industry, as a company, is to move more from a blockbuster model to a targeted model," Mr. Taurel said at Lilly's headquarters here recently. "We need a better value proposition than today."

For five years, drug companies have struggled to bring new medicines to market. But Lilly executives say they believe that the drought is not permanent. Advances in understanding the ways that cells and genes work will soon lead to important new drugs, said Peter Johnson, executive director of corporate strategy.

Moreover, Lilly expects that drug makers without breakthrough medicines that are either the first or the best in their categories will face increasing pressure from insurers to cut prices or lose coverage.

If that vision is correct, the industry's winners will be companies that invest heavily in research and differentiate themselves by focusing on a few diseases instead of on building size and cutting costs through mergers, as Pfizer has done. Lilly, which spends nearly 20 percent of its sales on research, compared with about 16 percent for the average drug company, may be well positioned for the future.

"We do not believe that size pays off for anybody, especially size acquired in an acquisition," Mr. Taurel said.

But if Lilly is wrong about the industry's direction, or if its research efforts fail, it could wind up like Merck, the third-biggest American drug company, which has also adamantly opposed mergers and bet instead on its labs. After its own eight-year drought of major new drugs, Merck has had a 65 percent decline in its stock price since 2000, and its chief executive was forced out in May.

Mr. Johnson acknowledges that Lilly's strategy is risky. "You can't make a discovery operation invent what you want them to invent," he said.

So Lilly is seeking to improve its odds and to cut research costs by changing the way it develops drugs, said Dr. Steven M. Paul, president of the company's laboratories.

Bringing a drug to market cost more than $900 million on average in 2003, compared with $230 million in 1987, according to estimates from Lilly and industry groups. But the public's willingness to accept side effects is shrinking, and some drug-safety experts and lawmakers want even larger and longer clinical trials for new drugs, increasing development costs. If nothing changes, Lilly expects that the cost of finding a single new drug may reach $2 billion by 2010, an unsustainable amount, Dr. Paul said.

"We've got to do something to reduce the costs," he added.

The biggest expense in drug development comes not from early-stage research, he said, but from the failure of drugs after they have left the labs and been tested in humans. A drug that has moved into first-stage human clinical trials now has only about an 8 percent chance of reaching the market. Even in late-stage trials, about half of all drugs fail, often because they do not prove better than existing treatments.

To change that, Lilly is focusing its research efforts on finding biomarkers - genes or other cellular signals that will indicate which patients are most likely to respond to a given drug. Other drug makers are also searching for biomarkers, but Lilly executives are the most vocal in expressing their belief that this area of research will fundamentally change the way drugs are developed.

Using biomarkers should make drugs more effective and reduce side effects, Dr. Paul said. If all goes as planned, the company will know sooner whether its drugs are working, and will develop fewer drugs that fail in clinical trials. The company may even be able to use shorter, smaller clinical trials because its drugs will demonstrate their effectiveness more quickly.

To improve its chances further, Lilly has focused its research efforts on four types of diseases: diabetes, cancer, mental illness and some heart ailments. In each category, it has had a history of successful drugs.

The company hopes to reduce the cost of new development to about $700 million a drug by 2010. Because Lilly now spends about $2.7 billion annually on research, that figure would imply that the company could develop as many as four new drugs a year, compared with just one a year if current trends do not change.

Among the company's most promising drugs in development are ruboxistaurin, for diabetes complications; arzoxifene, for the prevention of osteoporosis and breast cancer; and enzastaurin, for brain tumors and other cancers.

The flip side of Lilly's plan is that drugs it develops may be used more narrowly than current treatments. For example, the company may find that a diabetes drug works best in patients under 40 with a specific genetic marker, and enroll only those patients in its clinical trials. While doctors can legally prescribe any medicine for any reason once it is on the market, insurers would probably balk at covering the drug for diabetics over 40 or for patients without the genetic marker.

"The old model was, one size fits a whole lot of people," said Mr. Johnson, Lilly's strategist.

Last month, Lilly's vision of targeted therapies gained some ground - albeit at another company. The Food and Drug Administration approved BiDil, a heart drug from NitroMed that is intended for use by African-Americans. The approval, based on a clinical trial that enrolled only black patients, was the first ever for a drug meant for one racial group. While race can be a crude characterization of groups, it can serve as an effective biomarker, scientists said.

Lilly's road map may look appealing. But some analysts question whether the company is as different from the rest of the industry as it would like to believe.

While it professes to see a future of narrowly marketed medicines, Lilly is more dependent than any other major drug maker on a single blockbuster drug: Zyprexa, its treatment for schizophrenia and manic depression. Zyprexa accounted for about $4.4 billion in sales last year, 30 percent of the company's total sales.

And while Lilly executives say they want to avoid marketing its drugs too heavily or in anything less than a forthright way, federal prosecutors in Philadelphia are investigating its marketing practices for Zyprexa and Prozac. Last month, Lilly said it would pay $690 million to settle 8,000 lawsuits that contended that Zyprexa could cause obesity and diabetes and that the company had not properly disclosed that risk.

Lilly says that it acted properly in marketing Zyprexa and that it is cooperating with the federal investigation. Still, the controversy has hurt Zyprexa sales, which fell 8 percent in the United States last year.

Some of Lilly's newest drugs have been commercial disappointments. The company and analysts hoped that annual sales of Xigris, a treatment introduced in late 2001 for a blood infection called sepsis, could reach $1 billion; Xigris's sales were $200 million last year. Sales of Strattera, for attention deficit disorder, slowed after a report in December that the drug can cause a rare but serious form of liver damage.

Michael Krensavage, an analyst at Raymond James & Associates who rates Lilly shares as underperform, said that Lilly's emphasis on targeted therapies might be a defensive response to the industry's recent inability to produce blockbusters.

Rather than targeted treatments, "drug companies would hope to produce a medicine that works for everybody," Mr. Krensavage said. "That's certainly the goal."

Mr. Krensavage also criticized Lilly's accounting, noting that the company has taken one-time charges in each of the last three years that have muddied its financial results. Lilly said its accounting complied with all federal rules.

Despite the company's recent stumbles with Zyprexa, other analysts say Lilly is well positioned, and they praise Mr. Taurel for looking for innovative ways to lower the cost of drug development.

"Sidney has a better concept of what's happening outside his four walls and is far better in reflecting that in how the company runs on a day-to-day basis than any of his peers," said Richard Evans, an analyst at Sanford C. Bernstein & Company.

Mr. Taurel acknowledged Lilly's dependence on Zyprexa and the fact that some new drugs had not met expectations. But he said the transition to targeted therapies would take years, if not decades. With earnings last year of $3.1 billion, before one-time charges, and no major patent expirations before 2011, Lilly can afford to make long-term bets, he said. "Our model needs to evolve," he said. "For the industry and for Lilly."

Copyright 2005 The New York Times Company.

Get good...or get out.



See last sentence. New York is the wrong basis for comparison.
Think Zurich or Geneva.

---

Profit, Not Jobs, in Silicon Valley

By JOHN MARKOFF and MATT RICHTEL
Published: July 3, 2005

SAN JOSE, Calif., July 1 - Things are looking up at Wyse Technology, a venerable maker of computer terminals. Unless, that is, you happen to want to work for the company here in Silicon Valley.

Responding to booming demand in Asia and in Europe, Wyse is adding new development teams in India and China and expanding its worldwide work force to about 380, from 260. Its profits are recorded here - but almost none of its new jobs.

Amid widespread signs of economic recovery in the region, Wyse is emblematic of the local economy, in which demand, sales and profits are rising quickly while job growth continues to stagnate.

In the last three years, profits at the seven largest companies in Silicon Valley by market value have increased by an average of more than 500 percent while Santa Clara County employment has declined to 767,600, from 787,200. During the previous economic recovery, between 1995 and 1997, the county, which is the heart of Silicon Valley, added more than 82,800 jobs.

Changes in technology and business strategy are raising fundamental questions about the future of the valley, the nation's high technology heartland. In part, the change is driven by the very automation that Silicon Valley has largely made possible, allowing companies to create more value with fewer workers.

Some economists are wondering if a larger transformation is at work - accelerating a trend in which the region's big employers keep a brain trust of creative people and engineers here but hire workers for lower-level tasks elsewhere.

"What has changed is that Silicon Valley has continued to move up the value chain," said AnnaLee Saxenian, dean of the School of Information Management and Systems and professor of city and regional planning at the University of California, Berkeley. That has meant that just as low-skilled manufacturing jobs fled the region starting in the 1970's, now software jobs are also leaving.

The phenomenon is only the latest twist in the region's boom-and-bust history, marked by repeated cycles of innovation and renewal over the last five decades.

Industries based on personal computing, hand-held devices and electronic commerce have emerged and thrived here, and each brought waves of new jobs. Now, almost everyone agrees that Silicon Valley is coming back, and employment there grew from March to May, but the area still has about 10,000 fewer jobs than it had a year ago.

The increase in profits "has been very dramatic, but there's no job growth," said Doug Henton, president of Collaborative Economics, a regional consulting company.

Some former technology workers have given up on the sector - or moved out of the Bay Area altogether.

Catherine Haley, 32, went to work in 1997 as a project manager and a Web designer for technology companies in the area, but after quitting the consulting firm KPMG in 2002, she found it extremely difficult to find a full-time job.

In 2004, after working in piecemeal assignments for two years, she gave up on the job market and nearly moved back to Boston. Instead, she decided to pursue her passion - art - and is now majority owner of a gallery in San Francisco.

She does not miss fighting for work, she said, partly because the high-tech economy has lost its charm. "Unfortunately, the number of interesting projects and companies out there has really come down," she said.

In some cases, as at Wyse, the job growth in the sector is taking place elsewhere - in lower-cost, higher-growth markets.

A new management team, installed as part of a buyout of the company that was completed in April, is leading a restructuring that includes adding 100 workers in India and 35 in Beijing so far this year. At the start of the year the company had 90 percent of its work force in Silicon Valley; now the figure is 48 percent, and only 15 percent of its engineering talent remains here, largely because of the technology development teams it is building in India and China.

"It was pretty clear that growth was going to come first in Asia," said John Kish, Wyse's chief executive. "We had the desire to get engineering teams to those markets as quickly as we could."

Stephen Levy, director of the Center for the Continuing Study of the California Economy, said the growth of employment outside Silicon Valley was "not a nefarious plot," but a logical development. "Companies are going where there are customers and, in some cases, where it's cheaper to produce," he said.

Hoping to keep costs low, Electronic Arts, the video game maker based just north of here in Redwood City, already has development studios in Vancouver, Montreal, London, Chicago and Orlando, Fla. It is debating how much of its work in the future it can move to lower-cost regions, said Jeff Brown, a company spokesman.

"We will always have a presence here because there is a core of talent," he said. "But there is strong pressure to figure out exactly which jobs it is essential to keep in California."

The issue is not just outsourcing, though, but also big leaps in productivity. Cisco, the behemoth maker of Internet equipment, now has annual sales of $680,000 per employee, compared to $480,000 in 2001.

One key measure, known as value added per employee, rose 3.7 percent in 2004, to $222,000 in economic value per worker. That compares with $85,000 per worker in the rest of the country, according to data reported by Joint Venture Silicon Valley, a regional economic research group.

By a number of other measures, companies are watching profits and sales rise. An analysis published in April by The San Jose Mercury News found that the top 100 public companies in the region had revenues of $336 billion in 2004, an increase of 14 percent from the previous year.

Mr. Henton said that measure, while not entirely indicative of what is going on because it includes worldwide sales, gives a good sense of the growth here.

"It's a clear recovery," Mr. Levy said. "It's a high-productivity jobless recovery."

In the past, much of the job growth has come from investment by venture capitalists in start-up companies. That engine is starting to rev up again, with venture capitalists putting $7.4 billion into 724 Silicon Valley companies in 2004, according to the National Venture Capital Association.

That is up 17 percent from 2003, but still far below the $34 billion invested in 2000, at the peak of the phenomenon.

Also, newer start-ups are under pressure from their venture-capital investors to outsource work to lower-cost regions, said Cynthia Kroll, a senior regional economist at the Haas School of Business at the University of California, Berkeley. She added that the venture capitalists, burned as the last cycle turned downward, are much more closely watching how start-ups spend money, including how they hire.

"That would be slowing growth of employment," Ms. Kroll said.

The venture capitalists are being highly selective, said Margot Heiligman, 39, who is doing consulting work for technology companies but is in the market for a full-time job. Ms. Heiligman moved to San Francisco last November when her husband took a trumpeter position with the city's symphony orchestra.

Ms. Heiligman previously oversaw the internal technology department for the New York law firm of Proskauer Rose and spent six years as director of business development for Swisscom, a telecommunications company in Bern, Switzerland. She is eager to find work at a start-up company, but has found that the venture backers of such companies are very selective, hiring acquaintances or people who have worked at companies they know well.

"I'm finding it pretty closed," she said of the job market. "It's making New York look like an easy place" to find a job.

Copyright 2005 The New York Times Company

Some worry about survivor bias. Probably still preferable to loser bias.

Would you trust this person to have made the right move?

Exactly.

---

June 29, 2005
Wary of Stocks? Property Is an I.R.A. Option
By VIVIAN MARINO
Like many other investors, Ray Matteson, an executive at a data-processing company, was caught in the dot-com maelstrom of the late 1990's, except that his losses were on a far grander scale than most - about $3 million by his calculations and all of it in a retirement account.

But Mr. Matteson, 60, who had amassed a fortune through an employee stock ownership plan, says he has managed to recoup most of those losses. About two years ago, he shifted his money from a traditional individual retirement account into a so-called real estate I.R.A., and he used money in the account to buy a 105,000-square-foot warehouse in his hometown, Sacramento.

"I paid $4.3 million for it last year, and if I sold the building today, I could probably get around $5 million to $6 million," said Mr. Matteson, adding that he also receives a steady stream of income from his 12 commercial tenants, producing a 9 percent annual return.

While a vast majority of the 45 million households with I.R.A.'s are still in stocks and bonds, more investors are focusing on real estate, hoping to cash in on the robust market. An estimated 2 percent of I.R.A. money is now invested in real estate in one form or another - through self-directed accounts held by a custodian and managed by the accountholder - twice the percentage of two years ago, according to industry experts.

Real estate has always been permitted in I.R.A.'s, but few people seemed to know about this option - or even care - until the stock market began to decline. Financial institutions, meanwhile, had little incentive to recommend something other than stocks, bonds or mutual funds.

There are few restrictions on what investors can hold in these self-directed accounts. Qualifying properties include apartment and office buildings, shopping centers and warehouses as well as single-family houses. Investors cannot use commercial space owned through their I.R.A.'s for their own businesses.

Most real estate I.R.A.'s now are in residential property, according to account administrators, but they say they are seeing more and more commercial deals. Typically, these involve high-net-worth clients with substantial assets in their retirement accounts.

"People are looking for ways to accelerate their asset values, and they're not getting those returns in the stock market," said Hubert Bromma, chief executive of Entrust Administration, a company in Oakland, Calif., that specializes in the administration of I.R.A.'s for nontraditional investments. The company has $1.5 billion in assets under management; two-thirds is in real estate.

Tom W. Anderson, president and chief executive of the Pensco Trust Company, a self-directed I.R.A. custodian with offices in San Francisco, said that he was also seeing more interest in real estate investments. As of this year, about $604 million of the $1.25 billion in assets Pensco manages was in property or real estate investment trusts, known as REIT's, which are publicly traded or private companies with portfolios of commercial property. Last year, the total was $480 million, and in 1997, $100 million. The company did not provide a breakdown between commercial and residential real estate.

Because commercial property is typically more expensive than residential, many account holders are choosing to become part of private investment syndicates, Mr. Anderson said. "They may pool their money to buy retail strip malls or other buildings," he said.

Indeed, the biggest obstacle to investing in commercial real estate is expense - for the property as well as the custodial fees, which can vary widely. Investors can take out loans, though financing inside an I.R.A. can be complex. All property expenses, from taxes to repairs to loan payments, must be paid from funds in the I.R.A., which means investors will need to have plenty of liquid funds in the account. Mr. Matteson, for instance, pays $1,200 a month for the property manager that looks after the building, which generates about $40,000 in monthly revenue, all of which flows into his I.R.A.
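
A back-of-the-envelope sketch (in Python) of how that cash flow works, using only the figures quoted in this article; the zero placeholder for other carrying costs is an assumption, since taxes, insurance and repairs are not itemized here:

    # Cash flow for a property held inside a self-directed I.R.A. Every expense
    # must be paid from the account, and every dollar of rent flows back into it.
    purchase_price  = 4_300_000   # what Mr. Matteson paid for the warehouse
    monthly_revenue = 40_000      # rent from the 12 commercial tenants
    monthly_manager = 1_200       # property-manager fee, paid out of the I.R.A.
    other_costs     = 0           # taxes, insurance, repairs: not given, assumed zero

    annual_net = (monthly_revenue - monthly_manager - other_costs) * 12
    print(f"Net income retained in the I.R.A.: ${annual_net:,.0f} a year")
    print(f"Yield on the purchase price: {annual_net / purchase_price:.1%}")
    # With other costs at zero this comes to about 10.8%; the 9 percent return
    # cited earlier implies roughly $80,000 a year of carrying costs not listed here.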

"You would have to have a prodigious I.R.A. to be able to sink money into a commercial real estate property," said Jonathan Pond, a financial adviser from the Boston area.

Small investors can gain a presence in commercial real estate through REIT's or partnerships, but administrators say that most self-directed I.R.A.'s should have at least six figures to invest. That is not so uncommon, Mr. Bromma said, considering that "there are a lot of people rolling over money from 401(k)'s, professionals in high-level positions."

Mr. Matteson certainly fits into that category. His retirement account originated from a 401(k) plan awash with stock from his employer, DST Systems of Kansas City, Mo.; then the money was rolled into an I.R.A. after a merger, he said.

After his Internet losses, Mr. Matteson pulled out of Wall Street entirely. "I got tired of going to bed at night wondering how much I was going to lose the next morning," he said.

Mr. Matteson says he hopes to buy a second warehouse sometime this summer - this one is 125,000 square feet and near the first building - through his I.R.A., which is managed by Pensco. This time he will have to take out what is known as a nonrecourse loan, which means that the lender's ability to collect the money borrowed is limited to the collateral within the account and not the personal assets of the borrower.

"These two buildings, three years from now, will throw off $100,000 a month in after-expense, before-tax income," he said. "I can have the debt paid off in three to five years."

While there is growth potential in commercial real estate investing, there are also disadvantages to making property purchases within an I.R.A. For one thing, investors can't take the usual deductions for real estate, and they could face stiff penalties if they fail to comply with various I.R.A. rules including those on contributions and withdrawals.

Real estate transactions work best in Roth I.R.A.'s, financial advisers say. In a Roth account, investors owe no taxes at distribution, even on investments that have appreciated over time. Money withdrawn from a traditional account, by contrast, is taxed as regular income.
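
A minimal illustration of the advisers' point, with hypothetical numbers; the purchase price, sale value and tax rate below are assumptions, not figures from the article:

    # Hypothetical: a property bought inside an I.R.A. doubles in value by the
    # time it is distributed. All figures below are assumed for illustration.
    purchase        = 500_000
    value_at_payout = 1_000_000
    ordinary_rate   = 0.33        # assumed income-tax rate at withdrawal

    roth_keeps        = value_at_payout                        # qualified Roth withdrawals are untaxed
    traditional_keeps = value_at_payout * (1 - ordinary_rate)  # taxed as regular income

    print(f"Roth I.R.A. keeps:        ${roth_keeps:,.0f}")
    print(f"Traditional I.R.A. keeps: ${traditional_keeps:,.0f}")
    # Neither account, though, gets the depreciation and other deductions
    # available to real estate held outside an I.R.A.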

Copyright 2005 The New York Times Company.

Where are we going?



From Technological Forecasting and Social Change.
Copyright © 2005 Elsevier Inc. All rights reserved.

Article in Press, Corrected Proof

Jonathan Huebner

A possible declining trend for worldwide innovation.

ABSTRACT

A comparison is made between a model of technology in which the level of technology advances exponentially without limit and a model with an economic limit. The model with an economic limit best fits data obtained from lists of events in the history of science and technology as well as the patent history in the United States. The rate of innovation peaked in the year 1873 and is now rapidly declining. We are at an estimated 85% of the economic limit of technology, and it is projected that we will reach 90% in 2018 and 95% in 2038.
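
A sketch of the kind of model comparison the abstract describes, fitting an unbounded exponential and a capacity-limited (logistic) curve to a cumulative count of innovation events; the data points below are invented placeholders, not Huebner's series:

    import numpy as np
    from scipy.optimize import curve_fit

    # Placeholder data: cumulative "key innovations" by year. Huebner's actual
    # series comes from The History of Science and Technology and from U.S.
    # patent records; these values are made up for illustration only.
    years = np.array([1500, 1600, 1700, 1800, 1850, 1900, 1950, 2000], float)
    cumul = np.array([ 200,  400,  800, 1800, 2900, 4600, 6200, 7100], float)

    def exponential(t, a, r):
        # technology advances without limit
        return a * np.exp(r * (t - 1500))

    def limited(t, K, r, t0):
        # technology approaches an economic limit K (logistic curve)
        return K / (1.0 + np.exp(-r * (t - t0)))

    p_exp, _ = curve_fit(exponential, years, cumul, p0=[200, 0.005], maxfev=20000)
    p_lim, _ = curve_fit(limited, years, cumul, p0=[8000, 0.02, 1900], maxfev=20000)

    def sse(model, p):
        return float(np.sum((model(years, *p) - cumul) ** 2))

    print("exponential fit error:   ", sse(exponential, p_exp))
    print("limited-growth fit error:", sse(limited, p_lim))
    print("fraction of the fitted limit reached by 2000:",
          round(limited(2000, *p_lim) / p_lim[0], 2))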

---

Entering a dark age of innovation
14:00 02 July 2005
NewScientist.com news service
Robert Adler

Downward progress

SURFING the web and making free internet phone calls on your Wi-Fi laptop, listening to your iPod on the way home, it often seems that, technologically speaking, we are enjoying a golden age. Human inventiveness is so finely honed, and the globalised technology industries so productive, that there appears to be an invention to cater for every modern whim.

But according to a new analysis, this view couldn't be more wrong: far from being in technological nirvana, we are fast approaching a new dark age. That, at least, is the conclusion of Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center in China Lake, California. He says the rate of technological innovation reached a peak a century ago and has been declining ever since. And like the lookout on the Titanic who spotted the fateful iceberg, Huebner sees the end of innovation looming dead ahead. His study will be published in Technological Forecasting and Social Change.

It's an unfashionable view. Most futurologists say technology is developing at exponential rates. Moore's law, for example, foresaw chip densities (for which read speed and memory capacity) doubling every 18 months. And the chip makers have lived up to its predictions. Building on this, the less well-known Kurzweil's law says that these faster, smarter chips are leading to even faster growth in the power of computers. Developments in genome sequencing and nanoscale machinery are racing ahead too, and internet connectivity and telecommunications bandwidth are growing even faster than computer power, catalysing still further waves of innovation.

But Huebner is confident of his facts. He has long been struck by the fact that promised advances were not appearing as quickly as predicted. "I wondered if there was a reason for this," he says. "Perhaps there is a limit to what technology can achieve."

In an effort to find out, he plotted major innovations and scientific advances over time compared to world population, using the 7200 key innovations listed in a recently published book, The History of Science and Technology (Houghton Mifflin, 2004). The results surprised him.

Rather than growing exponentially, or even keeping pace with population growth, they peaked in 1873 and have been declining ever since (see Graphs). Next, he examined the number of patents granted in the US from 1790 to the present. When he plotted the number of US patents granted per decade divided by the country's population, he found the graph peaked in 1915.
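
A sketch of the per-capita calculation being described; the innovation counts below are placeholders and the population figures are only rough historical values:

    # Innovations per billion people per year, by decade: the quantity Huebner plots.
    # The event counts below are invented placeholders; populations are rough values.
    events_per_decade = {1850: 420, 1870: 560, 1890: 520, 1910: 500, 1950: 450, 2000: 430}
    population_bn     = {1850: 1.2, 1870: 1.4, 1890: 1.6, 1910: 1.75, 1950: 2.5, 2000: 6.1}

    rate = {d: events_per_decade[d] / 10 / population_bn[d] for d in events_per_decade}
    for d in sorted(rate):
        print(f"{d}s: {rate[d]:.1f} innovations per billion people per year")
    print("Peak decade in this toy series:", max(rate, key=rate.get))
    # Dividing by population is what turns a rising absolute count of innovations
    # into a per-capita rate that can peak and then decline.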

The period between 1873 and 1915 was certainly an innovative one. For instance, it included the major patent-producing years of America's greatest inventor, Thomas Edison (1847-1931). Edison patented more than 1000 inventions, including the incandescent bulb, electricity generation and distribution grids, movie cameras and the phonograph.

Medieval future

Huebner draws some stark lessons from his analysis. The global rate of innovation today, which is running at seven "important technological developments" per billion people per year, matches the rate in 1600. Despite far higher standards of education and massive R&D funding "it is more difficult now for people to develop new technology", Huebner says.

Extrapolating Huebner's global innovation curve just two decades into the future, the innovation rate plummets to medieval levels. "We are approaching the 'dark ages point', when the rate of innovation is the same as it was during the Dark Ages," Huebner says. "We'll reach that in 2024."

But today's much larger population means that the number of innovations per year will still be far higher than in medieval times. "I'm certainly not predicting that the dark ages will reoccur in 2024, if at all," he says. Nevertheless, the point at which an extrapolation of his global innovation curve hits zero suggests we have already made 85 per cent of the technologies that are economically feasible.

But why does he think this has happened? He likens the way technologies develop to a tree. "You have the trunk and major branches, covering major fields like transportation or the generation of energy," he says. "Right now we are filling out the minor branches and twigs and leaves. The major question is, are there any major branches left to discover? My feeling is we've discovered most of the major branches on the tree of technology."

But artificial intelligence expert Ray Kurzweil - who formulated the aforementioned law - thinks Huebner has got it all wrong. "He uses an arbitrary list of about 7000 events that have no basis as a measure of innovation. If one uses arbitrary measures, the results will not be meaningful."

Eric Drexler, who dreamed up some of the key ideas underlying nanotechnology, agrees. "A more direct and detailed way to quantify technology history is to track various capabilities, such as speed of transport, data-channel bandwidth, cost of computation," he says. "Some have followed exponential trends, some have not."

Drexler says nanotechnology alone will smash the barriers Huebner foresees, never mind other branches of technology. It's only a matter of time, he says, before nanoengineers will surpass what cells do, making possible atom-by-atom desktop manufacturing. "Although this result will require many years of research and development, no physical or economic obstacle blocks its achievement," he says. "The resulting advances seem well above the curve that Dr Huebner projects."

At the Acceleration Studies Foundation, a non-profit think tank in San Pedro, California, John Smart examines why technological change is progressing so fast. Looking at the growth of nanotechnology and artificial intelligence, Smart agrees with Kurzweil that we are rocketing toward a technological "singularity" - a point sometime between 2040 and 2080 where change is so blindingly fast that we just can't predict where it will go.

Smart also accepts Huebner's findings, but with a reservation. Innovation may seem to be slowing even as its real pace accelerates, he says, because it's slipping from human hands and so fading from human view. More and more, he says, progress takes place "under the hood" in the form of abstract computing processes. Huebner's analysis misses this entirely.

Take a modern car. "Think of the amount of computation - design, supply chain and process automation - that went into building it," Smart says. "Computations have become so incremental and abstract that we no longer see them as innovations. People are heading for a comfortable cocoon where the machines are doing the work and the innovating," he says. "But we're not measuring that very well."

Huebner disagrees. "It doesn't matter if it is humans or machines that are the source of innovation. If it isn't noticeable to the people who chronicle technological history then it is probably a minor event."

A middle path between Huebner's warning of an imminent end to tech progress, and Kurzweil and Smart's equally imminent encounter with a silicon singularity, has been staked out by Ted Modis, a Swiss physicist and futurologist.

Modis agrees with Huebner that an exponential rate of change cannot be sustained and his findings, like Huebner's, suggest that technological change will not increase forever. But rather than expecting innovation to plummet, Modis foresees a long, slow decline that mirrors technology's climb.

At the peak

"I see the world being presently at the peak of its rate of change and that there is ahead of us as much change as there is behind us," Modis says. "I don't subscribe to the continually exponential rate of growth, nor to an imminent drying up of innovation."

So who is right? The high-tech gurus who predict exponentially increasing change up to and through a blinding event horizon? Huebner, who foresees a looming collision with technology's limits? Or Modis, who expects a long, slow decline?

The impasse has parallels with cosmology during much of the 20th century, when theorists debated endlessly whether the universe would keep expanding, creep toward a steady state, or collapse. It took new and better measurements to break the log jam, leading to the surprising discovery that the rate of expansion is actually accelerating.

Perhaps it is significant that all the mutually exclusive techno-projections focus on exponential technological growth. Innovation theorist Ilkka Tuomi at the Institute for Prospective Technological Studies in Seville, Spain, says: "Exponential growth is very uncommon in the real world. It usually ends when it starts to matter." And it looks like it is starting to matter.

Women think of themselves as being less healthy than men...but are less likely to die.

Demography. 2005 May;42(2):189-214.

Sex differences in morbidity and mortality.

Case A, Paxson C.

Center for Health and Wellbeing, Princeton University, Princeton, NJ 08544, USA.

Women have worse self-rated health and more hospitalization episodes than men from early adolescence to late middle age, but are less likely to die at each age. We use 14 years of data from the U.S. National Health Interview Survey to examine this paradox. Our results indicate that the difference in self-assessed health between women and men can be entirely explained by differences in the distribution of the chronic conditions they face. This is not true, however, for hospital episodes and mortality. Men with several smoking-related conditions--including cardiovascular disease and certain lung disorders--are more likely to experience hospital episodes and to die than women who suffer from the same chronic conditions, implying that men may experience more-severe forms of these conditions. While some of the difference in mortality can be explained by differences in the distribution of chronic conditions, an equally large share can be attributed to the larger adverse effects of these conditions on male mortality. The greater effects of smoking-related conditions on men's health may be due to their higher rates of smoking throughout their lives.

"America, he says, is already losing out in the global talent market because of its 'painful and humiliating' immigration procedures."

Funny how we often fail to recognize when we have made mistakes, and how others fail to learn from our mistakes even when they recognize that we have made them.

---

Mobility business - Visas

2 July 2005
The Economist
(c) The Economist Newspaper Limited, London 2005. All rights reserved

The market in citizenship

Bureaucrats are tightening rules on passports for the wealthy and talented

CLEVER, rich or both—almost every country in the world has some sort of programme to attract desirable migrants. The only exceptions are “weird places like Bhutan”, says Christian Kalin of Henley & Partners, which specialises in fixing visas and passports for globe-trotters. Competition is fierce and, as with most things, that lowers the price and increases choice. Britain has two programmes, one for the rich—who have to invest £750,000 ($1.36m) in actively traded securities—and one, much larger, for talented foreigners.

Both have worked well. Unlike some other countries, Britain does not make applicants find a job first: with good qualifications, they can just turn up and look for work. That helps keep Britain's economy flexible and competitive. But now a bureaucratic snag is threatening the scheme.

The problem comes with anyone wanting to convert his visa into “indefinite leave to remain” (Britain's equivalent of America's Green Card). This normally requires four years' continuous residence in Britain. After a further year, it normally leads to British citizenship.

The law defines continuous residence sensibly. Business trips and holidays don't count, if the applicant's main home is in Britain. As a rule of thumb, an average of 90 days abroad was allowed each year. But unpublished guidelines seen by The Economist are tougher: they say that “none of the absences abroad should be of more than three months, and they must not amount to more than six months in all”. Over the four years needed to qualify, that averages only six weeks a year.
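
The arithmetic behind that six-weeks figure, for what it's worth:

    # "Not more than six months in all" spread over the four qualifying years.
    total_days_allowed = 6 * 30      # roughly 180 days
    qualifying_years   = 4
    per_year = total_days_allowed / qualifying_years
    print(f"{per_year:.0f} days abroad a year, about {per_year / 7:.1f} weeks")
    # Roughly 45 days, or a little over six weeks, against the old rule of thumb of 90.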

For many jet-setters, this restriction is a career-buster. Six weeks abroad barely covers holidays, let alone business travel. Alexei Sidnev, a Russian consultant, has to turn down important jobs because he cannot afford any more days abroad. If applicants travel “too much”, their children risk losing the right to remain in Britain.

Roger Gherson, who runs a specialist immigration law firm, reckons that, including such dependents, the new rule could affect 750,000 people. “Panic will reign in Canary Wharf [in London's financial district] when they start implementing this,” he says. Next week his firm is going to court to try to have the guidelines ruled illegal. They came to light in a case involving a wealthy foreigner who runs an international property business. His application for permanent residency was rejected in April, though in the previous four years he had been abroad for only 351 days, and never for more than 90 days at a stretch.

The Home Office insists that the rules have not changed since 2001. That would confirm Mr Gherson's suspicion that the new policy has come in by accident, probably as a result of zeal or carelessness by mid-ranking officials. Their attitude is at odds with the stance of the government, which has been trying for years to make the system more user-friendly for the world's elite. It even moved processing of business residency cases from a huge office in Croydon, notorious for its slowness and hostility to would-be immigrants, to a new outfit in Sheffield.

But lawyers such as Mr Kalin are in no doubt of the risk Britain is running. America, he says, is already losing out in the global talent market because of its “painful and humiliating” immigration procedures. If Britain's rules stay tight, he says, foreigners will go elsewhere. Likely beneficiaries are Ireland and Austria, European Union countries whose residency visas and passports confer the same convenience as British ones, with less hassle.

The central banker's bank: worry about growth in debt and asset prices, not inflation.

Beware the bubbles - Economics focus

2 July 2005
The Economist
(c) The Economist Newspaper Limited, London 2005. All rights reserved

Economics focus: A wake-up call from the BIS

Even in a world of low inflation, central bankers cannot sleep soundly

THE desk of The Economist's economics editor is always piled high with reports on the global economy by official international institutions, central banks, think-tanks and investment banks. But in recent years one publication has towered above the others, thanks to its willingness to question the common complacency of policymakers: the annual report of the Bank for International Settlements (BIS), the so-called “central bankers' bank”. The latest edition, published on June 27th, points out several causes for concern.

Not least, perhaps, are the worrying similarities between the world economy now and that of the late 1960s and early 1970s, just before inflation surged. Like today, that was a period when both short- and long-term interest rates were low in real terms, while credit expanded rapidly. On June 30th—after The Economist had gone to press—America's Federal Reserve was widely expected to raise its federal funds rate by another quarter of a percentage point, to 3.25%. Yet that would still leave real rates at less than 0.5%, well below their usual level at this stage of an economic recovery (see left-hand chart) and below most estimates of the “natural” rate of interest consistent with non-inflationary growth. Moreover, the impact of higher short-term rates in America over the past year has been partly offset by a fall in bond yields, leaving overall monetary conditions very loose. Indeed, money looks unusually easy worldwide, with real interest rates close to zero in many countries.
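
The "real rate" arithmetic is simply the nominal policy rate less inflation; the inflation figure below is an assumption consistent with the article's conclusion rather than a number it quotes:

    # Approximate real federal funds rate (Fisher approximation).
    nominal_rate = 0.0325   # fed funds rate after the expected quarter-point rise
    inflation    = 0.029    # assumed consumer-price inflation in mid-2005 (not stated in the article)
    real_rate = nominal_rate - inflation
    print(f"Real short-term rate: {real_rate:.2%}")   # about 0.35%, i.e. below 0.5%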

A second parallel with the past, says the BIS, is that America's loose monetary policy is being exported to the rest of the world. In the late 1960s this occurred through the fixed exchange rates of the Bretton Woods system, which forced other countries to ease their policies to hold their currencies steady against a sickly dollar. Similarly, in the past couple of years the dollar's slide has caused China and other Asian countries to accumulate dollar reserves, and so run a looser monetary policy than they otherwise might, in order to prevent their currencies appreciating. As a result, global liquidity has been rising at its fastest pace since the 1970s. A third ominous similarity with 30-odd years ago is the jump in the prices of commodities and oil. Last but not least, governments' budget deficits have widened sharply in recent years, just as they did before the Great Inflation of the 1970s.

However, the BIS thinks it is unlikely that history is about to repeat itself with another burst of inflation. Prices took off in the 1970s largely because of errors in policy. Policymakers have since learned the hard way that rising inflation harms growth. Central banks have also been made independent of politicians and given the prime goal of price stability, which has helped to anchor inflationary expectations.

There are also some important differences between now and the late 1960s and early 1970s. Rich economies are now much less dependent on oil, and wage pressures have been muted as globalisation and the threat of outsourcing have curbed the bargaining power of workers. Deregulation, new technology and the integration of China into the global economy have also reduced the price of many goods, making it easier for central banks to keep inflation low. This has made inflation less sensitive to rising oil and commodity prices (see right-hand chart).

Beware of new hazards ahead

Although the BIS is not losing much sleep over a future surge in inflation, it worries about a different sort of risk: the rapid growth in debt and asset prices. Ironically, this is partly due to central banks' success in defeating inflation. Thanks to globalisation and technology, which have helped to hold down inflation, central banks have recently not needed to raise interest rates by as much as in past cycles. Well-anchored inflationary expectations also allowed rates to be cut more vigorously when economies stumbled in 2001 after the stockmarket crash. The cumulative effect of this is very low short-term interest rates.

Another change over the past three decades is that financial systems have been liberalised, making it even easier to borrow during a boom. This combination of cheap money and a liberalised financial system, suggests the BIS, explains why there have been more booms and busts in credit and asset prices in recent years. Top of the BIS's current list of worries are house prices, which it reckons are now “vulnerable to downward corrections”—likely to fall, in plain English—in several countries, and the vast amount of household debt. Sooner or later these could cause severe global economic and financial strains.

The BIS argues that America needs to raise interest rates further in order to restrain risk-taking in financial markets and borrowing by households. With debts and house prices already so high, this will hurt consumer spending, but it could help to avoid a more painful adjustment later.

Looking ahead, the BIS argues that policymakers need to modify their current policy frameworks in order to prevent the build-up of imbalances in future. Targeting inflation is not enough. Central banks also need to take more account of the increase in debt and exceptional rises in asset prices. Thus interest rates should be raised to curb excessive credit growth even if inflation remains tame. Regulatory policy could also be adjusted in a discretionary way over the cycle. Banks could be encouraged to build up more capital during booms, which would help to avoid excessive lending, and then be allowed to reduce their capital in bad times to cushion the economy from a credit crunch. During a rampant house-price boom lenders might be told to reduce the amount they can lend as a percentage of the purchase price of a home or to shorten repayment periods—the exact opposite of what tends to happen now.

The Federal Reserve has rejected the advice of the BIS for many years, insisting that the main job of a central bank is simply to control inflation. The risk is that in single-mindedly looking out for inflationary icebergs, a central bank will fail to spot the rocks that lie dead ahead.

Governments make promises to their employees that they cannot possibly keep.

Only problem: unlike private companies, they will drag down the rest of us as well.

---

Clearly unhealthy - Retired Americans' health care

2 July 2005
The Economist
(c) The Economist Newspaper Limited, London 2005. All rights reserved

Retired Americans' health care

Public-sector employers count the cost of their health-care promises

THESE days, Americans have many reasons to worry about how they will make ends meet when they retire. Social Security is said to be teetering; corporate and public pension plans are saddled with big deficits. The fastest-growing problem, however, is that rocketing costs are making it increasingly hard for employers to pay for the health care that many of them provide for retired staff. Many companies—think of General Motors—are finding the sums unpleasant. And following an accounting rule finalised last August by the Governmental Accounting Standards Board (GASB), the standard-setter for America's state and local governments, public-sector employers are also trying to come to grips with the cost of their promises.

Currently, nearly all governments operate on a “pay-as-you-go” basis. They dole out cash to pay for the medical care of those who have retired, but set none aside for future obligations. Their accounts have mirrored this, reflecting only current health-care expenditures and not the cost of promises that are yet to come due. This, critics say, lets governments make promises without counting the cost to be borne by future taxpayers.

The GASB's new rules compel America's 84,000 state and local government entities, including public hospitals and schools and fire and police departments, to put a value on the “other post-employment benefits” (OPEB)—consisting mostly of health care—they promise to employees. They will also have to record an expense (the “annual required contribution”) for the amount they would need to stash away to fund this long-term liability fully over 30 years. Actuaries estimate that this contribution could be five to ten times current annual outlays for retirees' health care. Unfunded contributions will appear as a liability on balance sheets. The rules will be phased in over three years, starting in fiscal years beginning after December 15th 2006 for the biggest governments.
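
A simplified sketch of why such a contribution dwarfs pay-as-you-go spending: amortising an accrued liability as a level payment over 30 years. The liability, current outlay and discount rate below are hypothetical, and the sketch ignores the "normal cost" of benefits still being earned, which is part of how actuaries get to five to ten times current outlays:

    # Level 30-year amortisation of a hypothetical accrued retiree-health liability.
    liability     = 1_000_000_000   # assumed accrued OPEB liability
    pay_as_you_go = 20_000_000      # assumed current annual outlay for retirees' care
    rate          = 0.05            # assumed discount rate
    years         = 30

    # Standard annuity formula: the level payment that retires the liability over `years`.
    amortisation = liability * rate / (1 - (1 + rate) ** -years)
    print(f"Amortisation payment: ${amortisation:,.0f}")                       # about $65m
    print(f"Multiple of pay-as-you-go spending: {amortisation / pay_as_you_go:.1f}x")
    # The full annual required contribution adds the normal cost of benefits
    # being earned by current employees on top of this amortisation payment.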

The idea, says Karl Johnson of the GASB, is for the new rules to make the cost of retiree-health obligations clearer and thus keep governments from making over-generous promises. “These costs were always there,” he says. “They just weren't disclosed and often were not even measured.” The private sector has had similar rules since 1992. But the effect on the public sector could be much bigger, because government employers are likelier to provide health-care benefits to retired staff (77% of “large” governments do so, against 36% of comparable companies, according to the Kaiser Family Foundation). Public-sector benefits also tend to be more generous.

The unfunded liabilities that will come to light are likely to be huge, dwarfing many pension deficits, because pensions are pre-funded. Mercer Consulting reckons that governments that have not set aside money for future obligations will face liabilities 40-60 times the current annual cost of retirees' health care. So California, for instance, which allocated $895m for retirees' health care in the 2005-06 budget, could have an unfunded OPEB liability of around $36 billion. North Carolina estimates its liability at $13 billion-14 billion. The Los Angeles Unified School District puts its liability, at the low end, at $5 billion, equivalent to 80% of its general-purpose operating budget. Keenan & Associates, a consulting firm that works with California's school systems, estimates that the unfunded liability for the Golden State's schools and community colleges is $22 billion.

If governments do nothing, their credit ratings could be damaged and their cost of borrowing could rise. Joseph Mason of Fitch, a rating agency, says that his firm will look not only at the big unfunded liability numbers but also at the steps governments take to manage their OPEB obligations. “With health-care costs spiralling and workforces ageing,” he says, “standing still isn't a viable option.”

Employees worry that governments will cut health benefits, as the private sector did when its rules were introduced. Some firms dropped health-care coverage for new employees. Back in 1988, according to Kaiser, 66% of big firms offered health benefits to retired employees, compared with little more than a third today.

OPEB promises enjoy less legal protection than pension promises, some of which are even guaranteed in state constitutions. Even so, they may be hard to cut. The public sector is heavily unionised, so hacking at benefits could mean difficult negotiations or strikes. It would also damage morale and recruitment.

Governments are therefore scrambling to find other, politically more palatable options. Those in the best financial health may fund their OPEB promises in advance. Others are exploring milder ways of cutting costs. For instance, North Carolina, which gives its workers health benefits for life after only five years on the job, is considering lengthening the qualifying period. Some governments want employees or Medicare to foot more of the bill.

Several California school districts are talking to Wall Street about issuing bonds, as some states have done to shore up ailing pension plans. The idea is that, as long as the investment returns on the money raised are higher than the interest rates paid on debt, everyone gains. But this gamble can backfire, as it did for New Jersey, which issued pension bonds in 1997 and suffered when the technology bubble burst. There is no easy fix.
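
A toy illustration of the wager involved in such bond deals; the issue size, coupon and return scenarios below are hypothetical:

    # Borrow at a fixed coupon, invest the proceeds, hope returns beat the coupon.
    proceeds = 500_000_000   # assumed bond issue
    coupon   = 0.055         # assumed fixed rate owed on the bonds

    for scenario, market_return in [("good year", 0.08), ("bad year", -0.10)]:
        net = proceeds * (market_return - coupon)
        print(f"{scenario}: investments return {market_return:.0%}, "
              f"{'gain' if net >= 0 else 'loss'} of ${abs(net):,.0f}")
    # In the bad year the assets fall 10% while the coupon is still owed,
    # roughly the position New Jersey found itself in when the bubble burst.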