Saturday, July 30, 2005

We are now a decade removed from the Netscape IPO.

Inequality revisited.

Most staggering figure:

"In 2004 the world had 587 billionaires with a combined wealth of $1.9-trillion -- equivalent to nearly 20 per cent of the annual economic output of the United States."



The rich get richer, the poor get squat


Worlds Apart:
Measuring International
and Global Inequality

By Branko Milanovic

Princeton University Press,

227 pages, $44.59

Global economic inequality isn't something that grabs a lot of headlines. And a book on the subject surely doesn't seem like gripping summer reading. But don't go away. This subject is critically important, and this particular book is extraordinary.

The debate among economists and other experts over capitalism's role in widening or narrowing the income gap between the world's winners and losers -- or, more simply, between the world's rich and poor -- is nothing short of raucous (yes, economists can be raucous). The stakes are as high as they get. If people around the world come to perceive that today's globalized capitalism is making the already rich vastly richer, while it's simultaneously leaving a large percentage of the world's population behind, then it will lose much of its moral standing as a set of principles for ordering our social and economic lives. And any social system that loses its moral standing -- its legitimacy, in the jargon of social scientists -- is a target for rebellion.

As so often happens in high-stakes debates when there's abundant evidence to support widely varied positions, people point to that which satisfies their temperamental biases. In this case, optimists focus on the soaring standard of living in countries like Chile, Malaysia and Taiwan, in the booming commercial zones of coastal China, and in the high-tech cities of southern India. Pessimists, on the other hand, focus on the chronic economic crisis in Africa and stagnation in much of Latin America. But the matter of income gaps and their causes is extraordinarily complex, so the evidence needs painstaking interpretation.

Branko Milanovic's new book does the job, and the results are fascinating. A lead economist at the World Bank, and recently a senior associate with the Carnegie Endowment for International Peace, Milanovic has written probably the most comprehensive, thorough and balanced assessment yet of global inequality. And for the most part, it shows the pessimists are right. Milanovic interprets the data with great care, and understates his conclusions, but in the end his book is an indictment of today's global economic order as, on balance, an abject moral failure.

The statistics describing the gap between rich and poor people on Earth are truly breathtaking. According to a recent report from the World Bank, in 2001, about 1.1 billion people, or one-fifth of the population of the world's poor countries, lived on less than what one dollar a day would buy in the United States. About 2.7 billion people, or more than half the developing world's population, lived on less than two dollars a day. In 2004, says the UN's Food and Agriculture Organization, 852 million people faced chronic hunger, up 15 million from the previous year. And in the same year, according to Unicef, one billion children -- nearly half the children in the world -- were severely deprived. More than 600 million children didn't have adequate shelter, and every day, 4,000 died because of dirty water or poor sanitation.

Meanwhile, at the other end of the spectrum, in 2004 the world had 587 billionaires with a combined wealth of $1.9-trillion -- equivalent to nearly 20 per cent of the annual economic output of the United States. If liquidated, this wealth would have been sufficient to hire the poorest third of the world's workers for a whole year -- a labour force of nearly one billion people. The capacity of the world's wealthiest people to buy the labour of its poorest is probably the best measure of humankind's economic disparities, and never before have so few had the ability to command the labour of so many.

But are these disparities getting worse? Here we have to tread carefully, because the question leads us into a theoretical and statistical minefield. Milanovic begins the journey by distinguishing among three types of inequality: inequality between countries' average incomes (measured in terms of GDP per capita); between average incomes weighted by the countries' populations; and between individuals. This tripartite distinction is critically important, because each measure of inequality tells us something distinct. Let's start with the first.
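Milanovic's tripartite distinction can be made concrete with a small numerical sketch. In the code below, all country names and figures are hypothetical, and the Gini coefficient stands in as a generic inequality index (one of several measures Milanovic discusses); the point is only to show how the three concepts diverge on the same data:

```python
# Toy illustration of the three concepts of inequality.
# All country names and figures are hypothetical.

def gini(values, weights=None):
    """Gini coefficient via the area under the Lorenz curve (0 = perfect equality)."""
    if weights is None:
        weights = [1.0] * len(values)
    pairs = sorted(zip(values, weights))           # poorest first
    total_w = sum(w for _, w in pairs)
    total_vw = sum(v * w for v, w in pairs)
    cum_w = cum_vw = area = 0.0
    for v, w in pairs:
        w0, vw0 = cum_w / total_w, cum_vw / total_vw
        cum_w += w
        cum_vw += v * w
        # trapezoid slice under the Lorenz curve
        area += (cum_w / total_w - w0) * (cum_vw / total_vw + vw0) / 2
    return 1 - 2 * area

# Hypothetical countries: (per-capita income in $, population in millions)
countries = {"Richland": (30000, 50), "Midland": (6000, 100), "Poorland": (1000, 1000)}
incomes = [inc for inc, _ in countries.values()]
pops = [pop for _, pop in countries.values()]

# Concept 1: each country counts once, regardless of size
concept1 = gini(incomes)
# Concept 2: countries weighted by population, so Poorland dominates
concept2 = gini(incomes, weights=pops)
# Concept 3: treat individuals as one global population; here each country
# is crudely split into a poorer half (50% of its mean) and a richer half (150%)
people = []
for inc, pop in countries.values():
    people += [(0.5 * inc, pop / 2), (1.5 * inc, pop / 2)]
concept3 = gini([v for v, _ in people], [w for _, w in people])

print(concept1, concept2, concept3)
```

Note that, for the same country means, the individual-level measure is always at least as high as the population-weighted one: adding within-country dispersion pushes the Lorenz curve further from equality, which is precisely why Milanovic argues the third concept is the most informative.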

Orthodox economic theory says that rich and poor countries should eventually converge to similar levels of per capita income, because investment should flow from rich countries, where capital is abundant and returns are limited, to poor countries, where capital is scarce and returns are much higher. If convergence is happening, then GDP in low-income countries should generally grow faster than in high-income countries, and over time, the incomes of people in poor countries should reach the level of incomes in rich countries, at which point the distinction between poor and rich will disappear.

Economists have known for some time that, for the most part, convergence isn't happening. Many of today's poorest countries have GDP growth rates far below those of the richest countries. And, if we look back over a century or more, inequality between incomes in poor countries and those in rich countries has widened. Such inequality, says former IMF deputy managing director Stanley Fischer, "appears to have been increasing for at least 400 years." Milanovic concurs. By his analysis, this kind of inequality has more than doubled since 1820, with only a brief pause in the trend between the First and Second World Wars.

But his most striking finding concerns the polarization of the world's countries into rich and poor clubs: Relative to the rich, almost all the middle-income countries in 1960 dropped into the ranks of the poor by 2000, and the club of rich countries became almost exclusively Western. "While in the year 1960," he writes, "there were forty-one rich countries -- nineteen of them non-Western -- in 2000, there were only thirty-one rich countries, and only nine of them were non-Western. None of the African countries (except for Mauritius) and none of the Latin American and Caribbean countries (except for the Bahamas) were left among the rich. Latin America and the Caribbean, probably for the first time in 200 years, had no country that was richer than the poorest West European country."

And once you're in the poor club, history shows that it's almost impossible to get out. "Unless there is a remarkable discontinuity with the patterns of development that have lasted during the past half-century (and possibly longer), the likelihood of escaping from the bottom rung is almost negligible."

But many of the countries that have fallen behind have relatively small populations or are marginal in the world economy. Upbeat economists note that the two really big countries, China and India, are doing exceptionally well, with per-capita income-growth rates that are two to four times higher than those of rich countries. And, they say, when we weight countries' incomes by their populations, which gives the economic success of China and India its due, we find that global income inequality is declining.

Milanovic acknowledges that, according to this measure, global inequality has indeed declined, although only for the past 20 years or so. Yet he argues that this measure is ultimately not very satisfactory. Among other things, it doesn't gauge inequality between individuals within countries. If we really want to know what's happening with global inequality, including inequality both between and within countries, we should measure inequality between all the world's individuals, as if national boundaries had suddenly vanished, allowing us to examine the income of each person as a member of a single global population.

In the past decade, Milanovic has pioneered exactly this kind of measure, and in this book he presents his latest results. Exploiting the World Bank's vast database on household incomes and expenditures -- a database that details the economic behaviour of individual people and families in countries that together include more than 84 per cent of the world's population -- Milanovic finds that inequality between individuals stayed roughly constant, and extremely high, in the last two decades of the 20th century. Between 1988 and 1993, inequality increased mainly because average incomes in rural Asia -- especially in the still-immense rural populations of China, India and Bangladesh -- grew much more slowly than incomes in rich countries.

Inequality increased, too, because of a rapidly widening gap between rural and urban incomes in China. Between 1993 and 1998, the overall inequality trend reversed itself slightly: While the gap between rural and urban incomes in China continued to grow, the gap between rural incomes in India and China and incomes in rich countries shrank slightly.

So upbeat assessments of the world inequality trend, based on population-weighted measures, are wrong. Inequality between the world's rich and poor people has been rising for a very long time and has stayed high in recent decades. To the extent that there has been improvement in the population-weighted inequality statistics, it has been entirely due to China's spectacular growth since the 1970s. But when we look at inequality between individuals, we find that widening inequalities inside China and India have counterbalanced the gains from China's overall growth.

We also find, definitively, that we now live in a world without a middle class: More than 77 per cent of the world's people are poor (with a per-capita income below the Brazilian average), while nearly 16 per cent are rich (with incomes above the Portuguese average), which leaves less than 7 per cent in the middle.

This is not a prescription for a stable global polity. Milanovic points out that political theorists going back to Aristotle have argued that a large middle class is key to civil peace. The more we live in a truly global society -- intimately connected by fibre-optic cables, air travel and trade -- the more the absence of a world middle class matters. And the huge income differences among us matter, too, because people care not just about their absolute level of income, but also about their relative position in the overall income distribution. "Globalization must, by changing the reference point upward," Milanovic concludes, "make people in poor countries feel more deprived."

Worlds Apart has too many tables, graphs and formulas for lazy summertime reading. Yet Branko Milanovic makes a difficult subject remarkably accessible. His expertise and intellectual integrity inform every page. And underneath all the technical apparatus of argument and analysis, one senses his passion for the cause of reducing inequality -- even if only a little -- and his perplexity at finding himself one very lucky human being among billions who aren't so fortunate.

Thomas Homer-Dixon is director of the Trudeau Centre for Peace and Conflict Studies at the University of Toronto and author of The Ingenuity Gap.

The key to helping developing countries help themselves.

Establishing sources of ideas (think tanks; the fractal brain) and instituting translational infrastructure (regional networks). Unless one changes how people think, and does so on the terms of their own context, one will never change how they behave.


Journal of International Development
J. Int. Dev. 17, 727–734 (2005)
Published online in Wiley InterScience. DOI: 10.1002/jid.1235



Overseas Development Institute (ODI), London, UK

Abstract: Better utilization of research and evidence in development policy and practice can help save lives, reduce poverty and improve the quality of life. However, there is limited systematic understanding of the links between research and policy in international development. The paper reviews existing literature and proposes an analytical framework with four key arenas: external influences, political context, evidence and links. Based on the findings of stakeholder workshops in developing countries around the world, the paper identifies four key issues that characterize many developing countries. These are: (i) troubled political contexts; (ii) problems of research supply; (iii) external interference; and (iv) the emergence of civil society as a key player. Despite these challenges, two institutional models seem to be particularly effective: (i) think tanks and (ii) regional networks. Copyright © 2005 John Wiley & Sons, Ltd.


Often it seems that researchers, practitioners and policymakers live in parallel universes.
Researchers cannot understand why there is resistance to policy change despite clear and
convincing evidence. Policy-makers bemoan the inability of many researchers to make
their findings accessible and digestible in time for policy decisions. Practitioners often just
get on with things.

Yet better utilization of research and evidence in development policy and practice can
help save lives, reduce poverty and improve the quality of life. For example, the results of
household disease surveys in rural Tanzania informed a process of health service reforms
which contributed to over 40 per cent reductions in infant mortality between 2000 and
2003 in two districts.1

Indeed, the impact of research and evidence on development policy is not only beneficial; it is crucial. The HIV/AIDS crisis has deepened in some countries because of the reluctance of governments to implement effective control programmes despite clear evidence of what causes the disease and how to prevent it spreading (Court, 2005).

Despite the importance of, and interest in, the topic, very little work has focused on the international development sector. With a few notable exceptions (Garrett and Islam, 1998; Keeley and Scoones, 2003; Ryan, 1999, 2002; Lindquist, 2001; and the strategic evaluation by the International Development Research Centre (IDRC)2), most work has focused on Organisation for Economic Co-operation and Development (OECD) countries.


ODI has been looking at research-policy linkages in international development for over five years. We have completed extensive literature reviews, drawing on various streams of literature such as economics, political science, management, anthropology, social psychology, marketing communication, and media studies (de Vibe et al., 2002; Crewe and Young, 2002). We have also collected and analysed a large number of case studies on the topic of Bridging Research and Policy (Court and Young, 2003; Court et al., 2004), and more recently have been involved in advisory work and workshops, seminars and training courses for researchers and policy makers in the UK and developing countries.

We define both research and policy very broadly. By research we do not just mean classical scientific research; we mean any systematic learning process, from theory building and data collection to evaluation and action research. Similarly, policy is not narrowly defined as a set of policy documents or legislation; it is about setting a deliberate course of action and then implementing it. It includes the setting of policy agendas, official policy documents, legislation, changes in patterns of government spending to implement policies, and the whole process of implementation. It is also about what happens on the ground: a policy is worth nothing unless it results in actual change.

Policy-making used to be thought of as a linear and logical process, in which policy-makers identified a problem, commissioned research, took note of the results and made sensible policies which were then implemented. Clearly that is not the case. Policy-making is a dynamic, complex, chaotic process, especially in developing countries. Clay and Schaffer's 1984 book Room for Manoeuvre, about agricultural policy in Africa, observed that 'the whole life of policy is a chaos of purposes and accidents. It is not at all a matter of the rational implementation of decisions through selected strategies'. That is increasingly recognized as a more realistic description of the policy process than the linear rational model, though the truth is probably somewhere in the middle. Furthermore, as Steve Omamo (2003) pointed out in a recent report on policy research on African agriculture: 'Most policy research on African agriculture is irrelevant to agricultural and overall economic policy in Africa'. It is not really surprising that the link between research and policy is tenuous and difficult to understand if policy processes are complex and chaotic and much research is not very policy relevant.

ODI's Context, Evidence and Links framework is an attempt to simplify the complexity of how evidence contributes to the policy process, so that policy makers and researchers can make decisions about how they do their work to maximise the chance that policies are evidence-based and that research has a positive impact on policy and practice.

It identifies four broad groups of factors. We call the first external influences: the factors outside a particular country which affect policy makers and policy processes within the country. Even in big countries such as India, international economic, trade and even cultural issues matter a great deal. In smaller, heavily indebted countries, World Bank and bilateral donor policies and practices can be very influential. At the national level, the factors fall into three main areas. The political context includes the people, institutions and processes involved in policy making. The evidence arena is about the type and quality of research and how it is communicated. The third arena, links, is about the mechanisms affecting whether and how evidence gets into the policy process.

An interesting thing about the framework is how well it maps onto real-life activities. The political context sphere maps onto politics and policy making; evidence onto the processes of research, learning and thinking; and links onto networking, the media and advocacy. Even the overlapping areas map onto recognizable activities. The intersection of the political context and evidence represents the process of policy analysis: the study of how to implement specific policies and of their likely impact. The overlap between evidence and links is the process of academic discourse through publications and conferences, and the area between links and political context is the world of campaigning and lobbying. Evidence from the case studies suggests that the area in the middle, the bulls-eye, where convincing evidence provides a practical solution to a current policy problem and is supported by and brought to the attention of policymakers by actors in all three areas, is where the link between evidence and policy is likely to be most immediate.


Over the last few years, ODI has run a number of workshops with researchers, policy makers and advocates to learn from their own experiences, gather case studies, help them to understand these issues, and provide guidance for those in developing countries who would like to increase the policy impact of their work.3 These have included:

* UK: on bridging research and policy, with development researchers, practitioners and policy makers
* Botswana: on civil society organizations (CSOs), Evidence and Pro-poor Policy, at the CIVICUS World Assembly 2003
* Morocco: with the Economic Research Forum, for a range of research-policy stakeholders in the Middle East
* India: with the Indian Institute of Technology, involving a range of research-policy stakeholders on water policy in India
* Indonesia: on Bridging Research and Policy, with researchers belonging to the East Asia Development Network
* Moldova: on Policy Entrepreneurship, for staff from think tanks in Central and Eastern Europe and the former Soviet Union (organized with the Open Society Institute)
* Kenya: with various stakeholders, on CSOs, Evidence and Policy Influence
* Egypt: to help researchers and policy makers promote evidence-based policy making in the Small and Medium Enterprise Sector
* Southern Africa (Malawi, Zambia and Mozambique), Eastern Africa (Tanzania and Uganda) and West Africa (Ghana and Nigeria): for staff from research institutes, national non-governmental organizations (NGOs) and networks, along with a wide spectrum of stakeholders interested in how CSOs can contribute more effectively to evidence-based policymaking.

These have led us to identify four practical issues complicating evidence-based
policymaking in developing countries, and two approaches which seem to help.


4.1 Political Context: Politics and Institutions

Research-policy links are dramatically shaped by the political context. The policy process
and the production of research are in themselves political processes from start to finish.
Key influencing factors include:
* issues of political culture
* the extent of civil and political freedoms in a country
* political contestation, institutional pressures and vested interests
* the capacity of government to respond
* the attitudes and incentives among officials, their room for manoeuvre, local history,
and power relations

In most of the workshops, participants agreed that the prevailing political culture can be one of the biggest challenges for research uptake. Instability and high turnover in key positions, authoritarianism as a virtue, clientelism, empirical policy-making and lack of transparency, among other characteristics of the policy context, were described as crucial hurdles to the process of informing policy with evidence-based research. In Kenya, for example, participants considered that overlaps between ministers' responsibilities effectively created vacuums in the system, making it almost impossible for civil society organizations to engage with the policy processes.

Understanding the degree of political contestation and the attitudes and incentives of officials is important in explaining some important public policies. For instance, in Ethiopia, the government's energy policies focus on large-scale hydro, oil and gas investments despite the fact that the majority of the population, in particular the poor, use biomass as both a source of energy and subsistence. This focus on large-scale investments illustrates the importance of the elite's agenda in public policy. At a workshop on CSOs, Evidence and Pro-poor Policy in Botswana, participants considered that before pursuing pro-poor policies, governments needed to stop anti-poor ones.

4.2 Problems of Research Supply and Communication

Participants in many workshops described the lack of high-quality, credible research on current policy issues as a major constraint. Lack of investment in higher education in developing countries over the last two decades has dramatically eroded academic capacity (Commission for Africa, 2005). Participants at the Morocco workshop complained of poor institutional capacity for research among academic institutions throughout Africa and the Middle East: in particular, inadequate financial resources, the absence of peer review systems and limited access to research methods and tools such as data management software. More recently, consultancy firms have been poaching the best researchers from think tanks and universities,4 which undermines the capacity of academic institutions to secure the resources necessary to train new generations of researchers and policymakers.
Participants at ODI workshops have been unanimous about the need to package research in an attractive and useful manner, and about the lack of skills to do this. The type of research has also been highlighted. Policymakers, it seems, prefer action research, or evidence of actions or events taking place in real life. Theoretical or hypothetical arguments are not as effective as pilots, case studies or comparative studies. In Moldova, for instance, it was made clear that comparative studies can have more impact. In other workshops, for instance with Save the Children UK's Young Lives programme, ODI has found that policymakers in some countries value examples from particular countries above others. Vietnam and Ethiopia, it was suggested, would consider Chinese success stories as relevant to their context.

4.3 Donors have an Exaggerated Influence

Donors can have a dramatic influence on research-policy interactions in developing countries, by influencing both the research that is undertaken and policy processes. Poverty Reduction Strategy Processes were frequently cited at the African workshops as increasing investment in local research, though much of this is still done by Northern organizations and external consultants, which has raised concerns about relevance and about beneficiaries' access to the findings.

The role that donors play can be seen as both supportive and pervasive. While external
support can provide research and research institutions with leverage and independence, it
can also impose a research agenda of little relevance to the country’s policy context and
culture. In Egypt, for instance, it was argued that international consultants, who play an
important role in shaping the country’s policies, were not aware of the Egyptian context.

4.4 The Changing Role of Civil Society

CSOs have played a vital role in development for decades, as innovators, service providers
and advocates with and for the poor. They are increasingly involved in policy processes,
but often with limited success. While their legitimacy and credibility with the local
communities they support is widely recognized, national governments remain wary of
their greater involvement in policy.

The Africa workshops demonstrated a high demand from CSOs for enhanced capacity
and greater opportunity to engage more effectively with national and international
development policy, and provided some good examples of where they have been
successful. The Malawi Economic Justice Network in Malawi, Cruzeiro do Sul in
Mozambique and Forest Watch in Ghana have all successfully influenced policies through
campaigns on debt reduction, fair trade and sustainable forest management, respectively,
often through forming partnerships and coalitions, giving them greater credibility and
legitimacy with policymakers.

In Botswana, participants described how CSOs help the government to develop policy by working as facilitators, intermediaries, amplifiers and filters, and also help to implement, monitor and evaluate the impact of policies. Good communication is essential if they are to play these roles, but it is frequently a problem. Participants at the workshop in Morocco described severe communication challenges between the two groups, 'as if they lived in different worlds and spoke different languages'.


Two approaches seem to feature frequently in cases where research-based evidence has
influenced policy in developing countries:

5.1 The Think Tank Approach

Think Tanks are a well-developed organizational model, and play an important role in policy processes in developed countries. While there are relatively few Think Tanks in developing countries, the Think Tank Approach, delivering academically credible research-based evidence and advice to policy makers in the right format at the right time, is a frequent feature of successful cases.

Two examples from the Africa workshops illustrate different methods of developing the necessary credibility and influence with policy makers. Upon taking up his post, the new Executive Director of the Kampala-based Economic Policy Research Centre (EPRC) concentrated the Centre's research programme on poverty issues in Uganda. EPRC was then able to establish a reputation for expertise in poverty issues that gave it the credibility to play a key role in the poverty reduction strategy paper (PRSP) process in Uganda. Its success in poverty analysis allowed it to extend its research across a wider range of policy issues, including nutrition, food security, agriculture, microeconomic policies, tourism, competitiveness and trade, and strengthening financial institutions. The Centre de Recherches Economiques Appliquées (CREA) in Senegal, on the other hand, was more opportunistic, seeking to identify and quickly gather evidence to help policy makers respond to emerging issues. Policy makers now frequently come to them for advice.

Think Tanks need both the capacity to do credible research, communicate effectively and collaborate closely with policy makers, and the organizational capacity to survive and thrive in a highly competitive environment. Participants at the Kenya workshop wanted help to establish local networks and coalitions. CSOs in Malawi were interested in learning more about how to do credible research. Organizations in Eastern Europe needed advice on fundraising and organizational development.

5.2 Networks

National, regional and global networks are playing an increasing role in development policy (Stone and Maxwell, 2005), and many national and regional networks were cited as influential during the workshops. These included the Southern Africa Forum for Disability and Development (SAFOD), the Malawi Economic Justice Network (Malawi), the Rural Media Network and the Association of African Universities (Ghana), the Community Development Resource Network (Uganda) and the Nature Conservation Foundation (Nigeria), among others.

A number of the case studies involved networks. For example, donors applied pressure through Southern Africa networks to promote national civil society's participation in the Malawi PRSP. In Indonesia, participants considered that the regional economic crisis stimulated the development of networks and partnerships between previously non-collaborative actors, including the private and public sectors. Not all stories were so positive, though. In India, it was the participants' opinion that the complex webs of networks in the water sector undermined specialist knowledge and made it more difficult to develop credible evidence on particular subjects.

Participants in many of the workshops described how regional networks and supranational bodies can enhance the policy leverage of national and local CSOs. The Association
of Southeast Asian Nations (ASEAN), for instance, is seen as a credible organization that
draws from various constituencies interested in a common integration vision. CSOs
working within this context can take advantage of ASEAN’s position and regional role to
bridge the gap between research and policy.


The Tanzania example in the introduction provides good evidence that better use of
research-based evidence in development policy and practice can help save lives, reduce
poverty and improve the quality of life. There is considerable understanding about how
research-based evidence contributes to policy in OECD countries, and much expertise
among policy research institutes and think tanks in the developed world about how to
make it happen. But our understanding remains shallow in the developing world,
especially in countries where political processes themselves are often poorly understood.
There is however much interest among donors, researchers, CSOs and policy makers in
doing it better.

There is a wide range of factors involved, and each context will have its own unique mix. A more systematic understanding of the external context, the political context, the evidence and the links between them will help researchers, policy makers, practitioners and CSOs decide how they can best promote more evidence-based policy. Particular attention needs to be given in developing countries to:
* the political context: the factors which shape local policy and political processes;
* the problem of research supply: is there the capacity to generate and use research-based evidence effectively?
* the role of external actors: how do donors influence research and policy processes?
* the role of civil society: how can civil society be empowered to promote evidence-based policymaking?

Think Tanks have developed a range of approaches which are particularly effective at promoting evidence-based policy in developed countries, and similar approaches seem to be effective in developing countries, although the Think Tank itself might not be the most appropriate vehicle in every context. These approaches play an important role and are highly valued by local organizations. There remains much more to learn, and ODI's RAPID Programme is working on many of these issues.


Further information can be found online [Accessed 29 June 2005].

Copyright © 2005 John Wiley & Sons, Ltd. J. Int. Dev. 17, 727–734 (2005)

Knowledge management = multi-way communication.

The debate between bottom-up and top-down must belong to the past.


How one airline flew back into the black

Emulating low-cost competitors, American uses workers' expertise.

By Alexandra Marks | Staff writer of The Christian Science Monitor

NEW YORK - Two American Airlines mechanics didn't like having to toss out $200 drill bits once they got dull. So they rigged up some old machine parts - a vacuum-cleaner belt and a motor from a science project - and built "Thumping Ralph." It's essentially a drill-bit sharpener that allows them to get more use out of each bit. The savings, according to the company: as much as $300,000 a year.

And it was a group of pilots who realized that they could taxi just as safely with one engine as with two. That practice, instituted as policy, has helped cut American's fuel consumption even as prices have continued to rise to record levels.

From the maintenance floor to the cockpit, American Airlines is scouring its operations daily to increase efficiency and find even the smallest cost savings. It's paid off: Last week, the company announced its first profit in almost five years.

While the other so-called legacy carriers are also slashing labor costs and increasing efficiency in an effort to compete with successful low-cost airlines, American has been the most aggressive in emulating the positive employee relations of low-cost rivals. Indeed, when American's management intensified its cost-saving efforts, it didn't turn to high-priced outside consultants. Rather, it asked its employees, since they do their jobs day in and day out and know them probably better than anyone else.

"Communication lines were suddenly open," says Justin Fuller, an American engineer in Tulsa, Okla. "Before, people had ideas, but they didn't know where to take them. They also thought it wouldn't make any difference if they did. Now, the groundwork has been laid so people know where to take their ideas and how to get them implemented."

In an industry long noted for hostile labor-management relations, American's new approach is garnering high praise. But analysts caution that the change may not be enough to help turn the airline around completely. Fuel prices continue at record highs - double what they were a year ago - and, even though fares have nudged up only slightly, passengers are still demanding bargain-basement fares.

The combination continues to hobble most of the major US carriers. United, Delta, and Northwest are all struggling to either emerge from bankruptcy or avoid it. At Northwest, the clock has started ticking on a potential strike.

Of the major legacy carriers, only American and Continental showed a profit in the second quarter - usually the industry's strongest. Although the earnings are good news, analysts say that American still faces a long haul.

"There's some potential looking forward. Things are better," says airline analyst Helane Becker of the Benchmark Co. in New York. "But the worst isn't over yet. Fuel costs are still very high."

Two years ago, American was dodging bankruptcy, and labor strife hit a peak after unions voted to accept steep pay cuts, only to find out that management had given itself big bonuses and protected its own pension plans from creditors in the event of bankruptcy. Unions were livid and temporarily suspended approval of the givebacks.

Call that the nadir. The board reacted almost immediately, ousting CEO Donald Carty and eventually bringing in Gerard Arpey, who put into place the current employee-based turnaround plan.

"American seemed to figure out sooner than United or Delta that the kind of cost structures they had through the '90s weren't going to make it, and they went about changing them in a much less confrontational way than the other carriers," says Clint Oster, an aviation economist at Indiana University in Bloomington. "There've been lots of jokes made about American's decision to pull the pillows out of certain flights. And while you can argue that is pretty silly, it does show that kind of detailed attention to costs."

At the same time that it streamlines its operations, American is also focusing on strengthening them. While other carriers are outsourcing maintenance to save money, American is expanding its main maintenance facility in Tulsa. And it's challenging its employees to come up with $500 million in savings there during the next year.

"They're going to get to that goal ... by finding simpler ways to get their jobs done, lowering costs, and increasing productivity," says James Beer, American's chief financial officer. "As a result, we believe we'll be able to bring in other maintenance work [from other airlines] into the Tulsa organization."

For instance, on a tip from a maintenance crew, engineer Blakle Burgess came up with a way to save 90 percent on bathroom mirrors when they need to be replaced. Instead of ordering them prefabricated, Ms. Burgess helped design a way to make them on site with far cheaper raw materials. That way, when maintenance needs to replace a mirror, they can make it without having to go through the process of ordering and waiting for one to be delivered.

"Keeping stuff in house also spurs a sense of us working together," says Burgess. "People are seeing that we're doing something different. We're trying to keep work in while other airlines are taking a cheaper route and outsourcing."

Although there remain employee skeptics who are quick to note the company's history of labor troubles, most are reserving judgment.

And one thing is clear: The report that the company finally turned a profit, albeit a small one, is having an impact.

"It brings a smile to a lot of our faces," says mechanic Mick Davenport.

Copyright © 2005 The Christian Science Monitor. All rights reserved.

Wal-Mart's Most Favored Nation.

From Reuters:

"It (Wal-Mart) bought $18 billion worth of goods from China last year. If the company were a country, it would rank as China's sixth-largest export market, ahead of the United Kingdom, Taiwan, Singapore and France, economists estimate."


Friday, July 29, 2005

Not all tails have to be new.

July 28, 2005

Reading Between the Lines of Used Book Sales


THE Internet is a bargain hunter's paradise. eBay is an easy example, but there are many other places for deals on used goods.

While Amazon is best known for selling new products, an estimated 23 percent of its sales are from used goods, many of them secondhand books. Used bookstores have been around for centuries, but the Internet has allowed such markets to become larger and more efficient. And that has upset a number of publishers and authors.

In 2002, the Authors Guild and the Association of American Publishers sent an open letter to Jeff Bezos, the chief executive of Amazon, which has a market for used books in addition to selling new copies. "If your aggressive promotion of used book sales becomes popular among Amazon's customers," the letter said, "this service will cut significantly into sales of new titles, directly harming authors and publishers."

But does it? True, consumers probably save a few dollars while authors and publishers may lose some sales from a used book market. Yet the evidence suggests that the costs to publishers are not large, and also suggests that the overall gains from such secondhand markets outweigh any losses.

Consider a recent paper, "Internet Exchanges for Used Books," by Anindya Ghose of New York University and Michael D. Smith and Rahul Telang of Carnegie Mellon. (The text of the paper is available online.)

The starting point for their analysis is the double-edged impact of a used book market on the market for new books. When used books are substituted for new ones, the seller faces competition from the secondhand market, reducing the price it can set for new books. But there's another effect: the presence of a market for used books makes consumers more willing to buy new books, because they can easily dispose of them later.

A car salesman will often highlight the resale value of a new car, yet booksellers rarely mention the resale value of a new book. Nevertheless, the value can be quite significant.

This is particularly true in textbook markets, where many books cost well over $100. Judith Chevalier of the Yale School of Management and Austan Goolsbee of the University of Chicago Graduate School of Business recently examined this market and found that college bookstores typically buy used books at 50 percent of cover price and resell them at 75 percent of cover price. Hence the price to "rent" a book for a semester is about $50 for a $100 book.
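The buyback arithmetic works out like this. A quick sketch using the rates quoted above; the function and its name are mine, not the study's:

```python
# Net cost of "renting" a textbook for a semester under the bookstore's
# buyback model: the store buys back at 50% of cover price and resells
# used copies at 75% of cover price. Illustrative only.

def semester_rental_cost(cover_price, buyback_rate=0.50, used_resale_rate=0.75):
    """Return (cost if bought new, cost if bought used), net of buyback."""
    new_cost = cover_price - buyback_rate * cover_price
    used_cost = used_resale_rate * cover_price - buyback_rate * cover_price
    return new_cost, used_cost

new_cost, used_cost = semester_rental_cost(100.0)
print(f"Rent a new $100 book for a semester:  ${new_cost:.2f}")   # $50.00
print(f"Rent a used copy of the same book:    ${used_cost:.2f}")  # $25.00
```

The $50 figure in the article is the new-book case: pay $100, sell back for $50.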

Ms. Chevalier and Mr. Goolsbee found that students were well aware of industry practices and took resale value into account when they bought books. (The study, "Are Durable Goods Consumers Forward Looking? Evidence from College Textbooks," is available at Mr. Goolsbee's Web site.)

Back to Amazon. Professors Ghose, Smith and Telang chose a random sample of books in print and studied how often used copies were available on Amazon. In their sample, they found, on average, more than 22 competitive offers to sell used books, with a striking 241 competitive offers for used best sellers. The prices of the secondhand books were substantially cheaper than the new, but of course the quality of the used books (in terms of wear and tear) varied considerably.

According to the researchers' calculations, Amazon earns, on average, $5.29 for a new book and about $2.94 on a used book. If each used sale displaced one new sale, this would be a less profitable proposition for Amazon.

But Mr. Bezos is not foolish. Used books, the economists found, are not strong substitutes for new books. An increase of 10 percent in new book prices would raise used sales by less than 1 percent. In economics jargon, the cross-price elasticity of demand is small.
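In plainer terms, that elasticity is just the ratio of the two percentage changes. A minimal sketch, with illustrative numbers matching the "10 percent / less than 1 percent" claim above (not the paper's actual data):

```python
# Cross-price elasticity of demand: the % change in used-book sales
# per % change in new-book price. Small values mean weak substitutes.

def cross_price_elasticity(pct_change_used_qty, pct_change_new_price):
    return pct_change_used_qty / pct_change_new_price

# A 10% rise in new-book prices lifting used sales by, say, 0.9%:
e = cross_price_elasticity(0.9, 10.0)
print(e < 0.1)  # True -- far below 1, so used books barely substitute for new
```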

One plausible explanation of this finding is that there are two distinct types of buyers: some purchase only new books, while others are quite happy to buy used books. As a result, the used market does not have a big impact in terms of lost sales in the new market.

Moreover, the presence of lower-priced books on the Amazon Web site, Mr. Bezos has noted, may lead customers to "visit our site more frequently, which in turn leads to higher sales of new books." The data appear to support Mr. Bezos on this point.

Applying the authors' estimate of the displaced sales effect to Amazon's sales, it appears that only about 16 percent of the used book sales directly cannibalized new book sales, suggesting that Amazon's used-book market added $63.2 million to its profits.

Furthermore, consumers greatly benefit from this market: the study's authors estimate that consumers gain about $67.6 million. Adding in Amazon's profits and subtracting out the $45.3 million of losses to authors and publishers leaves a net gain of $85.5 million.
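Putting the article's estimates together in one place (figures in millions of dollars, exactly as quoted above):

```python
# The used-book welfare accounting from the two paragraphs above.

amazon_profit_gain = 63.2  # added Amazon profit from the used-book market
consumer_surplus   = 67.6  # estimated gain to consumers
publisher_losses   = 45.3  # lost sales borne by authors and publishers

net_gain = amazon_profit_gain + consumer_surplus - publisher_losses
print(f"Net welfare gain: ${net_gain:.1f} million")  # $85.5 million
```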

All in all, it looks like the used book market creates a lot more value than it destroys.

Hal R. Varian is a professor of business, economics and information management at the University of California, Berkeley.

Copyright 2005 The New York Times Company

A managerial legend comments on health care.

We should be so fortunate as to have more views from the outside.


Efficiency in the Health Care Industries
A View From the Outside

Andrew S. Grove, PhD

JAMA. 2005;294:490-492.

The health science/health care industry and the microchip industry are similar in some important ways: both are populated by extremely dedicated and well-trained individuals, both are based on science, and both are striving to put to use the results of this science. But there is a major difference between them, with a wide disparity in the efficiency with which results are developed and then turned into widely available products and services.

To be sure, there are additional fundamental differences between the 2 industries. One industry deals with the well-defined world of silicon, the other with living human beings. Humans are incredibly complex biological systems, and working with them has to be subject to safety, legal, and ethical concerns. Nevertheless, it is helpful to mine this comparison for every measure of learning that can be found.

First, there are important differences between health care and microchip industries in terms of research efficiency. This year marks the 40th anniversary of a construct widely known as Moore’s Law, which predicts that the number of transistors that can be practically included on a microchip doubles every year. This law has been a guiding metric of the rate of technology development.1 According to this metric, the microchip industry has reached a state in which microchips containing many millions of transistors are shipped to the worldwide electronics industry in quantities that are measured in the billions per month.
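For a sense of the scale this doubling implies, here is a back-of-the-envelope sketch (my illustration of the exponential rule, not Intel's actual figures):

```python
# Exponential growth under a Moore's-Law-style doubling rule:
# count after `years` years, doubling once every `period` years.

def transistors(n0, years, period=1.0):
    return n0 * 2 ** (years / period)

print(transistors(1, 20))         # 1048576.0 -- ~a million-fold in 20 annual doublings
print(transistors(1, 40) > 1e12)  # True -- forty doublings pass a trillion
```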

By contrast, a Fortune magazine article suggested that the rate of progress in the "war on cancer" during the same 40 years was slow.2 The dominant cause for this discrepancy appears to lie in the disparate rates of knowledge turns between the 2 industries. Knowledge turns are indicators of the time it takes for an experiment to proceed from hypothesis to results and then lead to a new hypothesis and a new result.

The importance of rapid knowledge turns is widely recognized in the microchip industry. Techniques for early evaluation are designed and implemented throughout the development process. For example, simple electronic structures, called test chips, are incorporated alongside every complex experiment. The test chips are monitored as an experiment progresses. If they show negative results, the experiment is stopped, the information is recorded, and a new experiment is started.

This concept is also well known in the health sciences. It is embodied in the practice of futility studies, which are designed to eliminate drugs without promise. A recent example of the use of futility studies for this purpose is the exercise of narrowing the list of putative neuroprotective agents before launching a major randomized clinical trial.3

The difference is this: whereas the surrogate "end point" in the case of microchip development—the test chip failure—is well defined, its equivalent in the health sciences is usually not. Most clinical trials fall back on an end point that compares the extent by which a new drug or therapy extends life as compared with the current standard treatment. Reaching this end point usually takes a long time; thus, knowledge turns are slow. In many instances, a scientist’s career can continue only through 2 or 3 such turns. The result is wide-scale experimentation with animal models of dubious relevance, whose merit principally lies in their short lifespan. If reliable biomarkers existed that track the progression of disease, their impact on knowledge turns and consequently on the speed of development of treatments and drugs could be dramatic.

Even though such biomarkers could have a profound effect on medical research efficiency, biomarker development efforts seem far too low. Although precise numbers are difficult to come by, in my estimation, in the microchip industry, research into development, test, and evaluation methods represents about 10% of total research and development budgets. This 10% is taken off the top, resulting in less actual product development than the engineers, marketers, or business managers would like. But an understanding that this approach will lead to more rapid knowledge turns protects this allocation from the insatiable appetite of the business. The National Institutes of Health (NIH) budget is about $28 billion a year.4 It seems unlikely that anywhere near 10%—$2.8 billion—is spent on biomarker development.

A second difference between the microchip and health science industries is the rate at which hard-fought scientific results are "brought to market"—produced in volume in the case of microchips or translated into clinical use in the case of medicine. A key factor in accelerating the movement of discoveries from the research laboratory to marketplace (or from bench to bedside) is the nature of the facilities in which translational work is performed. The world of business has many stories of failures of organizational designs that impede technology transfer. The classical research laboratory, isolated and protected from the chaos and business-driven urgencies of production units, often led to disappointing results. For example, when Intel started, the leadership resolved to operate without the traditional separation of development from production, which worked remarkably well for quite some time. Developers had to compete for resources with the business-driven needs of production, but their efforts were more than compensated by the ease with which new technology, developed on the production line, could be made production worthy.

Today, an evolution of this resource-sharing principle continues in the microchip industry. Dedicated developmental factory units are designed from the ground up with the aim of eventually turning them into production units. They are overbuilt for the needs of development, but once development is completed, the facility is filled with equipment and people and transformed into a production unit in a matter of months. Although overbuilding for the development phase costs more initially, the savings in efficiency of moving products to production more than make up for this initial outlay. Medical facilities are designed for a variety of purposes, ranging from outpatient clinics to surgical centers, from general hospitals to tertiary hospitals. There is room for a translational hospital designed from the ground up with the mission of speeding new developments toward use in general hospitals. These hospitals would be flexible, equipped with extra monitoring capability, ready to deal with emergencies—all extra costs, but likely to be made up by the resulting increase in translational efficiency. Some examples exist, such as the NIH Clinical Center. Some cancer centers have adopted changes in hospital design that are steps in this direction. However, much more needs to be done before these designs are evaluated and an optimal approach is adopted and proliferated throughout the health care industry.

When it comes to operational efficiency, nothing illustrates the chasm between the 2 industries better than a comparison of the rate of implementation of electronic medical records with the rate of growth of electronic commerce (e-commerce). Common estimates suggest that no more than 15% to 20% of US medical institutions use any form of electronic records systems.5 By contrast, during the last 10 years, more than $20 trillion worth of goods and services have been bought and sold over the Internet (A. Bartels, written communication, June 2005).

e-Commerce started in the era of mainframe computers. It required specialized software, created and owned by the participants (so-called proprietary software). To link buyer with seller, each had to have the same software. The software was expensive and difficult to modify and maintain. Consequently, the use of e-commerce was limited to a few large companies.

The Internet changed all that. Computing became standardized, driven by the volumes of substantially identical personal computers; interconnection standards were defined and implemented everywhere. A virtuous cycle evolved: standards begot large numbers of users, and the increasing numbers of users reinforced the standards. It was easy to become part of an electronic marketplace because it no longer required the installation of proprietary software and equipment.

The early results were pedestrian: orders taken by telephone, manual data entry and reentry, and the use of faxes were reduced. But the benefits were spectacular. Costs and error rates plunged. Small- and medium-sized companies rushed to join the electronic marketplace, necessitating the development of a standardized software code that would translate information from one company’s system to that of another, the computing version of the Rosetta stone.

Although the computer industry is fairly fragmented,6 the health care industry is even more so. Like the computer industry, health care is a largely horizontally organized industry, with the horizontal layers representing patients, payers, physicians, and hospitals, as well as pharmaceutical and medical device companies. Standard ways of interconnecting all these constituencies are crucial. The good news is that the desire to increase internal productivity has led to at least partial deployment of information technology within the companies of many of the participants. Further good news is that the physical means of interconnecting the many participants already exists in the form of the Internet.

The bad news is that with the exception of a few, large, vertically integrated health care organizations, in which participants from several layers are contained in 1 organization (as is the case with the Veterans Affairs Administration and Kaiser Permanente), the benefits of electronic information exchange are not necessarily realized by the participants in proportion to their own investment.7 The industry faces what is called in game theory the "prisoners' dilemma": all members have to act for any one member to enjoy the benefit of action.

Such collective action often requires external stimulus. The year 2000 problem (ie, "Y2K") was an example of such a stimulus, causing the near-simultaneous upgrade of the worldwide computing and communications infrastructure. Although its ostensible benefit was the avoidance of a digital calamity at the turn of the century, its greatest benefit was in readying thousands of commercial organizations for the age of the Internet and e-commerce.

Even though the task facing the health care industry in developing and deploying the crucial "Rosetta code" is much smaller than the task of getting ready for 2000 was, external impetus is still needed to catalyze serious action. The National Health Information Infrastructure Initiative8 demonstrates some desire to encourage progress along these lines.

However, what is needed to cause the industry to act is customer demand. The largest customer—approaching half of total health care spending9—is the Medicare system. It seems that the entire health care industry would benefit if Medicare mandated the adoption of a Rosetta code for the health care industry before institutions were granted permission to participate in Medicare business.

There are signs that individual consumers may be taking matters into their own hands. The proliferation of companies providing personal health record services10 is an indication of such a movement. This phenomenon has all the makings of becoming a disruptive technology.11 Disruptive technologies, usually initiated by small businesses that are new to the industry in question, can force widespread defensive actions by the much larger industry incumbents. In this case, inadequate response by the incumbents could lead to some of the emerging providers of personal health record services becoming the owners of the customer relationship—a development of considerable strategic significance to all such businesses.

The health care industry in the United States represents 15% of the gross domestic product,12 and bearing its cost is a heavy burden on corporations and individuals alike. The mandate for increasing its efficiency—in research, translation, and operations—is clear. History shows that whatever technology can do, it will do.

If not here, where? If not now, when?


Corresponding Author: Andrew S. Grove, PhD, Intel Corporation, 2200 Mission College Blvd, Santa Clara, CA 95054.

Financial Disclosures: Intel Corporation manufactures microprocessors and other types of microchips that can be used in health care information technology.

Author Affiliation: Dr Grove is former chairman of the board of Intel Corporation, Santa Clara, Calif.


1. Moore G. Cramming more components onto integrated circuits. Electronics Magazine. April 19, 1965:114-117.

2. Leaf C. The war on cancer. Fortune. March 2004:76-96.

3. Elm JJ, Goetz CG, Ravina B, et al. A responsive outcome for Parkinson's disease neuroprotection futility studies. Ann Neurol. 2005;57:197-203.

4. American Association for the Advancement of Science. NIH "soft landing" turns hard in 2005: R&D funding update on R&D in the FY 2005 NIH budget. February 20, 2004. Accessed June 20, 2005.

5. Manhattan Research. Taking the Pulse v 5.0: Physician and Emerging Information Technologies. New York, NY: Manhattan Research; April 12, 2005. Accessed June 20, 2005.

6. Grove AS. Only the Paranoid Survive. New York, NY: Doubleday; 1996:42.

7. Pearl R, Meza P, Burgelman RA. Better Medicine Through Information Technology. Stanford, Calif: Stanford Graduate School of Business; 2004. Case study SM136.

8. Stead WW, Kelly BJ, Kolodner RM. Achievable steps toward building a national health information infrastructure in the United States. J Am Med Inform Assoc. 2005;12:113-120.

9. Cowan C, Catlin A, Smith C, Sensenig A. National health expenditures, 2002. Health Care Financ Rev. 2004;25:143-166.

10. Markle Foundation. The Personal Health Working Group final report. Connecting for health: a public-private collaboration [appendix 2]. New York, NY: Markle Foundation; July 2003.

11. Christensen CM. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, Mass: Harvard Business School Press; 1997.

12. 2004 CMS Statistics. Baltimore, Md: US Dept of Health and Human Services, Centers for Medicare & Medicaid Services, Office of Research, Development and Information; 2004. CMS Pub No. 03455.

Betting on the dark side.

Interestingly, Warren Buffett ends up in the Vice Fund. Someone else is paying attention to exactly where he makes his investments.


God vs. Satan
Who's the better investor?

By Daniel Gross

The market is amoral and agnostic. It has no interest in your virtues or vices or God, except insofar as they help make money. But just as morality and faith have taken a larger role in all of American life, so are they also playing an increasingly prominent role in investing. For the secularly progressive, there are socially conscious mutual funds. Jews may be partial to Israel bonds. Thrivent Financial for Lutherans, which sounds like the setup for a Garrison Keillor one-liner, offers more than 20 mutual funds. Putting money to work in ways compatible with your overall worldview is clearly appealing to growing numbers of investors.

And this has produced a very odd market anomaly: Both virtue and vice seem to be increasingly effective investing strategies. God and Satan are both winning on Wall Street. In recent years, people who have invested in a particular brand of virtue—the Ave Maria Catholic Values Fund—and people who have invested in a particular brand of vice—the Vice Fund—have both handily beaten the market.

The Catholic Values Fund, with about $250 million in assets, is one of four funds offered by the Ave Maria Funds, which invests from a Catholic perspective. And, no, it doesn't mean finding companies that make communion wafers. First, it looks for stocks that meet financial criteria set by the portfolio managers at Schwartz Investment Counsel. It uses a "proprietary screening process" created by a Catholic advisory board, whose members include former baseball commissioner Bowie Kuhn, Domino's Pizza founder Thomas Monaghan, American Enterprise Institute theocon Michael Novak, and Phyllis Schlafly. That screen is pretty simple. They eliminate companies connected with abortion or pornography, or those whose policies "undermine the Sacrament of marriage," by offering benefits to domestic partners of employees.

Just as you don't have to be Jewish to like bagels and lox, you don't have to be Catholic to like the performance of Ave Maria's funds, which also include the Growth Fund and the Rising Dividend Fund. The flagship Catholic Values Fund has picked stocks very well. Here's a chart of the Catholic Values Fund against the S&P 500 since its inception in January 2002. And the 2004 annual report shows the holdings to be rather catholic. The managers have chosen companies in an array of sectors: energy and mining, healthcare, and consumer products. Some of the holdings may raise some questions. It owns insurer AIG, which has recently been revealed to be a corporate sinner. Among its largest holdings is defense contractor General Dynamics. (Presumably, it only makes weapon systems used in just wars.) And the bond fund has about half its assets in the bonds of a U.S. government that maintains a death penalty and permits abortion.

By contrast, the much-smaller Vice Fund actively embraces companies that profit from human fallibility. And it has profited handsomely from doing so, crowing that it ranks in the top 1 percent of funds in its category. Here's a chart of the Vice Fund against the S&P 500 since its inception in 2002.

As of June 30, the Vice Fund's $42.5 million in holdings were divided among gambling, booze, and defense stocks (about 25 percent each), tobacco stocks (15 percent), and a bunch of randoms. It's easy to see why Playboy Enterprises and Rick's Cabaret International Inc. are here. But Berkshire Hathaway, run by the abstemious Warren Buffett? And Microsoft? Maybe Warren Buffett and Bill Gates gamble when they play bridge.

How is it that both funds have walloped the S&P 500 and the vast majority of other mutual funds in recent years? Is it that investing in vice, or virtue—as defined by the Catholic Advisory Board—is a better investing discipline than looking at P/E ratios and charts? Perhaps. The folks managing these funds have clearly been good and judicious stockpickers.

But both funds have been well-positioned to outperform the market in recent years. Think about it. Aside from the ever-growing gambling market, vice gets you defense (supported by huge military budgets) and noncyclical consumer goods like beer and tobacco—all three of which were laggards in the go-go tech- and financial-services-dominated 1990s, but which are booming now.

The Catholic Values Fund is benefiting in a different way. Consider which companies and sectors it excludes because of its screening criteria. In 2003, 40 percent of the Fortune 500 offered benefits to domestic partners, according to the Human Rights Campaign. So, for the Values Fund, there is no Microsoft, Citigroup, General Electric, Cisco, or Dell—none of which has done particularly well since 2000. And the fund's morality has helped it avoid altogether some sectors that have done poorly in recent years—entertainment, media, newspapers, technology, advertising, and pharmaceuticals. Meanwhile, its screening criteria made it more likely to look at less-progressive companies in thriving sectors such as raw materials and energy. Following either the Vice Fund or the Catholic Values Fund in the 1990s would have been a mistake. And given the trend toward providing benefits for domestic partners, the Catholic Values Fund will find itself with a smaller universe of stocks to pick from with each passing year.

Now let's answer the two questions I am sure you have been asking yourself. First, do the funds share any holdings? In other words, what vice is also a virtue? The answer, of course, is war. Both the Catholic Values Fund and the Vice Fund own General Dynamics. And Ave Maria's bond fund owns bonds of United Technologies, which is also in the Vice Fund.

Second, in the competition between virtue and vice, who's winning? Alas, as this chart shows, vice has been winning by a small margin over the past few years. In the past year, its margin has been impressive.

Daniel Gross writes Slate's "Moneybox" column.


Quantitative self-absorption.

Imagine living one's life for the sole purpose of self-monitoring.


July 28, 2005
Data Obsessed


My husband, son and I were about to go for a bike ride. We had heart rate monitors. (We got them years ago.) And we had bicycle computers, little devices that track your speed, average speed, mileage, time and revolutions a minute. But now we were trying out the latest gadget for the data-driven workout - power meters. These used sensors in the hub of the rear wheel to calculate how many watts of power we put out. They also gave us all the other data we might want - heart rate, maximum heart rate, average heart rate, revolutions a minute, time, speed and average speed. And after a ride you can upload the data to a computer and see color-coded graphs of your performance.

It's mesmerizing to see your power output. Stephen Madden, the editor in chief of Bicycling magazine, who assiduously uses a power meter himself, warned us not to get too obsessed.

"Don't be a watt weenie," Mr. Madden said, explaining that some people get so focused on their wattage that all the fun goes out of riding.

Too obsessed? Is it even possible? The world of exercise is increasingly shaped by data: bicyclists, who have perhaps the most tools to monitor themselves; runners, who time themselves, monitor their heart rates and sometimes use global positioning sensors to measure their speed and distance; and moderate exercisers, who clip pedometers onto their belts to count the steps they take each day.

Many tools are very new. Heart rate monitors have been around for years, but have only recently been linked to GPS systems or power meters. And now companies are offering Web-based tools to make it even easier for people to track their efforts.

Just six months ago Nike introduced an updated Web site allowing runners to get a customized training program, a schedule and a way to document every variable in their exercise sessions - heart rate, distance, pace (with a pace calculator), weather and route. The company said 11,000 runners use it each week.

The year-old MotionBased Technologies Web site provides athletes and ordinary exercisers a place to store and analyze data from GPS-monitored workouts. (Minimal data storage is free and more elaborate storage costs $11.94 a month.) By word of mouth the company has attracted 10,000 users, said Clark Weber, one of its founders.

And Carmichael Training Systems, a six-year-old online company started by Chris Carmichael, Lance Armstrong's trainer, has signed up more than 10,000 cyclists and other athletes. Subscribers transmit their workout data, and coaches prescribe training programs. The cost is $39 to $499 a month, depending on how much coaching is wanted and how much data is transmitted. At least a quarter of subscribers opt for more elaborate data monitoring, said Kevin Dessart, the company's marketing director.

What you do with workout data depends on your goals.

Riders in the Tour de France use power meters to assess their fitness as they train for a peak performance during those three weeks in July when the race traces its way around France.

As for wannabe athletes, said Michael Berry, an exercise physiologist at Wake Forest University, "if you want to start improving your fitness level and being competitive, then you have to target how intense your exercise is." That needs data.

He elaborated: "The only way to improve is to work harder. And the best way to know if you're working harder is to be able to monitor your work, whether it be via a power meter or a heart rate monitor."

Some, like Grant McAllister, 35, a professor of German at Wake Forest, said that monitoring changed his life.

"I got my first heart rate monitor in 1997," he said. An amateur bicycle racer, he used it to see how much effort he was putting out. The higher the heart rate, the greater the effort. Like many others, he found that his heart rate monitor became a gateway to a data obsession.

Mr. McAllister's next device was a bicycle computer. "I feel completely naked if I don't have some type of computer on my bike now," he said. Two years ago he saved his money and bought a $700 power meter, the cheapest he could find. Now he uses it on every ride, sends his data to Carmichael Training Systems and is a transformed rider.

"It's given purpose to my training," Mr. McAllister said.

Matt Canter, an owner of Ken's Bike Shop in Winston-Salem, N.C., who competes against Mr. McAllister in local races, said Mr. McAllister used to be a good but not an outstanding racer. Now that he has been training with a power meter, Mr. Canter said, "He just kills everybody."

Others monitor their performance to motivate themselves and burn more calories.

Steven Guy, 52, a business consultant in Pottstown, Pa., took up exercise last year, when he joined a weight-loss study that had subjects follow a diet and work out at least moderately most days. Mr. Guy wanted more. He wanted to use exercise to speed his weight loss and to keep the pounds off. So he decided to bike, run and jump rope.

Last Valentine's Day his wife bought him a heart rate monitor, and he began keeping track of his every effort. A heart rate monitor is a narrow elastic band with a plastic sensor in front. You strap the band around your chest, and the sensor transmits electrical signals to a special watch on your wrist that displays your heart rate in beats a minute.

Mr. Guy's monitor came with a chart telling what a person's maximum heart rate should be. Maximum heart rates normally fall by about a beat a minute each year. No one knows why. The formula used for the charts says a person's maximum heart rate is 220 minus age, which would make Mr. Guy's maximum 168 beats a minute.

But exercise physiologists have long known that such charts are not accurate for some exercisers, and Mr. Guy's own experience shows the chart does not work for him. He has raised his heart rate to 175 during a burst of effort.
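The chart's rule of thumb is simple arithmetic, and it can be sketched in a few lines of Python. The 65 to 85 percent zone bounds below are an illustrative assumption of mine, not figures from the article:

```python
def estimated_max_hr(age):
    """The common age-based rule of thumb: 220 minus age.

    As the article notes, this is only a population estimate; individual
    maximums can differ substantially (Mr. Guy's true maximum exceeds it).
    """
    return 220 - age

def moderate_zone(age, low=0.65, high=0.85):
    """An illustrative moderate-intensity zone, as fractions of the estimate.

    The 0.65-0.85 bounds are assumed for the example, not from the article.
    """
    mhr = estimated_max_hr(age)
    return round(mhr * low), round(mhr * high)

print(estimated_max_hr(52))  # 168, the figure Mr. Guy's chart gave him
print(moderate_zone(52))     # (109, 143)
```

The point of Dr. Blair's comment is that the first function is only a starting guess; two exercisers of the same age can have true maximums 25 beats apart.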

"There is large variability in the maximum heart rate," said Steven Blair, the president and chief executive of the Cooper Institute, a nonprofit research foundation that studies exercise and fitness. "For one 35-year-old woman 170 might be her maximum heart rate. Her friend the same age might be able to reach 195. That's just genetic variation."

Mr. Guy, however, had a nagging worry: Is it dangerous to push your heart rate too high?

"Nope," Dr. Blair said. "You can't hurt a healthy heart with exercise."

If you get your heart rate close to its true maximum, you feel so depleted that you automatically slow down. Just as you can't hold your breath until you die, you can't drive your heart rate up until your heart gives out.

Now, a year after he began his program, Mr. Guy has lost 25 pounds. Best of all, he said, he ran a 10-kilometer race on the Fourth of July, the first time in 15 years he was able to race that distance. The secret, he said, was monitoring and recording his training.

On the other hand, said Edward Coyle, an exercise physiologist at the University of Texas at Austin, the kind of person who keeps track of mileage and effort with a GPS-based system or who obsessively straps on a heart rate monitor for every exercise session is frequently the kind of person who can do more harm than good with monitoring. Too often, he said, people look at their heart rate and want to get it higher, higher, higher, or they want their power meter readings to soar day after day.

"For people who are motivated and data-driven, they believe that more is better, but that's not the case," Dr. Coyle said.

To improve or even to continue exercising injury-free people must deliberately slow down on days after hard workouts.

"One of the most important ways a heart rate monitor or a power meter is used is when you need to take a day easy," Dr. Coyle explained. And you say to yourself that "you won't let your heart rate go above 130, no matter how much you want to pick it up."

Then there are people who have no race or training program in mind but only want to reassure themselves that they are doing the recommended amount of moderate exercise. They simply want to get adequate physical activity.

That describes Ingrid Woods, 58, who lives in Mill Valley, Calif. She puts on a pedometer every morning and takes it off at night to see how many steps she has taken. She got the device about a year ago "just out of curiosity," she said, when she heard that people should take at least 10,000 steps a day to be fit. She likes to keep checking on herself, she likes to see the data, and by now it has become a habit. "This is just what I do," she said.

Data-driven workouts are not for everyone. Not everyone wants to set a new personal best or train to see how fast she can be. And it is not necessary to monitor and record steps, heart rate, speed and power if your goal is ordinary fitness, Dr. Blair said. Some, in fact, find monitoring abhorrent.

Dr. Clifford Rosen, 55, a physician at the Maine Center for Osteoporosis Research and Education, runs every day but does not use a heart rate monitor and does not keep training schedules or logs. As a scientist, he said, he has enough data in his work life.

"I measure everything else," Dr. Rosen said. "I tell myself this is one thing I don't have to measure."

And, he said, he is none the worse for it.

"I've run five marathons," Dr. Rosen said. "I'm in really good shape."

After our ride with the power meters, we knew that we loved them. It had been a hot day with drenching humidity, conditions that make the heart rate soar. But power output is different. Power is power, no matter what the weather. And power never lies. The meters are expensive, though, my husband said. Should we really buy them?

Of course, I told him. "Think of the pleasure."

Clipped to your belt or waistband, the device counts your steps. If you calibrate the length of your stride, it can also gauge your distance. This is the Sportline 345, $29.95.
PROS Pedometers, which cost $10 to $40, are routinely used in weight-loss programs. (People are advised to clock at least 10,000 steps a day.)
CONS There is no way to review a previous workout with some pedometers, like this one. So you need to keep a separate log.


By pinpointing your location with a Global Positioning System sensor, these devices measure speed, distance and pace during a workout. A heart rate monitor is usually included. The Garmin Forerunner 301, $324.98, lets you download your exercise history to a computer (PC only).
PROS GPS is a convenient way to measure speed and distance on foot or on a bike.
CONS The watch can be bulky. Clouds can diminish accuracy.


Cyclists use power meters to record power output, pedal revolutions, speed, time and distance. The meter here, the CycleOps PowerTap Pro, $899.99, is installed on the bike's back wheel, and the data screen is attached to the handlebars. Most meters let you upload data to a computer.
PROS Power meters simply measure the power you generate.
CONS They cost $700 to more than $3,400.


These monitors, which cost $60 to more than $400, track heartbeats a minute by transmitting electrical signals from a sensor on a chest strap to a special watch. This monitor, the Polar F11, $159.95, can create a customized training program with a heart rate to aim for during each workout.
PROS They can be used during any form of exercise.
CONS Chest straps must be wet to work properly.

The value of conflict.

The key to greatness is mutual challenge.


The Birth of Google

Larry thought Sergey was arrogant. Sergey thought Larry was obnoxious. But their obsession with backlinks just might be the start of something big.

By John Battelle

It began with an argument. When he first met Larry Page in the summer of 1995, Sergey Brin was a second-year grad student in the computer science department at Stanford University. Gregarious by nature, Brin had volunteered as a guide of sorts for potential first-years - students who had been admitted, but were still deciding whether to attend. His duties included showing recruits the campus and leading a tour of nearby San Francisco. Page, an engineering major from the University of Michigan, ended up in Brin's group.

It was hardly love at first sight. Walking up and down the city's hills that day, the two clashed incessantly, debating, among other things, the value of various approaches to urban planning. "Sergey is pretty social; he likes meeting people," Page recalls, contrasting that quality with his own reticence. "I thought he was pretty obnoxious. He had really strong opinions about things, and I guess I did, too."

"We both found each other obnoxious," Brin counters when I tell him of Page's response. "But we say it a little bit jokingly. Obviously we spent a lot of time talking to each other, so there was something there. We had a kind of bantering thing going." Page and Brin may have clashed, but they were clearly drawn together - two swords sharpening one another.

When Page showed up at Stanford a few months later, he selected human-computer interaction pioneer Terry Winograd as his adviser. Soon thereafter he began searching for a topic for his doctoral thesis. It was an important decision. As Page had learned from his father, a computer science professor at Michigan State, a dissertation can frame one's entire academic career. He kicked around 10 or so intriguing ideas, but found himself attracted to the burgeoning World Wide Web.

Page didn't start out looking for a better way to search the Web. Despite the fact that Stanford alumni were getting rich founding Internet companies, Page found the Web interesting primarily for its mathematical characteristics. Each computer was a node, and each link on a Web page was a connection between nodes - a classic graph structure. "Computer scientists love graphs," Page tells me. The World Wide Web, Page theorized, may have been the largest graph ever created, and it was growing at a breakneck pace. Many useful insights lurked in its vertices, awaiting discovery by inquiring graduate students. Winograd agreed, and Page set about pondering the link structure of the Web.

Citations and Back Rubs

It proved a productive course of study. Page noticed that while it was trivial to follow links from one page to another, it was nontrivial to discover links back. In other words, when you looked at a Web page, you had no idea what pages were linking back to it. This bothered Page. He thought it would be very useful to know who was linking to whom.

Why? To fully understand the answer to that question, a minor detour into the world of academic publishing is in order. For professors - particularly those in the hard sciences like mathematics and chemistry - nothing is as important as getting published. Except, perhaps, being cited.

Academics build their papers on a carefully constructed foundation of citation: Each paper reaches a conclusion by citing previously published papers as proof points that advance the author's argument. Papers are judged not only on their original thinking, but also on the number of papers they cite, the number of papers that subsequently cite them back, and the perceived importance of each citation. Citations are so important that there's even a branch of science devoted to their study: bibliometrics.

Fair enough. So what's the point? Well, it was Tim Berners-Lee's desire to improve this system that led him to create the World Wide Web. And it was Larry Page and Sergey Brin's attempts to reverse engineer Berners-Lee's World Wide Web that led to Google. The needle that threads these efforts together is citation - the practice of pointing to other people's work in order to build up your own.

Which brings us back to the original research Page did on such backlinks, a project he came to call BackRub.

He reasoned that the entire Web was loosely based on the premise of citation - after all, what is a link but a citation? If he could divine a method to count and qualify each backlink on the Web, as Page puts it "the Web would become a more valuable place."

At the time Page conceived of BackRub, the Web comprised an estimated 10 million documents, with an untold number of links between them. The computing resources required to crawl such a beast were well beyond the usual bounds of a student project. Unaware of exactly what he was getting into, Page began building out his crawler.

The idea's complexity and scale lured Brin to the job. A polymath who had jumped from project to project without settling on a thesis topic, he found the premise behind BackRub fascinating. "I talked to lots of research groups" around the school, Brin recalls, "and this was the most exciting project, both because it tackled the Web, which represents human knowledge, and because I liked Larry."

The Audacity of Rank

In March 1996, Page pointed his crawler at just one page - his homepage at Stanford - and let it loose. The crawler worked outward from there.

Crawling the entire Web to discover the sum of its links is a major undertaking, but simple crawling was not where BackRub's true innovation lay. Page was naturally aware of the concept of ranking in academic publishing, and he theorized that the structure of the Web's graph would reveal not just who was linking to whom, but more critically, the importance of who linked to whom, based on various attributes of the site that was doing the linking. Inspired by citation analysis, Page realized that a raw count of links to a page would be a useful guide to that page's rank. He also saw that each link needed its own ranking, based on the link count of its originating page. But such an approach creates a difficult and recursive mathematical challenge - you not only have to count a particular page's links, you also have to count the links attached to the links. The math gets complicated rather quickly.

Fortunately, Page was now working with Brin, whose prodigious gifts in mathematics could be applied to the problem. Brin, the Russian-born son of a NASA scientist and a University of Maryland math professor, emigrated to the US with his family at the age of 6. By the time he was a middle schooler, Brin was a recognized math prodigy. He left high school a year early to go to UM. When he graduated, he immediately enrolled at Stanford, where his talents allowed him to goof off. The weather was so good, he told me, that he loaded up on nonacademic classes - sailing, swimming, scuba diving. He focused his intellectual energies on interesting projects rather than actual course work.

Together, Page and Brin created a ranking system that rewarded links that came from sources that were important and penalized those that did not. For example, many sites link to IBM. Those links might range from a business partner in the technology industry to a teenage programmer in suburban Illinois who just got a ThinkPad for Christmas. To a human observer, the business partner is a more important link in terms of IBM's place in the world. But how might an algorithm understand that fact?

Page and Brin's breakthrough was to create an algorithm - dubbed PageRank after Page - that manages to take into account both the number of links into a particular site and the number of links into each of the linking sites. This mirrored the rough approach of academic citation-counting. It worked. In the example above, let's assume that only a few sites linked to the teenager's site. Let's further assume the sites that link to the teenager's are similarly bereft of links. By contrast, thousands of sites link to Intel, and those sites, on average, also have thousands of sites linking to them. PageRank would rank the teen's site as less important than Intel's - at least in relation to IBM.

This is a simplified view, to be sure, and Page and Brin had to correct for any number of mathematical culs-de-sac, but the long and the short of it was this: More popular sites rose to the top of their annotation list, and less popular sites fell toward the bottom.
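The recursive calculation described above can be sketched with a short power-iteration loop. This is only an illustration of the idea, not Google's actual algorithm: the toy graph, the damping factor of 0.85, and the iteration count are all assumptions made for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank pages by repeatedly refining scores, so that a page's score
    depends on the scores of the pages linking to it.

    links maps each page to the list of pages it links out to.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outbound links shares its score evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A toy version of the IBM example: several sites point at the business
# partner, which links to "ibm"; one obscure page points at the
# teenager's site, which also links to "ibm".
web = {
    "ibm": [],
    "partner": ["ibm"],
    "teen": ["ibm"],
    "fan1": ["partner"], "fan2": ["partner"], "fan3": ["partner"],
    "obscure": ["teen"],
}
ranks = pagerank(web)
# The well-linked partner's page outranks the obscure teenager's page.
assert ranks["partner"] > ranks["teen"]
```

The recursion the article mentions is handled here by iteration: each pass feeds the previous pass's scores back in, and the scores settle toward a fixed point rather than being computed in one shot.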

As they fiddled with the results, Brin and Page realized their data might have implications for Internet search. In fact, the idea of applying BackRub's ranked page results to search was so natural that it didn't even occur to them that they had made the leap. As it was, BackRub already worked like a search engine - you gave it a URL, and it gave you a list of backlinks ranked by importance. "We realized that we had a querying tool," Page recalls. "It gave you a good overall ranking of pages and ordering of follow-up pages."

Page and Brin noticed that BackRub's results were superior to those from existing search engines like AltaVista and Excite, which often returned irrelevant listings. "They were looking only at text and not considering this other signal," Page recalls. That signal is now better known as PageRank. To test whether it worked well in a search application, Brin and Page hacked together a BackRub search tool. It searched only the words in page titles and applied PageRank to sort the results by relevance, but its results were so far superior to the usual search engines - which ranked mostly on keywords - that Page and Brin knew they were onto something big.

Not only was the engine good, but Page and Brin realized it would scale as the Web scaled. Because PageRank worked by analyzing links, the bigger the Web, the better the engine. That fact inspired the founders to name their new engine Google, after googol, the term for the numeral 1 followed by 100 zeroes. They released the first version of Google on the Stanford Web site in August 1996 - one year after they met.

Among a small set of Stanford insiders, Google was a hit. Energized, Brin and Page began improving the service, adding full-text search and more and more pages to the index. They quickly discovered that search engines require an extraordinary amount of computing resources. They didn't have the money to buy new computers, so they begged and borrowed Google into existence - a hard drive from the network lab, an idle CPU from the computer science loading docks. Using Page's dorm room as a machine lab, they fashioned a computational Frankenstein from spare parts, then jacked the whole thing into Stanford's broadband campus network. After filling Page's room with equipment, they converted Brin's dorm room into an office and programming center.

The project grew into something of a legend within the computer science department and campus network administration offices. At one point, the BackRub crawler consumed nearly half of Stanford's entire network bandwidth, an extraordinary fact considering that Stanford was one of the best-networked institutions on the planet. And in the fall of 1996 the project would regularly bring down Stanford's Internet connection.

"We're lucky there were a lot of forward-looking people at Stanford," Page recalls. "They didn't hassle us too much about the resources we were using."

A Company Emerges

As Brin and Page continued experimenting, BackRub and its Google implementation were generating buzz, both on the Stanford campus and within the cloistered world of academic Web research.

One person who had heard of Page and Brin's work was Cornell professor Jon Kleinberg, then researching bibliometrics and search technologies at IBM's Almaden center in San Jose. Kleinberg's hubs-and-authorities approach to ranking the Web is perhaps the second-most-famous approach to search after PageRank. In the summer of 1997, Kleinberg visited Page at Stanford to compare notes. Kleinberg had completed an early draft of his seminal paper, "Authoritative Sources," and Page showed him an early working version of Google. Kleinberg encouraged Page to publish an academic paper on PageRank.

Page told Kleinberg that he was wary of publishing. The reason? "He was concerned that someone might steal his ideas, and with PageRank, Page felt like he had the secret formula," Kleinberg told me. (Page and Brin eventually did publish.)

On the other hand, Page and Brin weren't sure they wanted to go through the travails of starting and running a company. During Page's first year at Stanford, his father died, and friends recall that Page viewed finishing his PhD as something of a tribute to him. Given his own academic upbringing, Brin, too, was reluctant to leave the program.

Brin remembers speaking with his adviser, who told him, "Look, if this Google thing pans out, then great. If not, you can return to graduate school and finish your thesis." He chuckles, then adds: "I said, 'Yeah, OK, why not? I'll just give it a try.'"

From The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture, copyright © by John Battelle, to be published in September by Portfolio, a member of Penguin Group (USA), Inc. Battelle was one of the founders of Wired.

Wednesday, July 27, 2005

No accident that RSS has gone from summary to syndication.

From the Harvard Business Review, circa 2000.



Syndication: The Emerging Model for Business in the Internet Era

There's no question that the Internet is overturning the old rules about competition and strategy. But what are the new rules? Many of them can be found in the concept of syndication, a way of doing business that has its origins in the entertainment world but is now expanding to define the structure of e-business. As companies enter syndication networks, they'll need to rethink their products, relationships, and even their core capabilities.

Business executives have a lot to learn from talk show host Jerry Springer - not about resolving conflicts through chair-throwing brawls but about syndication, the ideal way to conduct business in a networked, information-intensive economy.

Syndication involves the sale of the same good to many customers, who then integrate it with other offerings and redistribute it. The practice is routine in the world of entertainment. Production studios syndicate TV programs, such as the Jerry Springer Show, to broadcast networks and local stations. Cartoonists syndicate comic strips to newspapers and magazines. Columnists syndicate articles to various print and on-line outlets. Consumers of entertainment - the people watching the TV shows or reading the newspapers - are generally unaware of the complex, ever-shifting business relationships that play out behind the scenes. But without syndication, the American mass media as we know it would not exist.

Elsewhere in the business world, syndication has been rare. The fixed physical assets and slow-moving information of the industrial economy made it difficult, if not impossible, to create the kind of fluid networks that are essential to syndication. But with the rise of the information economy, that's changing. Flexible business networks are not only becoming possible, they're becoming essential. As a result, syndication is moving from the periphery of business to its center. It is emerging as the fundamental organizing principle for e-business.

Although few of the leading Internet companies use the term "syndication" to describe what they do, it often lies at the heart of their business models. Look at E*Trade. Like other on-line brokerages, E*Trade offers its customers a rich array of information, including financial news, stock quotes, charts, and research. It could develop all this content on its own, but that would be prohibitively expensive and would distract E*Trade from its core business: acquiring and building relationships with on-line customers. Instead, the company purchases most of its content from outside providers - Reuters and for news, Bridge Information Systems for quotes, for charts, and so on. These content providers also sell, or syndicate, the same information to many other brokerages. E*Trade distinguishes itself from its competitors not through the information it provides but through the way it packages and prices that information. Just like a television station, it is in the business of aggregating and distributing syndicated content as well as providing other in-house services such as trade execution.

On the Web, unlike in the physical world, syndication is not limited to the distribution of content. Commerce can also be syndicated. One company can, for example, syndicate a shopping-cart ordering and payment system to many e-tailers. Another company can syndicate a logistics platform. Another can syndicate fraud detection and credit-scoring algorithms. Another can syndicate human resource processes. Businesses themselves, in other words, can be created out of syndicated components. The much-discussed "virtual company" can become a reality.

Syndication is a radically different way of structuring business than anything that's come before. It requires entrepreneurs and executives to rethink their strategies and reshape their organizations, to change the way they interact with customers and partner with other entities, and to pioneer new models for collecting revenues and earning profits. Those that best understand the dynamics of syndication - that are able to position themselves in the most lucrative nodes of syndication networks - will be the ones that thrive in the Internet era.

The Three Syndication Roles

Traditionally, companies have connected with one another in simple, linear chains, running from raw-material producers to manufacturers to distributors to retailers. In syndication, the connections between companies proliferate. The network replaces the chain as the organizing model for business relationships. Within a syndication network, there are three roles that businesses can play. Originators create original content. Syndicators package that content for distribution, often integrating it with content from other originators. Distributors deliver the content to customers. A company can play one role in a syndication network, or it can play two or three roles simultaneously. It can also shift from one role to another over time.

Here's a simple example of a syndication network from the media business. Scott Adams, an originator, draws the popular Dilbert cartoon strip. He licenses it to a syndicator, United Features, which packages it with other comic strips and sells them to a variety of print publications. A newspaper, such as the Washington Post, acts as a distributor by printing the syndicated cartoons, together with articles, photographs, television listings, advertisements, and many other pieces of content, and delivering the entire package to the doorsteps of readers.

Now, let's look at how the syndication roles are emerging on the Internet:

Originators. The Internet broadens the originator category in two ways. It expands the scope of original content that can be syndicated, and it makes it easier for any company or individual to disseminate that content globally. Anything that can exist as information - from products and services to business processes to corporate brands - can be syndicated.

A good example of an Internet originator is Inktomi, a start-up that created a powerful Internet search engine using its proprietary technologies for connecting many inexpensive computers to act as a virtual supercomputer. By the time Inktomi was ready to enter the market, other companies such as Yahoo! and Excite already had well-established search engine brands. Inktomi's executives knew that it would be difficult for a new competitor to take them on directly. But the executives also saw that many other large Web sites wanted to offer search engine functionality but didn't have the technology. Rather than sell itself to any one of these companies, Inktomi decided to syndicate its application to all of them. Web sites are able to customize the Inktomi service for their users, offer it under their own brands, and combine it with other functions and content that they develop on their own or purchase from other originators.

Inktomi generates revenues through per-query charges and by sharing the dollars its customers generate from selling banner advertisements on their search pages. The company has applied the same business model and core technologies to other services such as content caching and comparison shopping. Last quarter, it answered 3.4 billion search queries, its quarterly revenues hit $36 million, and its market capitalization surpassed $10 billion.
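An originator's revenue model of this kind reduces to simple arithmetic. The sketch below is a hypothetical illustration of a per-query-plus-ad-share arrangement; the rates and volumes are invented, not Inktomi's actual terms.

```python
# Sketch of an originator's syndication revenue: per-query charges plus a
# share of the ad dollars its distribution partners earn. All figures are
# hypothetical illustrations.

def originator_revenue(queries, per_query_fee, partner_ad_revenue, ad_share):
    """Revenue = query volume x fee, plus a cut of partners' ad sales."""
    return queries * per_query_fee + partner_ad_revenue * ad_share

# A partner site sends 10 million queries at $0.002 each and sells
# $500,000 of banner ads, of which the originator keeps 20%.
revenue = originator_revenue(
    queries=10_000_000,
    per_query_fee=0.002,
    partner_ad_revenue=500_000,
    ad_share=0.20,
)
print(revenue)  # 120000.0
```

The point of the structure is that both terms scale with the partner's success, so the originator's interests stay aligned with its distributors'.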

Syndicators. By bringing together content from a variety of sources and making it available through standard formats and contracts, syndicators free distributors from having to find and negotiate with dozens or hundreds of different originators to gather the content they want. In other words, syndicators are a form of infomediary, collecting and packaging digital information in a way that adds value to it. In the physical world, stand-alone syndicators are rare outside the entertainment field, but this business model is becoming increasingly prominent on the Net.

Screaming Media is a leading content syndicator. It collects articles in electronic form from some 400 originators and, using a combination of automated filtering software and human editors, categorizes each article as it flows through its servers. It then delivers to its customers - currently, more than 500 different sites - only the content relevant to their target audience. A site catering to auto-racing enthusiasts, for example, would receive a steady stream of up-to-date racing news and features. The site could license content directly from originators such as the Associated Press, but the vast majority of that content would be irrelevant to its audience. Screaming Media charges monthly fees based on the volume of filtered content its customers desire. It pays some of that money back to the content originators as royalties, allowing everyone involved to benefit from the transaction.
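The filter-then-settle flow described above can be sketched in a few lines. This is a minimal illustration, not Screaming Media's actual system: the topic tags, per-article fee, and royalty rate are all invented.

```python
# Minimal sketch of a content syndicator: route only relevant articles to
# each subscriber, charge by volume, and pay royalties back to originators.
# Topics, fees, and the royalty rate are hypothetical.

def route_articles(articles, subscriber_topics):
    """Deliver only the articles relevant to a subscriber's audience."""
    return [a for a in articles if a["topic"] in subscriber_topics]

def monthly_bill(delivered, fee_per_article, royalty_rate):
    """Charge by volume of filtered content; return (gross fee, royalties owed)."""
    gross = len(delivered) * fee_per_article
    return gross, gross * royalty_rate

articles = [
    {"id": 1, "topic": "auto-racing", "source": "AP"},
    {"id": 2, "topic": "finance", "source": "AP"},
    {"id": 3, "topic": "auto-racing", "source": "Reuters"},
]
delivered = route_articles(articles, subscriber_topics={"auto-racing"})
gross, royalties = monthly_bill(delivered, fee_per_article=2.50, royalty_rate=0.30)
print(len(delivered), gross, royalties)  # 2 5.0 1.5
```

The filtering step is where the syndicator adds value: the subscriber pays only for the two relevant articles, and the originators still collect royalties on each one delivered.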

LinkShare is another on-line syndicator, but unlike Screaming Media, it syndicates commerce rather than traditional content. More than 400 online retailers have contracted with LinkShare to administer their affiliate programs-programs that enable other sites to provide links to the e-tailers in return for a small cut of any sales those links generate. LinkShare aggregates all the programs on its own site, providing an easy, one-stop marketplace for affiliate sites. In this network, the e-tailers act as the originators, LinkShare is the syndicator, and the content sites are the distributors. LinkShare also provides the technical infrastructure for monitoring transactions and tracking and paying affiliate commissions, and it offers ancillary services such as reporting for both affiliates and retailers. The e-tailers pay LinkShare a combination of up-front fees and per-sale commissions.

Distributors. Distributors are the customer-facing businesses. They use syndication to lower their costs for acquiring content and to expand the value they provide to consumers. E*Trade is one example of a distributor. Another is Women.com, an on-line destination for women. Women.com's staff creates its own content, which it integrates with syndicated information from partners such as ABC News and Good Housekeeping magazine. Women.com also offers a range of syndicated services, including free Web-based e-mail accounts from WhoWhere, a subsidiary of Lycos, and weather forecasts from AccuWeather. As a distributor, Women.com's role is to organize all this material into a compelling, targeted offering that attracts visitors.

At the same time, Women.com also distributes shopping services syndicated from a variety of partners, including eToys, RedEnvelope, and FogDog. Much like a traditional department store, Women.com organizes these on-line retailers' merchandise into relevant categories, such as gifts, clothing, cosmetics, and electronics, and it promotes featured products with pictures and descriptions. There are two important differences from the physical world, however. First, when a customer makes a purchase, she does so through a special hyperlinked connection to the partner site rather than through Women.com itself. Women.com need not hold inventory, process transactions, or manage fulfillment, but it receives a percentage of each sale for bringing in the customer. Second, distributors have far more flexibility on the Web. If PlanetRx offers a better commission on cosmetics than another partner, or if one set of products sells better than another, Women.com can quickly swap the products it promotes to maximize its revenues. It never has to worry about unsold inventory or a time lag in reconfiguring its supply chain.

From Scarcity to Abundance

Internet syndication opens up endless opportunities for entrepreneurs, and it provides enormous freedom to all companies. It enables businesses to choose where they wish to concentrate their efforts and to piggyback on a myriad of other businesses that can handle the remaining elements of a complete end-to-end service. Unlike outsourcing, it does not restrict flexibility. Syndication relationships can change rapidly - by the second, in fact - and companies can quickly shift between different roles. But because syndication networks are so complex, they also present a host of challenges.

For a sense of what business is like in a syndication network, consider the Motley Fool, a popular on-line company that provides financial information to investors. The Motley Fool plays all three syndication roles simultaneously. It originates content, which it uses on its own Web site and on its America Online site, and which it also offers through syndicators like iSyndicate. It acts as a syndicator itself, providing stock-market commentary in various formats to sites such as Yahoo! and the San Jose Mercury News's Web site, as well as to 150 print newspapers and 100 radio stations. And it distributes syndicated business stories from news wires such as Reuters and syndicated financial applications from FinanCenter.

At an operational level, the Motley Fool's business is extremely complicated. The various elements of content that flow between it and its partners are updated according to different schedules and are subject to different business rules governing how material can be used and how payments are distributed. In some cases, the Motley Fool makes money through up-front licensing fees; in other cases, it receives a share of advertising revenue on other sites that run its content; and in still other cases, it takes a share of transaction revenues. Fortunately, however, the content flows, the business rules, and the revenue streams can largely be managed by software. As long as you get the code right, the business runs smoothly.
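The paragraph above notes that content flows, business rules, and revenue streams can largely be managed by software. As a minimal sketch of what "getting the code right" might look like, each syndication contract below carries its own revenue rule, and a single settlement routine applies whichever rule a partner signed. All contract terms and figures are hypothetical, not the Motley Fool's actual arrangements.

```python
# Each contract names one of three revenue models the article describes:
# up-front licensing fees, a share of ad revenue, or a share of transactions.
# One settlement function dispatches to the right rule.

def license_fee(contract, activity):
    return contract["flat_fee"]

def ad_share(contract, activity):
    return activity["ad_revenue"] * contract["share"]

def transaction_share(contract, activity):
    return activity["transaction_revenue"] * contract["share"]

RULES = {"license": license_fee, "ad_share": ad_share, "txn_share": transaction_share}

def settle(contract, activity):
    """Compute what a partner owes under its contract's revenue rule."""
    return RULES[contract["model"]](contract, activity)

contracts = [
    {"partner": "portal-a", "model": "license", "flat_fee": 5000},
    {"partner": "portal-b", "model": "ad_share", "share": 0.25},
    {"partner": "broker-c", "model": "txn_share", "share": 0.02},
]
activity = {"ad_revenue": 40_000, "transaction_revenue": 200_000}
print([settle(c, activity) for c in contracts])  # [5000, 10000.0, 4000.0]
```

Adding a new partner, or moving an existing one to a different revenue model, is a data change rather than a process change - which is what makes a many-partner network operationally tractable.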

The bigger challenge lies at the strategic level. Given the unpredictable and ever-changing flows of revenues, profits, and competition on the Web, companies need to choose their place in a syndication network with care, and they need to be adept at reconfiguring their roles and relationships at a moment's notice. The syndicated world of the Web is radically different from the traditional business world, where assets tended to be fixed and roles and relationships stable. To thrive in a syndication network, executives first have to shed many of their old assumptions about business strategy.

In setting strategy, companies have always sought to organize their markets so as to place themselves in the sweet spot of the value chain-the place where most of the profits reside. Traditionally, the way to do that has been to seize upon or create scarcities. Control over a scarce resource is always more valuable than control over a commodity. Procter & Gamble cranks out a constant stream of new products and product extensions because it wants to maximize its control over supermarkets' limited shelf space. Home Depot seeks to crush local hardware stores with broad selection and low prices because it wants to be the only place in town to buy saws and bathroom fixtures. Other companies seek to dominate a source of supply, to patent a product, or to establish control over some other scarce resource.

The Internet, however, replaces scarcity with abundance. Information can be replicated an unlimited number of times. It can be reassembled and recombined in infinite combinations. And it can be distributed everywhere all the time. There are no limits on shelf space on the Net, every store is accessible to every shopper, the lanes of supply and distribution are wide open, and even the tiniest new company can achieve enormous scale in almost no time. Because the constraints of physical inventory and location don't apply, creating and maintaining scarcities isn't an option.

Instead, successful strategies must be designed to benefit from abundance. Companies need, in other words, to seek out and occupy the most valuable niches in syndication networks-which turn out to be those that maximize the number and strength of the company's connections to other companies and to customers. And because those connections are always changing, even the most successful businesses will rarely be able to stay put for long.

Amazon's Syndication Strategy

The maneuverings of Amazon.com can best be understood through the lens of syndication strategy. Jeff Bezos, Amazon's founder and CEO, quickly established his fledgling company as the leading on-line distributor of books and information about books by capitalizing on the abundance of the Web: his site could offer a dramatically larger selection than any physical bookstore. But since the Web's abundance is open to all comers, that early advantage could not be sustained for long. Other on-line booksellers soon matched Amazon's selection, and consumers began to use shopping bots to compare many merchants' prices instantly. Though Amazon is the largest retailer on the Web, thousands of competitors are always just a click away. If Bezos had simply tried to maintain Amazon's role as a distributor, he would have doomed his company to endless price wars and vanishing margins, no matter how many different products it distributed.

But Amazon hasn't stood still. It has constantly repositioned itself to play different syndication roles. In 1996, for example, it launched an aggressive affiliate program called Associates. Instead of relying solely on attracting customers to its site, Amazon can use this program to take its site to where customers already are. The more than 400,000 sites that have signed up to be affiliates each provide their own visitors with hyperlinks that enable them to make purchases through Amazon. In effect, Amazon is syndicating its store to other locations. While Amazon loses some control over merchandising and has to pay out 5% to 15% commissions on revenues generated by affiliates, the benefits far exceed the costs. Amazon puts itself in front of more potential customers than it could attract directly, especially in niche categories where affiliates provide specialized content and organize product listings for a specific audience. And it turns hundreds of thousands of non-employees into a virtual sales force, which never gets paid until a sale is realized.
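The economics of an Associates-style program are simple to work through. The sketch below uses the 5% to 15% commission range quoted above; the sales figure is invented for illustration.

```python
# Rough economics of an affiliate program: the retailer pays a commission
# only on realized sales, using the 5%-15% range mentioned in the text.
# The sales volume is a hypothetical illustration.

def affiliate_payout(sales, commission_rate):
    """Commission owed to affiliates on referred sales."""
    if not 0.05 <= commission_rate <= 0.15:
        raise ValueError("commission outside the quoted 5%-15% range")
    return sales * commission_rate

sales = 1_000_000  # affiliate-referred revenue for the period
payout = affiliate_payout(sales, commission_rate=0.10)
print(payout, sales - payout)  # 100000.0 900000.0
```

Because nothing is paid until a sale closes, the affiliate network behaves like a sales force with zero fixed cost - the structural advantage the paragraph describes.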

More recently, Amazon has taken on a new syndication role. Through its zShops program, it now hosts hundreds of small e-commerce providers on its own site. These shops gain access to Amazon's 13 million customers as well as its sophisticated tools for smoothing the on-line ordering process. In return, they pay Amazon a listing fee for each item, plus a 1.25% to 5% commission on each sale. zShops turns Amazon into a distributor - not of books or other products but of on-line shops. In addition to the revenue boost, Amazon gets additional traffic from customers interested in the niche zShops offerings. Amazon has also started signing distribution deals with larger e-tailers that offer products complementary to its own. Amazon receives substantial payments and equity from these partners in exchange for placement on its site, and it also gives customers fewer reasons to shop elsewhere.
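A zShops-style hosting fee combines a fixed and a variable component: a per-item listing fee plus a commission between 1.25% and 5% of each sale, per the figures above. In this sketch the listing-fee amount and the shop's sales are hypothetical.

```python
# Host's take under a listing-fee-plus-commission model. The 1.25%-5%
# commission band comes from the text; the $0.10 listing fee and the
# sale amounts are invented for illustration.

def host_take(items_listed, listing_fee, sale_amounts, commission_rate):
    """Listing fees on every item, plus a commission on each sale."""
    if not 0.0125 <= commission_rate <= 0.05:
        raise ValueError("commission outside the quoted 1.25%-5% range")
    return items_listed * listing_fee + sum(s * commission_rate for s in sale_amounts)

take = host_take(
    items_listed=40,
    listing_fee=0.10,                  # hypothetical per-item fee
    sale_amounts=[25.0, 80.0, 15.0],   # this month's sales by one shop
    commission_rate=0.05,
)
print(round(take, 2))  # 10.0
```

The listing fee gives the host revenue even on items that never sell, while the commission ties the bulk of its take to the shops' actual volume.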

By acting as a syndicator and a distributor of e-commerce, Amazon is turning the absence of scarcity on the Web from a threat to an advantage. The multitude of other sites that users visit are no longer alternatives to Amazon; they are opportunities for Amazon to expand its presence-and its earnings.

Rethinking Core Capabilities

Amazon's experience holds a very important lesson for all companies. In a syndicated world, core capabilities are no longer secrets to protect - they are assets to buy and sell. One of Amazon's most distinctive capabilities is its ordering system. Instead of keeping that system to itself - as traditional strategists might have counseled - Amazon uses syndication to sell the capability to both stores and content sites throughout the Web. Amazon draws the line at direct competitors such as Barnesandnoble.com, which it is suing for infringing on a patent covering its ordering system, though even this distinction may ultimately give way as the benefits of syndication multiply. In an economy of scarcity, core capabilities are sources of proprietary advantage. In an economy of abundance, they're your best product. If you try to sequester them, you may gain a short-term competitive edge, but your competitors will soon catch up. If you syndicate them, you can turn those competitors into customers.

In some cases, the syndicated assets themselves may be valuable enough to generate big revenues. But even if they aren't, the other benefits of syndication can be significant. Like Amazon, companies can use syndication to broaden their distribution in an efficient manner. Syndication can also bring companies data about customer usage patterns. And it can generate leads and reinforce brands. All of these are benefits that companies have traditionally sought to derive by dominating their markets and by exercising exclusive control over information. But with competitive advantages increasingly difficult to lock in-thanks to the leveling power of the information economy-syndication provides a superior route to the same benefits.

Think about what Federal Express has done with its package-tracking system. FedEx invested a great deal of money to develop unique technologies and an infrastructure for monitoring the location of every package it handles. This capability gave it an edge on competitors. Now, however, FedEx is syndicating its tracking system in several ways. The company allows customers to access the system through its Web site to check the status of their packages. It provides software tools to its corporate customers that enable them to automate shipping and track packages using their own computer systems. And it allows on-line companies to customize its tracking system, integrate it with their own offerings, and distribute it through their own sites.

Someone who orders flowers through Proflowers, for example, can check the delivery status directly on the Proflowers site. Behind the scenes, it's the FedEx application querying the FedEx database, but whereas FedEx just tracks the package, Proflowers also provides information from its own records about what's inside the box and what the sender wrote on the accompanying card. FedEx doesn't charge Proflowers for using its technology; it is, in a very real sense, giving away one of its core capabilities. What does it get in return? Plenty. By integrating its technology with the Proflowers ordering system, it makes it much harder for the customer to switch to a different delivery company. By enabling Proflowers to serve its customers better, it ensures that more packages of flowers will be shipped in FedEx planes and trucks. And by incorporating its brand name on the Proflowers site, it publicizes its services and promotes its brand.
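The enrichment pattern described above - a distributor layering its own order data on top of a syndicated lookup - can be sketched as follows. The carrier function here is a stand-in for a real tracking service, and all the data is invented.

```python
# A distributor combines a syndicated carrier lookup (location and status)
# with records only it holds (contents, gift card). The carrier function is
# a hypothetical stand-in, not a real tracking API.

def carrier_tracking(tracking_number):
    """Stand-in for the syndicated tracking system: status and location only."""
    return {"tracking_number": tracking_number,
            "status": "in transit",
            "location": "Memphis, TN"}

ORDERS = {  # the distributor's own records, which the carrier never sees
    "ORD-1": {"tracking_number": "TN-42",
              "contents": "a dozen tulips",
              "card": "Happy birthday!"},
}

def order_status(order_id):
    """Merge the carrier's answer with what only the distributor knows."""
    order = ORDERS[order_id]
    status = carrier_tracking(order["tracking_number"])
    return {**status, "contents": order["contents"], "card": order["card"]}

print(order_status("ORD-1"))
```

Each party keeps its own data: the carrier's system answers only about the package, and the distributor's view adds the customer-facing details - which is exactly why the combined page is more useful than either source alone.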

As more and more business turns into e-business, smart managers in every company will find ways to use syndication to do what FedEx has done.

The New Shape of Business

Beyond its impact on individual companies' strategies and relationships, syndication promises to change the nature of business. As organizations begin to be constructed out of components syndicated from many other organizations, the result will be a mesh of relationships with no beginning, end, or center. Companies may look the same as before to their customers, but behind the scenes they will be in constant flux, melding with one another in ever-changing networks. The shift won't happen overnight, and of course there will always be functions and goods that don't lend themselves to syndication. But in those areas where syndication takes hold, companies will become less important than the networks that contain them.

Indeed, individual companies will routinely originate, syndicate, or distribute information without even being aware of all the others participating in the network. A particular originator may, for example, have a relationship with only one syndicator, but through that relationship it will be able to benefit from the contributions of hundreds or even thousands of other companies. While every participant will retain some measure of control-choosing which syndication partners to have direct relationships with and deciding which business rules to incorporate into its syndicated transactions-no participant will control the overall network. Like any highly complex, highly adaptive system, a well-functioning syndication network will be self-organizing, constantly optimizing its behavior in response to an unending stream of information about the transactions taking place among its members.

Syndication may not be a new model, but it takes on a new life thanks to the Internet. Virtually any organization can benefit from syndication, often in several different ways, provided it's willing to view itself as part of a larger, interconnected world rather than seeking exclusive control at every turn. The tools and intermediaries that facilitate syndication relationships will become more sophisticated over time. Already, though, there are many syndication networks in place and many examples of successful syndication strategies. As the Internet economy continues to grow in importance, syndication will grow along with it as the underlying structure of business.

The Structure of Syndication

Originators: Create original content.
  Traditional examples: Dreamworks, Charles Schulz, Oprah Winfrey
  Web examples: Inktomi, Motley Fool

Syndicators: Package content and manage relationships between originators and distributors.
  Traditional examples: King World, United Features
  Web examples: iSyndicate, LinkShare, Motley Fool

Distributors: Deliver content to consumers.
  Traditional example: New York Times
  Web example: Motley Fool

Consumers: View or use content; create revenue through fees, purchases, or viewing ads.

Everything Changes

Business in a syndicated world bears little resemblance to its industrial predecessor. To succeed, executives need to change the way they think about nearly every aspect of strategy and management.

Traditional Business
- Structure of relationships: Linear supply-and-demand chains
- Corporate roles: Fixed
- Value added: Dominated by physical distribution
- Strategic focus: Control scarce resources
- Role of corporate capabilities: Sources of advantage to protect
- Role of outsourcing: Gain efficiency

Syndicated Business
- Structure of relationships: Loose, weblike networks
- Corporate roles: Continually shifting
- Value added: Dominated by information manipulation
- Strategic focus: Leverage abundance
- Role of corporate capabilities: Products to sell
- Role of outsourcing: Assemble virtual corporations

By replacing an economy of scarcity with one of abundance, the Internet will force executives to rethink their strategies. Instead of viewing core capabilities as secrets to protect, they'll need to see them as products to sell.


By Kevin Werbach

Kevin Werbach is the managing editor of Release 1.0, a monthly report on trends in the Internet, communications, and computing worlds.

Why Syndication Suits the Web

Syndication has traditionally been rare in the business world for three reasons. First, syndication works only with information goods. Because information is never "consumed," infinite numbers of people can use the same information. That's not the case with physical products. If I sell you a car or a watch, I can't turn around and sell those same items to someone else. As long as most of the business world was engaged in the production, transport, and sale of physical goods, syndication could exist only on the margins of the economy.

Second, syndication requires modularity. While a syndicated good can have considerable value in and of itself, it does not normally constitute an entire product; it's a piece of a greater whole. Howard Stern's radio show attracts a sizable audience, but it needs to be combined with many other shows to create a station's programming. Dave Barry's columns have lots of dedicated readers, but they need to be combined with many other pieces of content to make a newspaper. In the old, physical economy, modularity was rare. The boundaries between products, supply chains, and companies tended to be clearly demarcated and impermeable.

Third, syndication requires many independent distribution points. There's little to be gained by creating different combinations and configurations of content if there's only one distributor, or if every distributor is controlled by a content creator. Think of Hollywood in its early days. Major movie studios such as MGM and Warner Brothers not only produced films but also owned the theaters that showed the films. Since a theater owned by Warner Brothers played only Warner Brothers movies, there was little room for syndicators. But when the U.S. government broke up those arrangements in 1948 on antitrust grounds, studios and distributors became independent from theaters. Syndication of entertainment content began to flourish. In most industries, however, there still tend to be limited numbers of distribution outlets, and they often have tight relationships with the companies that create the goods they sell.

With the Internet, information goods, modularity, and fragmented distribution become not only possible but essential. Everything that moves on the Internet takes the form of information. The hyperlinked architecture of the Web is modular by nature. And because anyone can start a Web site, there are literally millions of different distribution points for users. In such an environment, syndication becomes inescapable.

Beyond Outsourcing

On the surface, syndication looks a lot like outsourcing. They both involve the use of outsiders to supply a business asset or function. But syndication holds two large advantages over traditional outsourcing. First, because syndication deals with information rather than physical resources, a company can syndicate the same goods or services to an almost infinite number of partners without incurring much additional cost. A physical call-center outsourcer, for example, must hire more people, lease more office space, and buy more equipment as it adds customers. But a content or e-commerce originator doesn't have to invest in more people, space, or machinery when it adds another distributor. Software practically scales for free.

The second advantage is that on-line syndication can be automated and standardized in a way that physical outsourcing can't. An important feature of syndication relationships is that business rules, such as usage rights and payment terms, can be passed between companies along with the syndicated asset or service - both take the form of digital code. Moreover, because the Internet is an open system, the rules can be coded in standard formats that can be shared by any company. That allows syndication networks to be created, expanded, and optimized far more quickly than is possible in the physical world.
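The idea of business rules traveling with the asset in a standard format can be made concrete with a small sketch. Here the terms are encoded as JSON, a widely shared interchange format; the field names and values are invented for illustration, not drawn from any actual syndication standard.

```python
# Usage rights and payment terms expressed as data, serialized in a
# standard format (JSON) that any partner in the network can parse.
# All field names and terms are hypothetical.

import json

license_terms = {
    "asset": "stock-commentary-feed",
    "usage": {"display": True, "resyndication": False, "archive_days": 30},
    "payment": {"model": "ad_share", "share": 0.25, "currency": "USD"},
}

wire_format = json.dumps(license_terms)   # what travels alongside the content
received = json.loads(wire_format)        # any partner can reconstruct it

print(received["payment"]["model"], received["usage"]["archive_days"])
```

Because the terms are machine-readable, a distributor's systems can enforce the archive window and compute the revenue share automatically - no person has to re-read the contract for each transaction.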

Syndication provides choices far beyond those that companies had with outsourcing, but the existence of those choices makes a coherent strategy all the more important. Companies should look for relationships that offer the greatest speed and flexibility, but they should also carefully identify the business terms they consider most important. Should you pay an up-front fee for a syndicated search service for your site, or would it make more sense to receive the service for free but let the provider run a banner ad? Should you use a syndicated procurement application from a company that sells the aggregated purchasing data it collects, or should you pay more for an application from a company that won't use your data? The flexibility of the Internet's architecture - and the limitless creativity of Internet entrepreneurs - means that every company will face a multitude of complex choices in structuring relationships. Be prepared.

Copyright of Harvard Business Review is the property of Harvard Business School Publication Corp. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.
Source: Harvard Business Review, May/Jun2000, Vol. 78 Issue 3, p84, 10p, 3c.