Saturday, June 10, 2006

Consistency matters.

June 11, 2006
For Some, Online Persona Undermines a Résumé

By ALAN FINDER

When a small consulting company in Chicago was looking to hire a summer intern this month, the company's president went online to check on a promising candidate who had just graduated from the University of Illinois.

At Facebook, a popular social networking site, the executive found the candidate's Web page with this description of his interests: "smokin' blunts" (cigars hollowed out and stuffed with marijuana), shooting people and obsessive sex, all described in vivid slang.

It did not matter that the student was clearly posturing. He was done.

"A lot of it makes me think, what kind of judgment does this person have?" said the company's president, Brad Karsh. "Why are you allowing this to be viewed publicly, effectively, or semipublicly?"

Many companies that recruit on college campuses have been using search engines like Google and Yahoo to conduct background checks on seniors looking for their first job. But now, college career counselors and other experts say, some recruiters are looking up applicants on social networking sites like Facebook, MySpace, Xanga and Friendster, where college students often post risqué or teasing photographs and provocative comments about drinking, recreational drug use and sexual exploits in what some mistakenly believe is relative privacy.

When viewed by corporate recruiters or admissions officials at graduate and professional schools, such pages can make students look immature and unprofessional, at best.

"It's a growing phenomenon," said Michael Sciola, director of the career resource center at Wesleyan University in Middletown, Conn. "There are lots of employers that Google. Now they've taken the next step."

At New York University, recruiters from about 30 companies told career counselors that they were looking at the sites, said Trudy G. Steinfeld, executive director of the center for career development.

"The term they've used over and over is red flags," Ms. Steinfeld said. "Is there something about their lifestyle that we might find questionable or that we might find goes against the core values of our corporation?"

Facebook and MySpace are only two years old but have attracted millions of avid young participants, who mingle online by sharing biographical and other information, often intended to show how funny, cool or outrageous they are.

On MySpace and similar sites, personal pages are generally available to anyone who registers, with few restrictions on who can register. Facebook, though, has separate requirements for different categories of users; college students must have a college e-mail address to register. Personal pages on Facebook are restricted to friends and others on the user's campus, leading many students to assume that they are relatively private.

But companies can gain access to the information in several ways. Employees who are recent graduates often retain their college e-mail addresses, which enables them to see pages. Sometimes, too, companies ask college students working as interns to perform online background checks, said Patricia Rose, the director of career services at the University of Pennsylvania.

Concerns have already been raised about these and other Internet sites, including their potential misuse by stalkers and students exposing their own misbehavior, for example by posting photographs of hazing by college sports teams. Add to the list of unintended consequences the new hurdles for the job search.

Ana Homayoun runs Green Ivy Educational Consulting, a small firm that tutors and teaches organizational skills to high school students in the San Francisco area. Ms. Homayoun visited Duke University this spring for an alumni weekend and while there planned to interview a promising job applicant.

Curious about the candidate, Ms. Homayoun went to her page on Facebook. She found explicit photographs and commentary about the student's sexual escapades, drinking and pot smoking, including testimonials from friends. Among the pictures were shots of the young woman passed out after drinking.

"I was just shocked by the amount of stuff that she was willing to publicly display," Ms. Homayoun said. "When I saw that, I thought, 'O.K., so much for that.' "

Ms. Rose said a recruiter had told her he rejected an applicant after searching the name of the student, a chemical engineering major, on Google. Among the things the recruiter found, she said, was this remark: "I like to blow things up."

Occasionally students find evidence online that may explain why a job search is foundering. Tien Nguyen, a senior at the University of California, Los Angeles, signed up for interviews on campus with corporate recruiters, beginning last fall, but he was seldom invited.

A friend suggested in February that Mr. Nguyen research himself on Google. He found a link to a satirical essay, titled "Lying Your Way to the Top," that he had published last summer on a Web site for college students. He asked that the essay be removed. Soon, he began to be invited to job interviews, and he has now received several offers.

"I never really considered that employers would do something like that," he said. "I thought they would just look at your résumé and grades."

Jennifer Floren is chief executive of Experience Inc., which provides online information about jobs and employers to students at 3,800 universities.

"This is really the first time that we've seen that stage of life captured in a kind of time capsule and in a public way," Ms. Floren said. "It has its place, but it's moving from a fraternity or sorority living room. It's now in a public arena, so it's a completely different ballgame."

Ms. Rose of the University of Pennsylvania said of these public sites, "Students go on them a lot, and, unfortunately, now employers go there."

Some companies, including Enterprise Rent-a-Car, Ernst & Young and Osram Sylvania, said they did not use the Internet to check on college job applicants.

"I'd rather not see that part of them," said Maureen Crawford Hentz, manager of talent acquisition at Osram Sylvania. "I don't think it's related to their bona fide occupational qualifications."

More than a half-dozen major corporations, including Morgan Stanley, Dell, Pfizer, L'Oréal and Goldman Sachs, turned down or did not respond to requests for interviews.

But other companies, particularly those involved in the digital world like Microsoft and Métier, a small software company in Washington, D.C., said researching students through social networking sites was now fairly typical.

"It's becoming very much a common tool," said Warren Ashton, group marketing manager at Microsoft. "For the first time ever, you suddenly have very public information about almost any candidate who is coming through the process."

At Microsoft, Mr. Ashton said, recruiters are given broad latitude over how to work, and there is no formal policy about using the Internet to research applicants. "There are certain recruiters and certain companies that are probably more in tune with the new technologies than others are," he said.

Microsoft and Osram Sylvania have also begun to use social networking sites in a different way, participating openly in online communities to get out their company's messages and to identify talented job candidates.

Students may not know when they have been passed over for an interview or a job offer because of something a recruiter saw on the Internet. But more than a dozen college career counselors said recruiters had been telling them since last fall about incidents in which students' online writing or photographs had raised serious questions about their judgment, eliminating them as job candidates.

Some college career executives are skeptical that many employers routinely check applicants online. "My observation is that it's more fiction than fact," said Tom Devlin, director of the career center at the University of California, Berkeley.

At a conference in late May, Mr. Devlin said, he asked 40 employers if they researched students online and every one said no.

Many career counselors have been urging students to review their pages on Facebook and other sites with fresh eyes, removing photographs or text that may be inappropriate to show to their grandmother or potential employers. Counselors are also encouraging students to apply settings on Facebook that can significantly limit access to their pages.

Melanie Deitch, director of marketing at Facebook, said students should take advantage of the site's privacy settings and be smart about what they post. But it is not clear whether many students are following the advice.

"I think students have the view that Facebook is their space and that the adult world doesn't know about it," said Mark W. Smith, assistant vice chancellor and director of the career center at Washington University in St. Louis. "But the adult world is starting to come in."

Copyright 2006 The New York Times Company

Friday, June 09, 2006

The definition of value.

Towards a General Theory of Value: An Interview with Michael Benedikt

Written by Gong Szeto

Filed in Gain: Journal of Business and Design.

Michael Benedikt’s books include For an Architecture of Reality, Deconstructing the Kimbell, and Cyberspace: First Steps. He lectures widely to business and professional audiences, and has written over a hundred articles on architecture, design, social-economic theory, and the information society. He is the Hal Box Chair in Urbanism and Director of the Center for American Architecture and Design at the University of Texas at Austin.

>> What transpired during the period of writing A General Theory of Value?

Started in 1992, this book took ten years to write. In 1992, the Internet was just beginning to spur the investment boom in digital communications that would prevail until this year (a boom predicted in Cyberspace: First Steps (MIT Press, 1991), a book written before there was a "World Wide Web"). What was not clear to me then—or to anyone else at the time—was what of value would sustain the boom other than the confidence that "great things would happen," that the Information Age had finally arrived, etc.

The nineties went by. Fortunes were being made mainly in providing infrastructure, programming tools, and access, but no one was making a profit offering content. There was no there there—or not enough there there, somehow, for people to want to pay for content over and above access. In all, Internet-based investment soon began to look like a Ponzi scheme, in which growing numbers of late investors pay off the smaller number of early ones. Who, one had to wonder, would be left holding the over-valued baby?

At the same time I was also lamenting that cyberspace—that wonderful, phantasmagoric three-dimensional alternative reality imagined by William Gibson—was not actually shaping itself on-line as I and many others thought it surely would. What Mosaic, then Netscape, then Explorer delivered was mostly the content of your local drugstore newsstand, but worse: delivered more jerkily, more shallowly, and more resolutely two-dimensionally—like paper flyers blown against the back of the computer screen. (99% of it still looks that way, Flash graphics notwithstanding.) Set aside the code-writing required: by 1993 it was clear that the transmission and processing speeds required to sustain cyberspace were going to be long in coming. They are still not here. To this day, only advanced intranet gamers have a foretaste of Gibsonian cyberspace: a real-time, shared, virtual space seamlessly mixing useful data, personal presence, and real-world, real-time connection.

“My definition of value is simple: ‘positive value’ is what we attribute to that which intensifies and/or prolongs life. Conversely, ‘negative value’ is what we attribute to that which dilutes and/or shortens life.”

Forget cyberspace. Over the whole digital communications enterprise hung the question of why. What good was it all (notwithstanding the fact that many of my friends, and one member of my family, were becoming dotcom millionaires, at least on paper)? Discovering that "good"—finding the value of networked computers for the masses—would be essential to achieving sustainable economic growth in the 21st century.

>> So there must have been some valuable things that emerged during that period.

By the late nineties there were some candidates. Quick and inexpensive access to hard information—like news, manuals, and other business and legal documentation—from anywhere on earth seemed like a durably Good Thing. So too did access to entertainment and shopping, with consumer-oriented companies offering 'reception areas' and catalogs to the public as well as establishing real-time global markets in certain goods. Add the larger category of enriched asynchronous communications between individuals (whatever the reason for or content of those communications), and one seemed to have incontrovertible grounds for believing that the digital revolution could deliver real value to millions of ordinary people for decades to come.

All very promising. Missing from the picture, though, was a sufficiently modern theory of value: a deeper way to understand why and how these things had value over time. In the new, so-called post-industrial, information-age era, if we didn't have a theory of value—and in particular, a good theory of economic value—then we couldn't decrease our failure rate, we couldn't steer our efforts, and we wouldn't have the ideas we needed to come up with new enterprises and solutions. Trial and error would remain the only method, with "guns or butter" utilitarianism our only economic doctrine.

>> So what was missing in current thinking about “value”?

After much research it became clear to me that neither by themselves nor together had the disciplines of psychology or economics anything serious to say as to why people spend their time and/or money the way they do (although econometrics could say roughly how). And when the good in question was as ephemeral and cheaply reproducible as "information"—i.e. not food or steel or real estate—the problem was compounded. Of what (and how much) value is "information," after all?

This was the question that my theory set out to answer. It would eventually have something to say about the value of everything, from food to architecture.

Which is just as well, because there is more to our lives than computers and communications. Indeed, economic growth over the next few decades might not be driven by the information industries at all. These might have peaked as providers of value already, and what they produce might be heading for commodity status: in constant over-supply and competing on price alone. (Look at what's happening to popular music and the radio and recording industries. Or to cable and satellite TV. Or to wireless phone service...) If this is the case—as I believe it is—then it's critical to ask "what's next?" What new areas of production, employment, and enjoyment are going to carry the economy into the future? I offer: design and architecture. But that's at the end of the book, in the last chapter and Coda, entitled "The Value of Architecture."

>> Who is the audience for this book?

If you find what I am saying interesting, you... and people like you: informed professionals and academics; intellectuals with a stomach for mixing science, social science, and business; future-looking developers, marketers, and CEOs; graduate students; and, hopefully, many architects and designers.

>> How do you define "value"? How do most people think of value? Architects? Businesspersons?

My definition of value is simple: "positive value" is what we attribute to that which intensifies and/or prolongs life. (Conversely, "negative value" is what we attribute to that which dilutes and/or shortens life.) The bias is naturally towards human life (over animal or vegetal life), and towards life in the social and physical proximity of the judge over life further away. But I argue for constantly moderating that bias—i.e., for widening the ambit of consideration—if only, interestingly enough, in self interest.

How do most people think about value? In the singular ("value"), as getting a lot of stuff for a little money; and in the plural ("values"), as virtues or ideals worth achieving: for example, loyalty, honesty, efficiency, control... The list is long and full of intrinsic conflicts.

>> Why is a deep understanding of value necessary for designers?

Well to start with, what is design? I would say: thoughtful making, or rather, thought before making. Design is not just about shapes, colors, and materials. It's about the close consideration of the ways in which the product of design—be it a building or a bicycle—might enhance the life of its makers, purveyors, owners, users, fixers, and disposers. It's about human needs, and the planet's needs. At the most global level, design is speeded-up evolution, courtesy of our excess of gray matter.

You would think from this that I would be against "new" and "cool" in design. I'm not at all. "New" and "cool" are significant epithets, because if you look at the reasons people spontaneously say them, you will often find a genuine need being met in a new and better way, if only the need for refreshed vision, for believing that the future holds promise. I don't think Frank Gehry meditates much on the nature of value in the abstract. But a sharp deal-maker and astute businessman as well as a brilliant designer, he made his Guggenheim Museum in Bilbao an aesthetic as well as a value-economic tour de force. Who said art museums had to be neutral white boxes with gauzy skylights and low-voltage track lighting—minimalist visual Muzak? Who put "flexibility" above all other values? Certainly not the artists, and certainly not art's customers.

>> How do you envision a deeper understanding of value affecting works of design? the business world?

I think the answer is two-fold.

First, if understood, my theory will put paid to business's predilection for reductive simplicity in design, which yields only temporary and local economic advantage. The thrust of life's evolution is towards greater degrees of complexity and, at the same time, towards greater degrees of organization, the two together, in balance, at many scales. Evolution entails a life of production 'til reproduction, adapting, with success, to crowding and competition, and laced with happy (and unhappy) accidents. Knowing nothing of "efficiency," life loves only itself. Life wants more life, and goes always from the few to the many and from crudity to refinement, which is to say, from relatively simple-and-disorganized to relatively complex-and-organized. We are part of life's evolution, and neither we nor our social institutions nor the things we make and put into the world as artifacts can, in the long run, opt out of the trend towards greater complexity and greater organization—the equal balance of which, in any absolute amount, yields optimal omega.
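Benedikt does not define "omega" in the interview; the book develops it at length. As a purely illustrative sketch, and emphatically not Benedikt's own formula, one can capture the claim that equal balance of complexity and organization is optimal at any absolute amount by modeling omega as their product, in LaTeX:

% Illustrative placeholder only, not Benedikt's definition.
% Let C = degree of complexity and O = degree of organization, and suppose
\Omega(C, O) = C \cdot O
% For any fixed total k = C + O, the AM-GM inequality gives
\Omega(C, O) \le \left(\tfrac{k}{2}\right)^{2},
% with equality exactly at the equal balance C = O = k/2: at any absolute
% amount k, omega peaks when complexity and organization are evenly matched.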

Second: value presents itself to us first through feelings of need. When one feels thirsty or uncomfortable or insignificant, one feels a "need" for water, for a change of position or clothing, for more attention and respect. And so on. To have a theory of value one must have a theory of needs. In my book, I expand upon Maslow's hierarchy of basic needs. I posit six to Maslow's five, namely: the needs for survival, security, legitimacy, approval, confidence, and freedom. The order is significant. Usually the lower needs (earlier in the above list) trump the higher ones until they (the lower ones) are sufficiently satisfied. All goods serve to satisfy one or more of these needs to some degree; and great design is merely (!) great sophistication in how one goes about addressing, satisfying, and sometimes also stimulating these needs.
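Read mechanically, that rule (lower needs trump higher ones until they are sufficiently satisfied) is a simple prioritized scan down the list. A minimal sketch in Python, where the six needs and their order come from the interview but the satisfaction scores, threshold, and function name are my own illustration:

# Benedikt's six-level needs hierarchy, lowest first, as listed in the interview.
NEEDS = ["survival", "security", "legitimacy", "approval", "confidence", "freedom"]

def dominant_need(satisfaction, threshold=0.7):
    """Return the lowest need whose satisfaction falls below the threshold;
    lower needs trump higher ones until they are sufficiently satisfied."""
    for need in NEEDS:
        if satisfaction.get(need, 0.0) < threshold:
            return need
    return None  # all needs sufficiently satisfied

# Example: surviving and secure, but short on legitimacy.
print(dominant_need({"survival": 0.9, "security": 0.8, "legitimacy": 0.4}))
# -> legitimacy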

I also point out the existence of what I call the "token economy," in which the goods manufactured, traded, and, after a fashion, consumed, are purely social and psychological—I mean speeches, gestures, plans, stories, compliments, licenses, and so on. What's important to see is that the token economy parallels the material economy we're all familiar with, but is made almost entirely of immaterial information. The token economy influences the material economy greatly. (Indeed, I argue that money is a token—the prime one, and very "psychological" because of it.) Tokens satisfy needs; and the job of many goods, like the jobs of many people, is to design, produce, and trade in tokens profitably. Right now, very few economists think in these terms. In the information age, we cannot afford not to. The law of supply and demand operates a little differently when tokens dominate and information is the good in question. The resultant market dynamics are different too. I wish more economists would put their minds to modeling the social-psychological token economy. It's always been there, of course, but it's becoming more and more important as material goods become increasingly commoditized.

>> In the business world, the "value chain" is a popular way to describe the process by which value is created for end-products, yet most designers are concerned mostly with the end product and not with the value-creation process of raw materials and the like. Why is that?

The people who are asked to invent products that, by their form and color, material and message, appeal to consumers, are called designers. The people who optimize the processes of product manufacture are called engineers. Engineers surely design a great deal, and designers—good ones anyway—give real consideration to manufacture, if only because this often affects cost. But for designers, efficiency-of-use is at least equal in importance to efficiency-in-manufacture, and usually more important. And sometimes, even efficiency-of-use (and manufacture) is eclipsed by other values: craftsmanship, show of class, meaning, style, newness, 'atmospherics,' and so on, all of which are inimical to straight-ahead efficiency considerations. These turn out to be too simple.

For designers (in general) to bring ecological, social, production-efficiency and other values to bear in manufacture, and stay "designers" nonetheless, they must make consumers conscious of these behind-the-scenes realities and opportunities. How? If not by evidence in the material and form of the final product, then by printed information on sales tags, in brochures, in advertising campaigns, and so on.

By the way, the value added at each stage in a production process consists of increasing the omega of the product at that stage, like a lengthening and improving of a melody. The happiness of the consumer is his or her omega increasing too.

...
First published in Gain 2.0: AIGA Journal of Design for the Network Economy.

Sunday, June 04, 2006

Bringing business to the restaurant business.

"Fewer and fewer chefs, it seems, strive to be the single-restaurant artist-monk."

---

June 4, 2006
Style
The Secret Ingredient

By MICHAEL RUHLMAN

Over the past few years, Terrance Brennan, the chef and owner of Picholine, has been slowly expanding his portfolio, first opening Artisanal Fromagerie and Bistro and more recently the retail business Artisanal Premium Cheese. Like many chefs who are thinking big, Brennan has lots of ideas about what to do next but neither the time nor the connections to put them in place, so he hired the one man who he felt could: Adam Block. He said he hoped that Block, the deal maker who helped create the template for the celebrity-chef contract that transformed Las Vegas into a dining oasis and brought Thomas Keller, the chef and owner of the French Laundry in Yountville, Calif., to New York, could tell him what to do. Not that Brennan is used to being told what to do.

The two met earlier this spring in Brennan's office, far from any kitchen. Brennan, dressed in a pinstripe suit, had just returned from Chicago, where he was considering opening more bistros. Block started by conveying the, well, infelicity of this move (with new clients, he said, "a little bit of an education" is required) and by impressing upon Brennan the importance of establishing a "core brand" in New York. Block said he believed that Brennan could open four bistros in the city without "cannibalizing" himself, and noted that five was the magic number that could bring buyout money, serious cash. The tricky part, Block said, "is how to grow something that represents the philosophy and the product, and how to do it without compromising too much." He paused for emphasis and warned Brennan: "It's hard to be all things to all people, as you once were. What you're creating is an iteration of the original. With growth, something is always compromised."

Brennan absorbed the information poker-faced. By the end of the meeting, he was swayed, although not convinced. As star chefs proliferate in the $500-billion-a-year restaurant industry, more and more of them need the business acumen of someone like Block to bring their work to the masses. Although not well known even within the food world, Block is influencing the shape of name-brand dining in this country.

Born in Chicago, Block, 46, began his restaurant career 20 years ago as a consultant — reviewing operational overviews, finances, concepts and check averages and suggesting ways restaurants could be more profitable — in the San Francisco Bay Area. In the early 1990's, as chefs rose to new heights, he stepped in to help them leverage their fame into lucrative contracts. Through the word of mouth that followed extraordinary deals for marquee chefs, Block has fashioned a career that resembles star-athlete management. While his job description seems to be continuously evolving, his main role remains that of an agent who can fit talent with golden opportunity, the alchemist who can help chefs make big money.

Beginning in 1992 with Paul Bertolli, the former Chez Panisse chef (Block helped negotiate a partnership with the stagnant restaurant Oliveto, to which Bertolli would bring prosperity and national acclaim), he went on to work with such culinary lights as Alice Waters, Jean-Louis Palladin, Laurent Gras, Guenter Seeger and Eric Ripert. He negotiated the first celebrity-chef management contract in 1994, a deal with the MGM Grand in Las Vegas that gave the chef Charlie Trotter a free restaurant, a six-figure signing bonus and a percentage of sales or commensurate salary for the next 10 years (even if the restaurant closed, which it did after 15 months), a financial windfall for Trotter. All of this was unheard of in an industry where a chef who earned $150,000 a year was doing well.

This was the first deal in Las Vegas in which the chef had no ownership in a restaurant with his brand name on it, perhaps the first one conceived not to turn a profit but rather as an amenity for high rollers. "I was pulling numbers out of the air," Block admits now. Nevertheless, it provided a formula for chefs and hoteliers and the impending Las Vegas chef bonanza.

In today's restaurant-management contracts, a marquee chef is typically paid 3 to 5 percent of sales in return for opening the restaurant and spending as little as two weeks a year there; in Las Vegas, such a restaurant brings in between $10 million and $18 million annually, netting the chef between $300,000 and $900,000, an enticing secondary income, with the promise of a percentage of the profits if the restaurant succeeds.
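The arithmetic is easy to verify: 3 percent of $10 million is $300,000, and 5 percent of $18 million is $900,000. A minimal Python sketch, where the sales and percentage ranges come straight from the article and the function itself is just illustration:

# Check the article's chef-fee range: 3-5% of $10M-$18M in annual sales.
def chef_fee_range(sales_low=10_000_000, sales_high=18_000_000,
                   pct_low=0.03, pct_high=0.05):
    # Lowest fee: smallest cut of the smallest restaurant;
    # highest fee: biggest cut of the biggest one.
    return pct_low * sales_low, pct_high * sales_high

low, high = chef_fee_range()
print(f"${low:,.0f} to ${high:,.0f}")  # -> $300,000 to $900,000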

Block's current projects have taken him around the world — from London (meeting with Terence Conran) to Bora Bora (with Trotter to consider the St. Regis Hotel there) to Singapore (he's marshaling an international group of celebrity chefs for a new hotel project) to his backyard, north of Oakland (where he negotiated Bertolli's exit contract from Oliveto), after which he returned to New York to help open a restaurant at the Carlton Hotel and explore other available real estate for Geoffrey Zakarian, the chef of Town, and the restaurateur Jonathan Morr.

Block raises the act of taking a meeting to athletic proportions — crunching numbers, reading contracts as others talk and arguing his points, sometimes it seems all at once — but with Brennan, Block was subdued, finessing the chef, getting a sense not of his business goals but of how these goals fitted into Brennan's life as a whole.

Brennan was anxious to get something started. "I'm most happy when I'm building something," he told Block. Noting that bad deals are simple to pull together, Block told Brennan: "One of my main jobs is to keep my clients from making mistakes. It's really hard to do a good deal."

"Terrance is in the infancy of his brand," Block told me later, adding that he liked the intelligence of Brennan's focus: his bistro — a simple concept that can be efficiently duplicated or, in industry jargon, rolled out — and a good retail product: hand-made cheese. Brennan "needs to build his core in New York first," he went on to say, "because of the economies of scale there, and he understands the market and people know him."

Block knows the New York market well. He was the prime mover of one of the highest-profile restaurant conglomerations in recent memory, the Time Warner Restaurant Collection, negotiating the deals of four of the original five restaurants with the co-developer, the Related Companies.

At first, Block dismissed the Time Warner Center out of hand for a client, the New York chef Gray Kunz, because it lacked street access, but when Thomas Keller — also a client at the time — asked Block to consider it, Block realized that if he could finesse the right kind of deal for the chef-owner of the French Laundry, it could be worth Keller's while, not to mention make him the linchpin restaurateur for the developer.

With Keller attached, the project became appealing to other big-draw chefs. Block asked Keller whom he wanted in the center. He said Masa Takayama, the country's best-known sushi chef. Thus Block not only became Takayama's negotiator but also helped raise money, hired a design team and, in the middle of construction, secured a collateral-free bank loan for what would become the four-star restaurant Masa. (The chef pays off the balance of Block's fee shrewdly: in meals.) With Keller and Takayama in the group, as well as Jean-Georges Vongerichten (who was brought in by the Related Companies), Block said he felt it was now feasible to bring in Trotter and Kunz, who'd been in the culinary desert for years after leaving Lespinasse.

"He'll customize the needs or desires of the chef with what the circumstances are," Trotter said. "He'll mesh the two together. It's really incredible. He works on everything from making a deal happen to arranging for financing if that's desired. There's no one who does exactly what he does."

Professionally, Adam Block is a one-man band. He recently bought a two-bedroom apartment in Chelsea but does not have a New York office. He logs more than 200,000 miles a year on airplanes. He's a knot of contradictions: a pugnacious negotiator who wears his heart on his sleeve; a behind-the-scenes operator who heretofore shunned the press yet is starved for acknowledgment of his successes.

"Everyone has a gift," Block told me. "And my gift is my ability to be able to read how people think on both sides of the table." His clients are sometimes hotel operators who want chefs and sometimes chefs who want restaurant deals. "I'm not speculating, I'm telling you the way it is," he says, adding, "I know how far I can push each side when I'm negotiating."

But it's exactly this advantage that has compromised the confidence of some of his clients — most important, Keller. (Full disclosure: I worked closely with Keller on his two cookbooks.) In 2005, after six years as a member of Keller's inner circle of advisers and as a personal friend, Block was out. Questions had been raised about how well he represented Keller's interests in Bouchon Las Vegas and in Per Se. Block remains bitter about the divorce.

The suggestion that he had a conflict of interest — Block worked for both Keller and the Venetian Hotel, an owner of Keller's Bouchon — is especially frustrating for Block. "They can't have it both ways," he says. "They can't get the benefits of my relationships and doing deals for them that they want, through the relationships I have, and then complain that I have these relationships."

The break with Keller, friction with other high-profile chefs and the termination of Charlie Trotter's restaurant project in the Time Warner Center due to disagreements between the chef and the developer have left him disenchanted both with his role as middleman between developer and celebrity chef and with his income. This is why he has recently become a partner with a chef in opening a restaurant.

Initially hired by the Carlton Hotel in New York to lure a big name to its restaurant space, Block has become an investor with Zakarian and Morr in the resulting restaurant, Country. Putting his own expertise to work for him is a way for Block to earn more than his $500-an-hour fee (he does not receive a commission on his deals, and claims to feel often like the uninvited guest after the papers are signed). Echoing his own advice to Brennan, Block is also working with Zakarian and Morr on a simple concept, opening a burger restaurant in the East Village modeled after Morr's Manhattan noodle house, Republic — the kind of restaurant they could "reiterate" in New York, Las Vegas and beyond. Their plans reflect what many successful high-end chefs are doing: simplifying their work in order to sell it to more people, creating their prêt-à-porter line.

Fewer and fewer chefs, it seems, strive to be the single-restaurant artist-monk. "I don't want to just stay in one kitchen," Zakarian said on the opening day of Country, dressed in street clothes while tasting dishes brought to him by a sous-chef. "I have way more interests than just cooking." He continued, "There are so many ways to enjoy this métier" — everything from multiple books to opening a boutique cooking school to developing kitchen products to designing kitchens for other chefs and operators.

"What I think is going to change," Block predicts, surveying the celebrity chef landscape, "is that people are going to become less and less caring about who's the chef, and more and more caring about how it is done and what the food is actually like." He thus urges his clients to create what he calls an "esoteric brand," a restaurant that is associated with the chef but that doesn't require his name on the shingle or his presence in the shop. Artisanal Fromagerie and Bistro, for instance. Brennan, education complete, decided that opening in Chicago was in fact felicitous, and he has also found real estate on Manhattan's Upper West Side for his bistro's second location.

What's on the horizon for Block may be the ultimate reflection of top chefs' simplifying their work. Robert Goldstein, the president of the Venetian Hotel in Las Vegas, has put Block in charge of developing an 18,000-square-foot food hall, inspired by markets like San Francisco's Ferry Plaza Farmers Market, for a new Las Vegas hotel and casino, the Palazzo. Featuring high-quality food at low prices — in effect, a Slow Food court — it is scheduled to open in fall 2007.

"We're trying to combine art and commerce in a way that's never been done before," Goldstein says. "Great food on a mass scale." Goldstein says he is hoping to see, for instance, shops selling Emeril Lagasse's po' boys, Mario Batali's pizza and Thomas Keller's oysters — a challenge Block is perfectly poised to choreograph.

If the idea works in Las Vegas, Block says he hopes to replicate it throughout the country, a portentous ambition: star chefs setting up in the food courts of America's malls, with Adam Block quietly leading the way.

Michael Ruhlman is a freelance writer. His most recent book, "The Reach of a Chef," was published in May by Viking.

Copyright 2006 The New York Times Company

Behind the economics of attention.

An interview with
Richard A. Lanham
author of The Economics of Attention: Style and Substance in the Age of Information

Question: The information economy is saturated with information: there are something like 80 million websites, 500 TV channels, countless online newspapers constantly updated, a gazillion blogs, podcasts, mp3s, video downloads, etc., etc. And worldwide there are about 1 million new books published each year. Economics is about the allocation of scarce resources. What are the scarce resources in the information economy?

Richard A. Lanham: The scarce resource is the human attention needed to make sense of the enormous flow of information, to learn, as it were, how to drink out of the firehose.

Question: So, is the goal in the attention economy to get eyeballs first, with the money to follow? Is that how to make sense of the enormous flow of free information that is at our fingertips? If so, who can help maximize the number of those eyeballs? Software engineers? Designers? Celebrities? Artists?

Lanham: You are asking three questions at once. First, yes, in an attention economy, you have to get the eyeballs first. But the money, as many found out with internet stocks, does not automatically follow the eyeballs.

Second, how to "make sense of the enormous flow of free information" is another question altogether, at least if I understand you. If you mean, "how do we explain the explosion of free information provided by the internet?," then there are a lot of answers to that, some beyond the traditional purview of economics. People put up information on the web often for the pure pleasure of sharing what they know-the pleasure of teaching. They don't expect money to follow. They are being paid in a different coin, the pleasure of teaching, which includes of course the attention your readers/viewers/students pay to you. One of the great surprises, at least to me, about the internet-based information explosion is the extraordinary human generosity which it has revealed. People want to share their information, their enthusiasms, their way of looking at the world and now they have a new and infinitely more effective way to do it. It may be what they know about Barbie dolls, or about digital cameras, or the specifications of sewer pipe for your house-the range is infinite. It is far more surprising, at least to me, how often people want to give this information away than how they want to be paid for it. So, how to explain the "enormous flow of free information"? Emphatically, not just in the expectation of future profit. Quite the opposite. This generosity of spirit has not been so remarked as it ought to have been.

Third, "who can help maximize the number of those eyeballs?" Ah, well, everyone is trying to figure that one out. To condense into simplicity, "all the information designers." And these people are various, working in words, images, and sounds, and the new mixtures of these three signal sources which are continually emerging.

"Software engineers"? No, I think they are usually a different breed from the information designers; they show how the design can be implemented.

"Celebrities"? Well, of course, especially if we are willing to expand the meaning from movie stars and rock stars to the objects of intense centripetal gaze which we are now creating. Robert H. Frank and Philip J. Cook, in their The Winner-Take-All Society, talk about this phenomenon. Take, for example, the enormous attention structures that have been built on The DaVinci Code. I read in the newspaper today that there are whole tours being organized to visit the sites mentioned in the book, pilgrimages to shrines, though often pilgrims of a sort the proprietors of the shrines (starting with the Louvre) are not always comfortable with.

Everyone is trying to hit the lottery of attention. In one way, of course, we have to do here with an old search indeed, the search for fame. There have not been so many Achilles figures, after all. But it is fame on steroids, stronger than Achilles, or so it seems to us now. What can explain it? One way might be to argue that such an intense central focus brings together groups of people who otherwise would not share the same stage, and so is a good thing. Another way, as I mention in the book, is to think of centripetal gaze as the way some primates organize and direct their group behavior. But these are not anything like full explanations, and they do nothing to assuage the desperate longing of the would-be actors and screenwriters who haunt Hollywood, hoping against hope that they'll hit the attention lottery.

"Artists"? Of course, they have always instructed us in how to pay attention to the world. I spend most of Chapter 2 on two of them, Andy Warhol and Christo.

Question: The famous English typographer Beatrice Warde once compared printed text to a crystal wine goblet: it should be transparent and offer no impediment to appreciating the substance it conveys. In the attention economy is that the solution? Or is that part of the problem? What should text be? What should it do if it is to hold the eye?

Lanham: Whoa! Big question. I answer it in the book, indeed in different ways in each chapter. Chapter 5 provides the most detailed discussion. I argue that there are two ways to look at a text: AT it, that is to say accepting its style, its verbal surface, as its way to make meaning; and THROUGH it, that is to say looking for a "content" beneath the verbal surface and independent of it. We usually think of communication as a THROUGH affair; cut to the chase, get to the substance. But in an attention economy, the substance is the style. That is the whole argument of Chapter 1. In such an economy, AT vision is as important as THROUGH vision. The essential skill, as much for an economist as for a cultural critic, is to know how to toggle from one to the other as circumstances dictate. I argue in the last chapter that this skill in toggling has stood at the center of rhetorical training since the Greeks invented formal rhetoric. So, "what should text be?" It is going to be, as it has always been, a combination of style and substance; the trick in an attention economy is to see that style and substance, and our expectations for them, have changed places.

Question: Way back when the information economy was just getting its growth spurt, in 1993, we published your book The Electronic Word. At the same time, you convinced us to release the book in electronic form. On the whole, though, electronic books have not really—at least not yet—taken off as a form of publication: almost all books are still published and read in traditional print form. Why do you think that is the case, and would you prefer it otherwise?

Lanham: Why haven't electronic books taken off? I spend the best part of Chapter 4 answering this question. Partly, it is because screen resolution has not yet equalled that of print; Sony's new Ebook reader has come close to solving that problem. Partly it is because we read in all kinds of places and postures in which even laptops don't fit. Maybe iPods and cell phones will solve that. Partly, it is because creating an electronic text, especially a mixed-media one, costs a lot more than publishing a printed text. But the production technologies are getting cheaper by the day. Partly it is because of how Ebooks have been marketed. They have been hard, and sometimes illegal, to transport from one computer to another. It is as if you bought a book but were told that you could read it only in the living room. For the bedroom, you had to buy another copy. Idiotic.

The dedicated devices sold for reading electronic books have been expensive and incompatible with other such devices. The devices should be given away to sell the books, as Kodak did with cameras and film. Partly, it is because people have not seen what new mixtures of word, sound, and image digital expression makes possible. These possibilities are being illustrated on the web, by video games, and by some—a few—educational products. Partly, it is because there is no established sales structure to market electronic books. They have to piggyback on a printed book, as happened with my Electronic Word. None of these obstacles is insuperable, though. Just look at the internet. Lots of people read lots of words there. And lots of students read their assignments on a screen. Most scholarly communication has migrated onto the screen, too. The journal is an archive, not a vehicle for breaking news. That happens on the blog. I can't see either of these directions going into reverse.

Do I wish it otherwise? No, because, also for reasons that I explain in the book, I think electronic expression better suits an economics of attention than pure print does. And it opens up genuinely new forms of human expression, and I rejoice in such possibilities. But do I think books will go out of style? Of course not. They have proved to be an extraordinary vehicle of expression and will continue to be. I rejoice in this too. I'm a creature of books. My house is lined with them. When I finally made enough money so that I could afford any new book I wanted, I felt nirvana had arrived. I never lose my wonder at their longevity. Last week my wife and I were at the Humanist Library in Selestat, in France. It has two volumes from the 7th century; my wife and I could make out what they were saying. How's that for miracles?

What is happening—you can see it in the courses in "the history of the book" which have popped up everywhere—is that we are coming to understand the book as an expressive vehicle in new ways because we finally have something else to compare it to. All to the good. And one more cheer for books. They are, dollar for dollar, phenomenal entertainment value. And if you reckon in the second-hand buys available through Amazon and ABE, the bargains are even greater. Electronic expression drives book-reading and book-buying. Who'd have thought it? And, as a final observation on the life of the book, we ought to note how audio books have given print a voice. This is an enormous change, and one which gives renewed life and vigor to the book. But, of course, it also makes the book into something else, a performance. The media continue to mutate.

Question: Universities are at the forefront of the purveyors of information. Somewhat like book publishing, some of the practices of higher education have been transformed in the information economy, but most undergraduate education still takes place in classrooms, over a four-year span, with a faculty person who is a specialist in some field, and who has—or wants—a guaranteed lifetime job. Do the practices of the university make sense in the attention economy?

Lanham: Not to me, they don't. I spend the whole of Chapter 7 ("The Audit of Virtuality") saying why. The four-year span, the classroom and the lectures in it, the idea of a university as a campus, all these are going to change, indeed are changing. So will the circumstances of employment for the professoriat—the "winner-take-all" logic is already beginning to apply to it. The pressure will come from costs, and from the need to democratize access, and these are being addressed right now by internet courses and by private educational establishments like Phoenix University, and by the enormous educational programs offered by industry and the military. No one can predict what the new mixtures will be but you can get an idea by the online courses now being offered, as economy measures, to students resident on campus.

I might add one change which I don't discuss much in the book, the changes in departmental structure brought about by the need to teach how to create, and attend to, multi-media expression. As I argue at more than one place in the book, the whole balance between the sciences and the arts and letters will change and, probably later than sooner, university structures will have to change accordingly.

Copyright notice: ©2006 by the University of Chicago. All rights reserved. This text may be used and shared in accordance with the fair-use provisions of U.S. copyright law, and it may be archived and redistributed in electronic form, provided that this entire notice, including copyright information, is carried and provided that the University of Chicago Press is notified and no fee is charged for access. Archiving, redistribution, or republication of this text on other terms, in any medium, requires the consent of the University of Chicago Press.

The economics of attention.

An excerpt from
The Economics of Attention
Style and Substance in the Age of Information
Richard A. Lanham

CHAPTER 2: ECONOMISTS OF ATTENTION
In the spontaneous unfoldings of history, the imaginative expression of a trend precedes its conceptual-critical counterpart.
—Kenneth Burke

Our recently ended twentieth century overflows with monuments to artistic outrageousness. Never have so many artists flung so many paint pots and puzzles in the face of so many publics: urinals turned upside down and exhibited as art, Rube Goldberg machines that do abstract drawings, canvases that are all white or all black, paintings of Campbell’s soup cans, sculptures of the boxes the soup came in, trenches dug in the desert where nobody can see them, the Pont Neuf in Paris wrapped up in gold cloth for a few days and then unwrapped again. One strand of this outrageousness isn’t outrageous at all, once we see the lesson it teaches: During the twentieth century, art was undergoing the same reversal from stuff to attention described in chapter 1. Art’s center of gravity henceforth would lie not in objects that artists create but in the attention that the beholder brings to them. Some examples.

In 1917, the French artist Marcel Duchamp got together with two friends, the painter Joseph Stella and the connoisseur Walter Arensberg, to play a joke on the Independents’ art exhibition. They bought from the J. L. Mott Iron Works a urinal on which Duchamp, after turning it upside down, painted the nom de plume R. Mutt. They then sent it into the show under Mutt’s name, with the $6 registration fee. Since, under the rules of the show, any artist could submit any piece of work, it had to be shown. Some joke. None has become more famous or engendered more comment than this Fountain. [Offsite link: See a photo from Wikipedia.] In 1989, an entire museum show and book were built around it. The usual explanation of the joke has been that it illustrated the premise of the show: art was what an artist decided it was. This ipse dixit definition of art, though, however much it may elevate the artistic ego to godlike stature, doesn’t help much unless you take it a step further. Art is whatever the artist wishes to call to our attention. Art is an act of attention the artist wishes to invoke in the beholder.

Duchamp had developed this theme a few years earlier with his “Readymades.” The first, apparently, was a bicycle wheel mounted on a kitchen stool. [Offsite link: See an image from the Museum of Modern Art.] You could spin it around when you felt like it. Early “interactivity.” Later came an inverted kitchen bottle rack, less user interactive but equally stimulating to serious interpretation. [Offsite link: See an image from the Norton Simon Museum.] You could say, for example, that there was a great deal of beauty hidden in a bicycle wheel, but so long as it was attached to the bicycle, its utility obscured its beauty. Likewise with the bottle rack. From such efforts descended the long list of “found objects” littering the museums of the last century. The lesson was simple and, once learned, tedious. Art is not stuff made out of stuff taken from the earth’s crust. Art is the attention that makes that stuff meaningful. The more commonplace and physical the objects teaching the lesson, the more they taught the final insignificance of physical objects.

But Duchamp himself repudiated this interpretation. He said he did not think his Readymades had any hidden beauties to reveal. Furthermore, as he said on more than one occasion, he despised the high seriousness the beholder brought to art. Art, he thought, was a worse religion even than God. He made his feelings clear when he annotated a postcard of the Mona Lisa by drawing a mustache on it. [Offsite link: See an image from Wikipedia.] Art not only was a way of paying attention to the physical world, it was a pompous and overblown one as well. This disillusionment with art led him, in 1923, to stop creating it. His oeuvre since then, indeed over his lifetime, is slender. Yet his recent biographer Calvin Tomkins argues that he is the most important artist of the twentieth century. How could this be?

Duchamp said that he wanted to deflate the seriousness of art. He wanted to make a game out of it, a game with the beholder. We might, thus, consider his career as fabricating a series of attention games with the art-loving public. Consider the famous urinal. It illustrated the premise of the Independents’ Exhibition and so constituted a serious statement. It mocked the premise of the Independents’ Exhibition (“See, art is a real pisser, isn’t it?”) and so mocked the serious statement, and the conception of the artistic ego that the exhibition stood for. The art historians and interpreters have fallen into this ironic bear trap every time they’ve walked over it.

Inquiry of all sorts has to be serious. That is its organizing premise. But if you subtract the object of that seriousness by putting a urinal in its place, that seriousness is turned into a game. To understand it, you must then write a serious treatise on games and play, wondering all the while what you are about. The critic, like a bull bemused by the toreador’s flashing cape, starts pawing the ground, angry and confused. Such confusion has made Duchamp famous. The urinal proved to be an extraordinarily efficient generator of fame because other people—the critics and historians—did all of Duchamp’s work for him.

Likewise with the Readymades. Duchamp said he made the first one, the bicycle wheel, just because it was fun to spin the wheel around. But when you exhibit it, when you put it into an attention field called “art,” it becomes a catalyst. You must look at it differently. Yes, we should indeed pay more attention to the utilitarian world, savor its beauty as beauty. But when you find yourself gazing at it worshipfully, Duchamp turns around and says, “It’s just a bicycle wheel, you silly jerk.” The final result is to make us oscillate back and forth between the physical world, stuff, and how we think about stuff. It makes us look at our own patterns of attention and the varieties of “seriousness” we construct atop them.

That oscillation constitutes a serious lesson about seriousness. But it does not constitute great art, if we think of art as composed of stuff shaped into beauty, as forming part of a goods economy. In this industrial framework, Duchamp is the charlatan some have taken him for. But if you are willing to put him into an attention economy rather than a goods economy, let him work in attention, not in stuff, then things look different. Duchamp, as few before him, knew how to catalyze human attention in the most economical way possible. The disproportion between his oeuvre, the physical stuff he left behind, and his reputation can be explained in no other way. If we are looking for economists of attention, he provides a good place to start, an excellent lesson in efficiency.

When we consider the twentieth century from this point of view, we are reminded that futurists not only ushered us out of it but into it as well. These first futurists were led, and often financed, by Filippo Tommaso Marinetti, a wealthy Italian intellectual who wanted to catapult Italy into the future, or at least into the sophisticated present of Paris, where Marinetti lived in spirit and often in the flesh. He announced his utopian vision in an advertisement, a “Futurist Manifesto,” that appeared on the front page of the Parisian journal Le Figaro on 20 February 1909. [Offsite link: English translation of the text from Wikipedia.] Marinetti would have made a stupendous ad man in our time but, more remarkably, he already was one in his own, before blitz ad campaigns had been invented. He was, above all, an economist of attention. “Italian Futurism was the first cultural movement of the twentieth century to aim directly and deliberately at a mass audience.” He ran his intellectual campaign at the beginning of the century exactly as spin doctors would conduct political campaigns at its end. To reach this audience, Marinetti generated a torrent of manifestos and position statements. And, like an Internet company trying to buy “eyeballs” by giving away its product, he gave his products away to purchase attention: “It is believed that two thirds of the books, magazines and broadsheets that the futurists published were distributed free of charge as ‘propaganda’ material.”

The platform of this campaign for Italian cultural leadership, the famous “Manifesto,” might have come right out of the sixties. Here’s a sample: “It is from Italy that we are launching throughout the world this manifesto, charged with overwhelming incendiary violence. We are founding Futurism here today because we want to free this land from its foul gangrene of professors, archaeologists, guides and antiquarians. For too long Italy has been a market-place for second-hand dealers. We mean to free her from the innumerable museums that cover her like so many graveyards.” Get rid of everyone over thirty, especially those gangrenous professors. Forget the past. Fearlessly mount the Star Trek holodeck.

Marinetti’s friendship with Mussolini and his association with Italian Fascism and its glorification of war have brought futurism into well-deserved discredit. But in a later manifesto, from 1913, he points to a less horrific future, one that Marshall McLuhan was later to describe at greater length: “Futurism is grounded in the complete renewal of human sensibility brought about by the great discoveries of science. People today make use of the telegraph, the telephone, the phonograph, the train, the bicycle, the motorcycle, the automobile, the ocean liner, the dirigible, the aeroplane, the cinema, the great newspaper (synthesis of a day in the world’s life) without realizing that these various forms of communication, transformation, and information have a decisive effect on their psyches.” Later on he speaks of an earth shrunk by speed and of the global awareness thus engendered. Like futurists today, Marinetti had no use for the past but rather tried to glimpse the operating system of the global village to come: “The earth shrunk by speed. New sense of the world. To be precise: one after the other, man gained the sense of his home, of the district where he lived, of his region, and finally of his continent. Today he is aware of the whole world. He hardly needs to know what his ancestors did, but he has a constant need to know what his contemporaries are doing all over the world.”

We’re not so far here from the Internet-based paradise of perfect information prophesied by digital seers like George Gilder. And not far, either, from Peter Drucker’s conviction that information is the new property, the new stuff. Marinetti’s cultural campaign, in fact, makes sense only if we assume that such a world already exists: one in which the real scarce commodity will always be human attention and in which attracting that attention will be the necessary precondition of social change. And the real source of wealth. Marinetti’s conviction that attention was the vital stuff ran so deep that it went without saying. Everything he did implied it.


Figure 2.1. Carlo Carrà, "Words-in-Freedom." From Lacerba, 1914.

Look at how this worked out on a small scale, in his declaration of war on conventional typography. One favorite battleground of this war was Lacerba, a revolutionary Italian journal published between 1913 and 1915. A page from it can be seen in figure 2.1.

Why would anyone want to construct such a ransom-note pastiche? The usual explanation—conventional typography symbolizes bourgeois convention, which the avant-garde exists to épater—works well enough here. That’s what the journal was all about, after all, and what Marinetti certainly yearned to do. He called it “spitting on the altar of art.” But might there be another lesson lurking here? Who, or what, is actually getting spat upon?

It helps if you don’t know Italian and look only at the visual pattern. Conventional printed typography aims to create a particular economy of attention, but, since this economy is so ubiquitous (it is the basic reality of reading), we have long since ceased to notice it. Print wants us to concentrate on the content, to enhance and protect conceptual thought. It does this by filtering out all the signals that might interfere with such thinking. By nature a silent medium and, for people of my generation at least, best read in a silent environment, print filters out any auditory signal. It also filters out color, printing only black on white. By choosing a single font and a single size, it filters out visual distraction as well. Typographical design aims not to be seen or, more accurately, since true invisibility is hard to read, to seem not to be seen, not to be noticed. We don’t notice the verbal surface at all; we plunge without typographical self-consciousness right into the meaning.

Print, that is, constructs a particular economy of attention, an economy of sensory denial. It economizes on most of the things we use to orient ourselves in the world we’ve evolved in—three-dimensional spatial signals, sounds, colors, movement—in order to spend all our attention on abstract thinking. The “abstraction” can be abstruse philosophy, but it can also be a particolored landscape description. Doesn’t matter. They both work within the same economy, one that foregrounds “meaning” in the same way that a goods economy foregrounds stuff you can drop on your foot.

The Lacerba typographical manifesto makes us aware of that “invisible” convention, forces us to notice it as a convention. By breaking all the established rules, it makes us notice them, look at them rather than through them. It makes an economic observation that is an attack not on a particular economic class but on a particular economy of attention. It aims to make us economists of expression.

In conventional typographical text, meaning is created through syntactical and grammatical relationships. In figure 2.1, “meaning,” such as it is, is created by visual relationships that pun on the meaning of the words. One example: on the right side, halfway down, “Gravitare” (to gravitate, tend toward) “of perpendicular masses onto the horizontal plane of my little table.” But little table gets a big bold font and an even bigger T, which is a letter and a table at the same time. We read the words for meaning—we can’t help doing that—but we are made to “read” them for shape as well, and in an uneasy combination. The print economy of attention has been destabilized. It is still there, but it toggles back and forth with a new one.

Marinetti’s spiritual successor was Andy Warhol. Warhol the commercial artist, Warhol the painter, Warhol the filmmaker, Warhol the writer, Warhol the collector, Warhol the philosopher, and, superlatively and climactically, Warhol the celebrity: all these roles float on a sea of commentary, nowadays mostly hagiographical. Let’s try, as a perspective by incongruity, to describe Andy Warhol as an economist, an economist of attention. And perhaps the perspective would not in fact seem so incongruous to him. Here’s what he said about the relation of art to business: “Business art is the step that comes after Art. I started as a commercial artist, and I want to finish as a business artist. . . . Being good in business is the most fascinating kind of art . . . making money is art and working is art and good business is the best art.”

Warhol was an avid collector of stuff. His last house was so stuffed with his collected stuff, from cookie jars to diamonds, that there was no room left for the people. He would have been delighted, had he been able to attend Sotheby’s auction of it all after his death, to see it knocked down for nearly $27 million, far more than the pre-auction estimates. And to see his silk-screen painting Marilyn Monroe Twenty Times (the actress’s face, taken from a publicity photo, silk-screened onto canvas twenty times) fetch nearly $4 million. He did not share the conventional liberal intellectual’s distaste for stuff and the advertising of stuff. It was his life’s work to illustrate the paradoxical relationship of stuff and attention.

Warhol used to ask his friends what he should paint. One friend suggested that he should paint what he liked best in the world. So he began to paint money. This wasn’t what he truly liked best in the world, however. That was attention. But you couldn’t paint attention, at least not directly. So he went about it indirectly.

He began, in 1960, to paint pictures of Campbell’s soup cans. Never has a single source of inspiration been so commercially exploited. People usually remember him as the painter of a can of tomato soup, but he developed the product far beyond this simple notion. His soup cans “had legs.” He painted pictures of the different kinds of soup—vegetable beef, beef noodle, black bean—in single portraits and in a group of two hundred that seemed, at least, to run through all the flavors. He painted them half-opened, crushed, in the act of being opened, with torn labels, without a label (you know it is a soup can because the caption tells you so), stuffed with money, and so on. Most were photorealistic in technique, but a few were sketches. He then made exact models in wood of the boxes that the soup cans came in, along with the now-famous Brillo and Heinz ketchup boxes. These boxes then made wonderful gallery shows, stacked in various new and exciting ways. How’s that for brand-name exploitation?

When he began, the New York galleries would not show him. You can’t blame them. The great pop explosion of the 1960s, the style that took the attention economy as its central subject, had not yet occurred, and nobody knew what to make of this new genre of mass-produced commercial still life. And so it was left to the Ferus Gallery in Los Angeles to mount the first Campbell soup can show in 1962. Let that show stand for many to follow. What happened there? Like Duchamp with his urinal, Warhol put a banal object in an alien attention structure. An art gallery, public or private, is a place to which we come with a definite set of expectations. Duchamp mocked these expectations; like Marinetti, he was spitting on the altar of art. Not young Andy. No disrespect intended either for the soup or the public who looked at it. No meaning, in fact, at all. What you saw was what you got. He never pretended otherwise.

The surface, he said, was all there was. He sang not of the soup but of the can it came in. Obviously no art critic could be content with this dead-end candor. Those soup cans had to mean something. You could repeat the mantra of “art for art’s sake,” but no critic could actually accept this as truth because it leaves the critic no function. There had to be some reason why the soup cans were put into an art gallery, why we were asked to admire their beauty, even take one home and hang it over the mantelpiece. There had to be some soup in the can. And so all the interpretive machinery, professional and amateur, went into action. The soup cans represented the detritus of consumerist capitalism, its vacuous tastelessness, etc. Or the tastelessness of modern mass-prepared foods. Or they represented the signage with which we are surrounded these days, no less fitting a subject for a still life than a dish of pears was for Renoir. Or, since the paintings were all the same, they represented the sterility of mass production. Or they allegorized the bankruptcy of the masterpiece tradition in Western art, a tradition based on skill of hand and beauty of form. Or, quite to the contrary, because formal decisions were required to transfer the soup can labels to canvas, they represented an exquisite case of ever-so-slight formal transformations that elicited the beauty implicit in the Campbell’s label, lent it a tailor-made beauty the store-bought can did not possess.

It took time for this flood of commentary to flow downstream. Meanwhile, when the show was still up at the Ferus Gallery, another gallery close by put some real soup cans on display, suggesting that you could get the real McCoy for much less. Nice comparison. What did the exhibit do that a local grocery store could not? It created a powerful yet economical attention trap. A maximum of commentary was created by a minimum of effort. Subject? Off the shelf. Basic design? Off the shelf. Technique? Ditto. Replication? Silk screen, off the shelf too. Thought, allegory, philosophy, iconography, meaning? Nothing in that line required at all. Drafting ability? De minimis. The meaning, since this was an attention trap, would be supplied by all the interpreters waiting out there to make sense of such artifacts. For them, the more puzzling or outrageous the artifact, the better. Altogether, a dynamite niche product at a bargain-basement cost.

But hasn’t it always been so? No. Attention traps had been tried before—Rabelais set them for his humanist explicators all the time—but they could come into their own only when there was a powerful and established Interpretive Bureaucracy of Attention Economists waiting there to be used. The Interpretive Bureaucracy was what made pop art such a success. Made it possible, in fact. The right cultural judo expert could make use of all that established power to get talked about, to get famous. And if asked about the meaning of it all, as Warhol repeatedly was, he could make up the meanings expected (I was raised on Campbell’s soup. I had it for lunch every day. I love it.). Or he could shrug and say that there wasn’t any meaning. What you saw was what you got. The surface was the meaning. Once the Interpretive Bureaucracy got started, it didn’t matter. Given its relentless seriousness, the bureaucracy could philosophize surface as well as depth. And so what if Andy did say one thing one day and contradict himself the next? More grist for the mill. That’s how an attention artist works.

So there was a way to paint “attention.” You had only to add the right enzyme to a preexistent mixture. Then that enzyme—and a soup can would do as well as anything else—could represent the subsequent interpretive conversation. It would, as time passed, embody a complex attention structure, an entire cultural moment in the same way that, say, Barbie dolls do.

Once the attention-trap formula was worked out, it was easy to apply it elsewhere, to the celebrity portraits, for example. The day after the Ferus Gallery show closed, Marilyn Monroe died. David Bourdon describes what happened next: “Within a few days of Monroe’s death, Warhol purchased a 1950s publicity photograph of her and, after cropping it below her chin, had it converted without any alteration into a silkscreen. The silkscreen enabled him to imprint her portrait hundreds of times onto various canvases. He screened her face one time only on small, individual canvases, and repeated it—twice, four times, six times, twenty times—on larger canvasses, positioning the heads in rows to create an allover pattern.”

Marilyn was already a cultural icon, and her death ensured that the golden hair would never gray. Here was an attention trap already made, waiting to be exploited. Its power could, in a simple judo throw, be harnessed for mass production. Some of the silk screenings were out of register, blurring the image, but that only individuated the various iterations. Again, it was such an economical, such a profitable and efficient, way to paint attention. A 1950s publicity shot, silk-screen technology, and you were ready for mass production. Vary the size, the number of iterations, the color; actually induce the off-register blurring: all these were signs of real artistic creation and cried out for interpretation. The next step? Obvious. Extend the franchise to other celebrities. Get them into a contest to have their faces replicated.

Thus was attention converted into money by instantiating the attention in physical objects, stuff. The ingeniousness of the solution should not blind us to the difficulty of the problem. The Internet dot-coms have not yet solved it, and this indeed may be what shortened their lives. Information, digital or otherwise, is not like stuff. You can eat your cake, let somebody else eat it too, and you both still have it. Books are a great way to bring information down to earth in a salable product. Warhol found a way to bring a certain category of information—somebody else’s celebrity, maybe even celebrity itself—down to earth in salable products. And no one sued him for copyright infringement, or unauthorized use of personal image, or trademark violation. All these starlets lined up to be violated. An amazing business coup.

The celebrity portraits, like the celebrities themselves, drew their power from Homo sapiens’ fondness for the centripetal gaze. We love looking at movie stars, sports stars, royalty. We simply cannot get enough of it. Louis XIV based the plan of Versailles on this centripetal gaze (all the alleyways radiated out from the king’s bedroom) and palace plans ever since have striven for the same visible ego enhancement.

The centripetal gaze, the flow of energy from the margins of a society to its center of attention, creates by its nature the winner-take-all society. To be one of the winners who took all, Andy knew, he had to create a public personality that would function as an attention trap as efficient as his artwork. As he himself said of his endless party-going and art-going: “But then, we weren’t just at the art exhibit—we were the art exhibit, we were the art incarnate and the sixties were really about people, not about what they did.” Such self-dramatization is as familiar as Douglas MacArthur’s corncob pipe or General Patton’s ivory-handled six-shooters. Andy’s social self stood out from the crowd, however. The celebrity press is built on trying to find out about the private selves of the social selves, the celebs’ scintillating inner lives. Andy preempted this effort. He had, he kept saying, no central self, no private self to peer into. As with the soup cans, he was pure wysiwyg (what you see is what you get). He aimed to impersonate a purely social, two-dimensional self with no central interiority other than the ambition to be rich and famous.

A more resonant incarnation of his time would be hard to contrive. And, apparently, he didn’t need to contrive it. He was naturally shallow, selfish, and unreflective, a person who would let his kind old mom take care of him for much of his life and then not bother to go to her funeral. Like Henry VIII, he was a genius at playing off the members of his entourage (and what a gallery of grotesques the Warhol entourage comprised) against one another. True enough. But he was unusual, truly unusual, in not pretending otherwise. Each time he was asked about his early life, he sketched a new one. He even sent an impersonator on a college lecture tour for him, explaining when the imposture was exposed that the impersonator was much better at saying the kinds of things college audiences expected to hear. The customer is never wrong! The colleges asked for their money back or a visit by the real Warhol. When the real Warhol did come and was asked if he was the real Warhol, he answered no. He was a creature of the surface and happy to be so. “If you want to know all about Andy Warhol, just look at the surface of my paintings and films and me, and there I am. There’s nothing behind it.” The question of a “real” Andy, like the question of meaning in his painting, simply didn’t arise. In a pure economics of attention, one of his college-attuned impostors might have replied for him, such questions simply make no sense.

In Andy Warhol, then, our perpetual hunger for sincerity was finally given a rest. If you looked only at the surface, and if the surface was all there was, you did not need to peer beneath it. He was all package. That’s why he knew a genuine celebrity like Judy Garland when he met her: “To meet a person like Judy [Garland] whose real was so unreal was a thrilling thing. She could turn everything on and off in a second; she was the greatest actress you could imagine every minute of her life.” But what was such candor but another attention trap? The more he confessed that he had no central self, except hunger for the centripetal gaze, the more the celebrity-interpreting bureaucracy would try to pry out, or synthesize, a central self. There had to be one, just as there had to be soup in the can. Otherwise they would be out of a job. So also with the celebrity writers who perpetually searched for the “real Marilyn” or the “real Princess Diana.”

Warhol once remarked, “That’s what so many people never understood about us. They expected us to take the things we believed in seriously, which we never did—we weren’t intellectuals.” He was not lying but he was not telling the truth either. He did take the economics of attention seriously. That seriousness, however, differed from the kind the “intellectuals” operated under. They were always looking through the self-conscious surface of things to find the meaning hidden there. He was always looking at the surface instead.

We can see, too, that he understood the paradox of stuff. The stuff you dig out of the earth’s crust becomes, in an information economy, less important than the information that informs it, what you think about the stuff. Yet the more you ponder that information, the more you understand about that stuff, the more real the stuff becomes. To put it in terms of the art world Andy lived in, the more you see that style matters more than substance, the more you see the vital role, the vitality, of substance. So, like Andy, you pursue your twin hungers: for the spotlight and for collecting stuff, knowing that each needs the other to make it real.

Let’s summarize the rules of attention-economy art as Andy practiced them:

Build attention traps. Create value by manipulating the ruling attention structures. Judo, not brute force, gets the best results. Duchamp did this for a joke. Do it for a business.

Understand the logic of the centripetal gaze and how to profit from it.

Draw your inspiration from your audience, not your muse. And keep in touch with that audience. The customer is always right. No Olympian artistic ego need apply.

Turn the “masterpiece psychology” of conventional art upside down:

Mass production, not skilled handwork

Mass audience, not connoisseurship

Trendiness, not timelessness

Repetition, not rarity

Objects do matter. Don’t leave the world of stuff behind while you float off in cyberspace. Conceptual art gets you nowhere. Create stuff you can sell.

Live in the present. That’s where the value is added. Don’t build your house in eternity. “My work has no future at all. I know that. A few years. Of course my things will mean nothing.”

---
Copyright notice: ©2006 by the University of Chicago. All rights reserved. This text may be used and shared in accordance with the fair-use provisions of U.S. copyright law, and it may be archived and redistributed in electronic form, provided that this entire notice, including copyright information, is carried and provided that the University of Chicago Press is notified and no fee is charged for access. Archiving, redistribution, or republication of this text on other terms, in any medium, requires the consent of the University of Chicago Press.