It’s a snowy February Monday in midtown Manhattan. Publishing magnate and tech guru Tim O’Reilly’s “Tools of Change” conference has just opened at a Marriott off Broadway. The timing is fortunate; publishers HarperCollins and Random House have just announced that they will be offering more book content online, free of charge. The affable O’Reilly—who has been urging publishers to go digital since the early eighties—refuses to gloat (much). “They weren’t even trying to keep electronic copies [of manuscripts],” recalls O’Reilly. “You look at these announcements today, they seem too little too late,… but it’s allowing them to start innovating, to become part of the technology process.”
“Twenty years ago, people wouldn’t have listened,” says Sara Domville, president of F+W Publications book division. “They’ll listen now.”
As the publisher of an extremely popular series of computer manuals, O’Reilly is a bright star in a field of drab. Dubbed the “guru of the participation age” by Steven Levy in a 2005 Wired profile and a “graying hippie” with a “hostility toward traditional media” by author Andrew Keen, O’Reilly makes millions of dollars promoting open source at his conferences and selling do-it-yourself know-how to anyone who browses the computer aisle at Barnes & Noble. His message to the world’s publishing elite exudes a Wizard of Oz simplicity: Give more product away on your Web site, thereby attracting more people to whom you can sell something pricier than a book—like a bunch of books or a conference ticket. The approach works for him, at least. Some 900 publishing execs from Simon and Schuster, Norton, etc., have paid $1,100 apiece (on average) to learn how to give content away.
“I think I’m optimistic,” said Sonia Nash of Random House, echoing the uncertainty of the attendees, editors, and publishers from around the world eager to find some reason to feel good about the future of what they sell.
Reading, Writing, and Publishing in the 21st Century
For people who make their living selling words to readers—and indeed for readers themselves—these are times of upheaval. The information technology revolution has led to an explosion in textual content. More people are engaging in more conversations, sharing more opinions, learning more, and learning faster than anyone could have imagined just a few decades ago. The site Blogherald.com counted more than 100 million blogs as of October 2005. According to the Pew Internet and American Life Project, 93% of U.S. teens aged 12–17 used the Internet in 2006; among them, 64% had created content, up from 57% in 2004. We’ve entered an era where the acts of thinking, writing, and to a certain extent publishing are indistinguishable, and where charging money for editorial content is becoming an ever trickier proposition. Book publishers, newspapers and magazines, writers, and readers are experiencing these same IT trends in very different ways.
For readers and the educated curious, the information revolution means immediate access to thousands of sources for minimal cost. It also means the opportunity to become a source, trustworthy or otherwise, and to share an opinion with the world the second the whim strikes to do so.
For many writers, particularly nonfiction writers, it means leaving newsrooms (often reluctantly) to join the online world of blogs, vlogs, and RSS feeds where the pace of news is accelerated, traditional journalism practice is routinely scoffed at, and the pay is modest at best. Even popular bloggers like Om Malik, senior writer for Business 2.0 magazine, report that the money from ad clicks related to their blog content is barely enough to cover the cost of blogging. A recent New York Times article points out that, due to growing competition, many bloggers feel they have to copy, paste, and post almost 24 hours a day. Bloggers complain of sleep disorders and weight loss. Three tech bloggers have died in the last few months from maladies associated with work exhaustion.
For many magazine and newspaper publishers, the goal now is to transition into a more Web-focused business model quickly. For book publishers, the mission is to make an industry built on a fifteenth-century technology viable in the twenty-first century. That means reinventing the concept of the book for the digital age. Theirs is perhaps the biggest challenge.
Many of the execs at the Tools of Change conference hope that O’Reilly can lead them to greener pastures. He’s secured a permanent place in history for coining “Web 2.0,” shorthand for the user-driven Internet and the culture of snarky chat-room speak, scandalous celebrity photos, and YouTube flameups that goes with it. Kids today talk, breathe, and sweat Web 2.0. But are emoticon-laden screeds and homemade YouTube videos going to save “the book” or just further our cultural transition away from print literacy?
Is O’Reilly—unwittingly perhaps— selling a Trojan horse?
To get a sense of how market trends are affecting publishing, it’s important to understand how those trends break from, but also mirror, those of the past. In the 1700s, when booksellers came to replace the aristocracy as the primary patrons of the literary arts in England, writers such as Henry Fielding and the Irish poet Oliver Goldsmith decried the phenomenon as a “fatal revolution.”
Fielding wrote a dramatic satire on the subject called The Author’s Farce, and Goldsmith railed ceaselessly against book merchants even as he profited by them. To wit: “You cannot but be sensible, gentlemen, that a reformation in literature was never more necessary than at the present juncture, when wit is sold by the yard, and a journeyman-author paid like a journeyman-tailor.” Goldsmith argued that, while booksellers commissioned a greater number of works than did the aristocrats, the quality of writing was deeply compromised by the process of commercialization; literature was suffering a spiritual death as a result.
Today’s fatal revolution takes the opposite form. Publishers are concerned that market forces will hurt their ability to sell carefully edited and packaged material, or compel them to relinquish editorial control to readers—essentially outsourcing their jobs to the masses. According to Sara Nelson of Publishers Weekly, giving up control will be a hard sell. “There’s a lot of snobbery in the book business,” she said. “Book publishers are used to being the people who decide what others get to read. That’s breaking down against their will.”
Recent data shows just how resistant publishers are to today’s trends. While the world of online content is expanding (more content coming from more people all the time), the world of published books is contracting. Book publishers are finding it harder to back first-time authors, authors who aren’t already famous, or even established writers who aren’t selling. In June 2007, U.S. publishers reported a 3% increase in the number of titles released for 2006, but that followed double-digit declines in titles released during 2004 and 2005. Extrapolate those numbers forward and you arrive at a future where the books that are brought to print serve not to bring new authors and ideas to the public but to commemorate the already famous in book form.
Seth Godin, marketing expert and author of Survival Is Not Enough and Free Prize Inside, contends that that future is already here. “The book is a souvenir,” he excitedly told his Monday morning audience. “Once you realize you’re in the souvenir business, you’ll play by different rules.”
To Douglas Rushkoff, best-selling author of Media Virus and Innovation from the Inside Out, the key to book publishing in the future is recognizing that readers are after more than information. They’re seeking an intellectual connection with an author and a community experience organized around an idea. Publishers have to look at what is still scarce, says Rushkoff. “While the book isn’t scarce, I’m scarce. I can only be in so many places. So there are a lot of different experiences that attend the book that [readers] should be participating in, to think about the book as a way to promote a set of ideas. How to work with those ideas is limited.”
Those attendant experiences can include lectures, classes, even parties. The more personal the experience, the more people are willing to pay for it. The book party is hardly a new invention. The trick, said Rushkoff, is to think of the books as marketing materials for the event and the author, not the event as marketing for the book. “Every time I do a big talk, I have to arm wrestle the publisher to help me sell a thousand, five thousand books to the people who want them… They want them at a discount because they want 5,000 copies. In reality, if I’m going to get $5,000 on a talk, my publisher should get half of that money and should help me administrate the talk and get books out.” Frank Daniels, COO of Ingram’s digital group, believes U.S. publishers have a bright future, but only if they think of themselves as purveyors of information packages rather than printers.
“Right now, when you say ‘book’ you’re thinking of 300 pages bound in something, delivered, and consumed in one period of time,” said Daniels. “What is the new metaphor for the book? Is it something that exists more like a TV show? That’s episodic? That’s consumed in one sitting, or later? Or is it something more like a Web site and a Web environment? You both own it and subscribe to it? As we create a platform where editors can be creative in their thinking process, then we will truly begin to break the metaphor for the book and have an experience for our consumers that they will consume and continue to consume.”
According to Daniels, textbooks (hardly the most glamorous fiefdom in the book kingdom) actually represent the best opportunity to bring a wide variety of talents—audio creation, film, even acting—to bear on the job of publishing, leading to a variety of possible monetary transactions that can occur around a single product or “book.”
“The beauty about the Kindle,” Daniels said of Amazon’s newest handheld e-reader, “isn’t that the device is great. The device is terrible. But the buying experience is wonderful.… We want to make it so that it’s as easy to do digital stuff as it is to buy a book. As we get that done, publishers will say, ‘Yes. There is a market there. I now want to add some video and audio and create a truly different metaphor.’”
O’Reilly agrees that transcending the traditional concept of the book will be essential for publishers in the decades ahead. “A lot of people think of publishing as printing books on paper, paper between covers, those objects in bookstores,” he said. “I always thought that publishing was about, first of all, understanding what matters, figuring out how to gather information, and then gathering readers who that information matters to. There’s a kind of curation process. What the Internet has done is bring us new methods of curation,” meaning, presumably, finding publishable material and refining it. “A lot of publishers are fighting those models instead of saying ‘This is new stuff that helps us do what we do better.’”
The first step for publishers, according to O’Reilly, is to realize that the information coming from readers is as valuable as the information they give to readers in book form. The next step is to involve the reader in the publishing process.
“A really great example,” he said, “is a session here from a company called Logos Bible Software. Who would think of these guys as doing really cool stuff? They basically publish electronic editions of really obscure religious texts or scholarly texts that are used by people in religion. Would you like your Liddell and Scott’s Greek–English Lexicon online? Guess how they do it? They basically have community pricing software where they have people vote on whether or not they would buy the book.… They’re actually harnessing the community to set prices and to tell them which products to publish. That’s cutting edge.”
Stephen Abram, a past president of the Canadian Library Association, took the argument a step further.
Publishers, he said, need to “stop telling and start listening, to start working from the reader’s, the user’s, the experiencer’s contact in. Then they can start creating the products that actually match the behaviors of their user base. In many markets, the traditional publishing formats are misaligned with what needs to happen.”
The reading of static text is a poor substitute for a visceral experience and always has been, said Abram. Plain text sufficed because there was no alternative, no superior way to convey complex data. That’s changed. Abram argued that it’s up to publishers to pick among the available media tools—including video clips, audio files, even virtual reality—and pull them together into a package that facilitates learning, not just reading.
“Do you want your cardiac surgeon to walk into your room before he does your surgery and say, ‘I read the article last night’? No, you want him to have had a thousand experiences putting his hand in someone’s chest and know what it feels like. It should be just like an experience a car mechanic has where he can put his hand on the hood of your car and say it’s the manifold because he’s seen it, heard it, smelled it a thousand times.”
The future of the book as conjured by O’Reilly, Abram, and Daniels is one where the end-users (what used to be called “readers”) give specifications to an editor for the product they want—a combination of movie clips, animations, software applications, possibly even games functioning in a multifaceted learning product accessible across platforms.
The question becomes, who does the writing?
The Future Writer
In the 1990s, most news publishers responded to the rise of the Web with the assumption that Web sites could serve as advertisements for newspapers, but weren’t likely to replace papers themselves. Others began tweaking their business models to fit what they saw as the new market conditions—creating more content and giving more of it away, incrementally. A few, like Daniels, who was working at The News and Observer in Raleigh, North Carolina, in 1993, saw the Internet as a game-changing technology for journalism.
“When Mosaic came out, we began doing Web sites and publishing on the Internet,” Daniels said. “What we learned was that it’s all around story. It’s about creating the opportunity for consumers to get information in the most effective way possible— what they want, when they want it, at the time they most need it. We began telling [news] stories in multimedia with audio and interactive graphics and video back in 1994.… We sold all our newspapers but one in 1995.”
Across the United States, newspapers and magazines are focusing their resources more and more on their Web sites. In the process, they’re giving voice to an entirely new breed of digital journalist even as they show the door to news department veterans. Many writers are justifiably alarmed by the shift, but, according to Daniels, writers who are willing to view themselves as storytellers first and foremost, who are eager to incorporate new technology into the writing process, have a bright future.
“What’s the biggest thing that held up Hollywood in recent years?” he asked. “The writers went on strike. They’re the ones who put the story together.”
Read differently, in relation to books, this means that the cloistered author—holed up in some wooded cabin to perfect his or her tome—has become an artifact of history. Today, cultivating and communicating with a reader base has become an essential component of building a platform and positioning oneself as viable to publishers, particularly for nonfiction. O’Reilly reported that more than 70% of literary agents advise their clients to blog at least five hours a week.
Beyond blogging, this means that the writers of the future (both fiction and nonfiction) will work with Web designers, software writers, and other professionals to create product.
The more ambitious among them will likely start the process before ever approaching an editor with a manuscript. For many young writers, and writers outside of the United States, that’s already standard practice.
Japanese novelist Yoshi became a cross-continent media sensation when a novel he had first texted into his cell phone became so popular that it went on to sell more than 2 million copies as a printed book. He’s not alone; half of Japan’s top 10 best-selling books last year started out as cell phone novels.
The good news is, you don’t have to be a Tokyo text-novelist to take advantage of technology as a writer. In a conversation last fall, Lewis Lapham, historian and long-time editor of Harper’s magazine, acknowledged that his first response to electronic media was “denunciation.” Lapham’s newest publishing venture, a journal called Lapham’s Quarterly, represents his first foray into multimedia content generation.
“The Web site is not just a translation of the journal onto the Web. They’re different. They’re different media. I’m trying to do an analogous thing, give a sense of the past. But I’m also doing a radio program with the same premise. These are separate media. There are things you can do on radio, on the Web, that you can’t do in print and vice versa.”
To Lapham, the crudeness, silliness, and uncultured quality of today’s Web culture is a symptom of the immaturity of the new medium and the youthfulness of its users. The change will be gradual. “We’re still playing with it like it’s a toy,” he said of the Web. “We don’t yet know how to make art with it. McLuhan points out that the printing press was 1468, it’s a hundred years before you get to Cervantes, to Shakespeare.”
The Same Language
Shakespeare seems a far cry from where we are today. An unremarked consequence of our new information age—one that will influence readers, writers, and publishers in the future—is that bad writing, chat speak, text-message shorthand, and millions of message board posts that come from nowhere and lead nowhere are having a cheapening effect on all written content. As veteran journalist Mitchell Stephens has pointed out: “Editors and news directors today fret about the Internet as their predecessors worried about radio and TV, and all now see the huge threat the Web represents to the way they distribute their product. They have been slower to see the threat it represents to the product itself. In a day when information pours out of digital spigots, stories that package painstakingly gathered facts on current events—what happened, who said what, when—have lost much of their value.”
Consider French management professor Philip M. Parker’s recent invention of a system that can aggregate all the available information on a subject into book form in just minutes. According to the New York Times, Parker has “written” more than 200,000 such books.
The idea that the practice and craft of writing can simply retool itself for the digital age overlooks the fact that the Web is giving rise to totally unique forms of expression, a writing that is different from the kind traditionally found in books. YouTube clips, animations, and other video applications account for more than 60% of Internet traffic today. The proportion is growing rapidly. While very little of this visual content has any monetary value (the most ad revenue a YouTube clip has made has been $25,000), there is a seemingly ceaseless supply of it, as well as numerous vehicles for distribution.
What does Web 2.0 portend for the written word itself? O’Reilly has an opinion on that, too. In addition to being a Web enthusiast, he is himself a bibliophile. He graduated from Harvard, won a grant from the National Endowment for the Arts to translate Greek fables in 1976, and later wrote a biography of Dune author Frank Herbert. He disagrees with the notion that the Internet is spurring us toward something like a postliterate age, but he acknowledges that technology inevitably changes that which it comes in contact with, including forms of communication.
“Look at Notre-Dame de Paris,” he said. “The novel is not about the hunchback so much as it is about the church, and the idea of sculpture as a way of communicating stories. In the preliterate era they told the stories through these churches.… Victor Hugo was lamenting the loss of that stone literacy, where people would look up at the church and know what it was about. Yes, something was lost. But we gained a lot. I remember a conversation I had at our open source convention with Freeman Dyson, the physicist. He said something wonderful; someone asked him what do you think about the fact that we were losing something or other, and he said, ‘We have to forget, otherwise there would be no room for new things.’ That’s an important thing to take.… Be accepting of the losses and the gains.”
“Reading isn’t going to go away,” agreed Abram, “but it’s only one aspect. Probably, it will be some combination of reading, visual conversations, and lessons. What you’re authoring is contributing to a corpus that is significantly larger than it is now, electronically. Most of the important stuff will have been converted 20 years from now. We can convert the entire Library of Congress for $9 billion right now, which, in terms of national priorities, is only five weeks of Iraqi conflict. It’s doable. It used to be undoable. The corpus, the ability to create cultural context, is going to change the nature of how culture is expressed.”
Lapham was likewise dismissive of the notion that IT is bringing us to the brink of postliteracy, but he acknowledged that written material will likely never regain the cultural primacy it enjoyed in earlier centuries. “The written word will survive because there are things you can do with the written word that you simply cannot do with film or with radio. I don’t know if it will be a mass medium,” said Lapham. “The large majority of mankind is passive. The change comes from the active minority. Those people will continue to read. Books will continue to be read. Maybe the more popular forms of writing will be taken over by video games. But it’s up to members of your generation to teach young people how to read and what the difference is between reading literature and sifting data.”
Rushkoff sees new kinds of information systems springing to life next to writing, and sees this as part of a grand evolution in human communication. “Just because things became written down, we didn’t lose oral culture,” said Rushkoff. “Read Walter Ong [author of Orality and Literacy: The Technologizing of the Word]. We changed, but we still talk to each other, dance for each other. We do them in different situations. The written word is cool. It’s for a certain kind of thing. The more media we have to exchange, the better we understand what the biases are. The written word is abstract, contractual. It launched monotheism, ethics; it launched evolution. It was really important for a lot of things, and that will remain. But visual media will lead to other kinds of insights.”
For lovers of literary writing, who are now watching the marketplace and Internet erode the remains of nineteenth-century print culture, these assurances may not be particularly consoling. We have no choice but to accept them. Arguing against the forces of digitalization is as much a losing battle as cursing the coming of the evening tide. But before we invest ourselves too deeply in this future, consider this: If new technologies expose the biases inherent in print and text, the converse is true as well: the written word is uniquely suited to revealing the myopias of our digital age. If poor old Oliver Goldsmith were alive today, he might argue that critical reading abilities, cultural literacy, and traditional literacy were never more vital than at present, when Linux writers are regarded as the modern incarnation of holy monks and the newest Facebook application is treated with the deference of an illuminated manuscript. Coding skills are highly marketable in the twenty-first century. We, as a civilization, are duty-bound to encourage technological know-how. However, before we make the mistake of convincing ourselves that a knack for writing software is more valuable than the ability to simply write well, we might consider looking anew at the souvenir that is the book. One day, computer programs—these objects of our fascination and frustration—will learn to write themselves. And we’ll be left with our ideas, however grand or shallow.
Originally published in THE FUTURIST, July-August 2008.