Closing the book: A spiral of advancing strategy

Contents of this chapter

Distancing (2020)
The best brand to have (mid 2010s)
The exhausting wheel: Continuous publishing (2010s)
Electronic publishing and trailblazing with integrated media (2000s)
Experimentation and expansions of series (mid 1990s)

Distancing (2020)

Photo of a typical O’Reilly business card and my personalized card case

“Were you surprised?”

Friends often asked this question, usually somewhere between the formulaic expressions of sympathy and the generic offers of assistance that are required in such situations. Their question was hard to answer. On the one hand, I didn’t expect my twenty-eight-year tenure at O’Reilly Media to end as it did. On the other, signs of difficulties in our business environment had poked out their snouts during our most recent company meeting, three weeks before the layoff.

Every month or two, management invited employees up and down the hierarchy to an all-hands meeting. There the chief executive and financial officers, demonstrating the respect and trust that management had for their staff, shared sensitive financial data. They also used the all-hands meetings to rally us around the latest refinement of their strategy.

I took the subway down to join the audience in the Boston office. My wife Judy had traveled to Washington, DC a week before to share the sorrow of her family after the death of an elderly cousin. The next day, I would join her on a trip that, a week later, we were afraid to return from, because in a twinkling the COVID-19 virus had erupted as a force from which we could no longer hide.

This all-hands meeting was in the first week of March, 2020. We all had been anxiously watching the news of COVID-19 challenging South Korea and forcing Italy to adopt strict isolation policies far out of line with its cultural norms. Analytical models later suggested that COVID-19 was already rampant in Boston.

O’Reilly had announced the day before that it could not hold its California data conference, called Strata, which I believe was by then our largest event each year. Only two other computer conferences throughout the world had announced cancellations at that point, so our managers were almost unique in listening to the fears of our speakers and attendees.

Quick legerdemain by our conference team and technical staff rescued pieces of the Strata conference by moving them online. I was impressed once again, as I had been repeatedly over the years, by O’Reilly’s resourcefulness and expertise. Not only could we offer some of Strata’s strong content and training to the otherwise bereft attendees, but we were groping forward toward a model of online conferences that might anchor us in the future.

Here I can provide a bit of history, as I hope to do throughout this memoir, of events forgotten by most observers. O’Reilly had actually tried online conferences several years before, with disappointing results. Having attended dozens of face-to-face conferences and basked in their excitement, I understood why going online strained the medium. I had learned first-hand that the appeal of conferences lay in the magnetic energy that passed between attendees, ricocheting between the formal sessions and the gatherings in dining rooms and hallways.

Some of the most significant moments in my own career took place in the pregnant envelope of new encounters that clustered around conferences. I had shared my hotel room at a major health care conference with one of the leaders in the field, and bonded with him over his story of an amazing recovery from life as a drug pusher and jailbird. In the first hours of a conference on copyright, I met the person who later would give me the material for articles in two prestigious publications. Another conference on free software gave birth to a clever comic book aimed at drawing young people to computing.

Yes, face-to-face gatherings are the best. But the technology for online meetings has improved greatly, while our tolerance for expensive travel and for pouring airplane exhaust into our atmosphere has withered. March 2020 gave us a hint of a world where online conferences would be the only option for meeting.

Caution warned against complacency, though. There is no way for a company hawking online conferences to cajole the money out of attendees and sponsors that they would shell out for a face-to-face convention. The reduced costs of hosting the conference online could not rebalance the equation. Over the next two weeks, countries everywhere discovered the imperative of physical distancing. The global conference circuit fell to the ground with a heavy thud, along with businesses ranging from cruise ships to jazz clubs.

Another part of this memoir lays out some of the long-term impacts, not so obvious, of the closure of O’Reilly’s conference business. The immediate effect was that, on the day when CEO Laura Baldwin announced publicly that the conference business was being disbanded, the company went through a large layoff that went far beyond the conference group. The ax fell in late March, scarcely giving one a chance to turn around after the all-hands meeting. The announcement also happened to come on the day when, on the Jewish calendar, I commemorated the death of my mother many years before. Thus, the yahrzeit of my mother will forever also be the yahrzeit of my career at O’Reilly.

I didn’t immediately know who else had been part of the grand departure, but through the grapevine I finally learned about some of the notable positions eliminated. On hearing some of the names, I could not suppress a cry of amazement. These were people who drove the company forward—as I had done at times—through strategizing, data gathering, writing, editing, and more. Without them, the company might look the same, but it would take a different direction, operating under a new set of parameters.

Similar closures were going on around the world that month, a precipitous loss of jobs unlike anything seen before in history. I was one of 3.3 million people laid off that week. Most were suffering more than I, and it was impossible to focus on personal feelings during that time. Still, this I knew: The layoff deprived me of more than an income and a career in publishing. The O’Reilly that I had known was gone forever.

But what was this O’Reilly? How did it play such an outsized role in the history of computing? Indeed, how did the parallel and intertwining histories of O’Reilly, the computer industry, the Internet, and my own participation in these developments help to create the world we’re in now? This memoir will attempt to cast light on these questions.

I actually got the idea of writing a memoir about my time at O’Reilly about four months before the layoff, in December 2019. Did I have some premonition of what was coming? I doubt it. I was feeling, in fact, more secure over the past year or so than I had felt for many years before. A string of highly successful and critically needed reports from my pen had recently demonstrated my continuing value.

But starting the memoir, which had reached about 7,700 words at the moment of the layoff, was a life-saver for me. Every time I sat down to add a memory to the log, I had to place myself outside O’Reilly, imagining that I no longer worked there. When reality caught up with my speculations, I was psychologically ready to pick up life outside the role that defined nearly three decades of my life.

Programmers, when confronted with an unexpected result, open a debugger and generate a list of the program’s behavior going backward from the end state: a backtrace that lets them examine the data at each point in the program’s run. Similarly, this memoir will unpeel the historical layers in each major area of my career. The present chapter revolves around O’Reilly strategy, which itself evolved gradually in a spiral of unexpected turns.
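
For readers who have never watched this process, here is a minimal sketch in Python; the file name and functions are hypothetical, chosen only to show how the interpreter reports the chain of calls leading backward from a failure.

```python
# A hypothetical failure, reported call by call: the "backtrace" that a
# debugger (or, here, the interpreter itself) produces after something breaks.
import traceback

def load_config(path):
    return open(path).read()              # fails if the file does not exist

def start_service():
    return load_config("settings.conf")   # "settings.conf" is a made-up name

try:
    start_service()
except FileNotFoundError:
    traceback.print_exc()                 # prints the call chain leading to the error
```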

The best brand to have (mid 2010s)

Strategy normally refers to a company’s way of adapting its own activities to changes around it, but our company’s strategy has prompted changes to the whole book publishing industry. During one period, the strategy led management to create an online service as a separate company called Safari. But our boldest forward-looking move in the 2010s was a backward one: to bring Safari in-house again. Shortly after that, managers decided the name Safari must go. The brand under which we would subsequently present everything was O’Reilly, which has persisted through superficial name and logo changes during the company’s whole history.

The choice of O’Reilly over Safari represents a continuing tribute to the founder who put his stamp on our culture and procedures. To emphasize the continuity, I minimize my use of official names (O’Reilly & Associates, O’Reilly Media) and just refer to the company as O’Reilly, a lowest common denominator that is shared in casual usage by employees and constituents alike. Although I’m not here to offer a biography of Tim O’Reilly, some of his traits that justifiably won him fame and followers will emerge through anecdotes that I scatter throughout the memoir.

The company O’Reilly occupied a unique position of trust in computer books during the feverish period of the 1990s, when the Internet was stringing its cables through everything, and when the whole world—technologists, business leaders, and everyday computer users alike—was reading computer books to figure out where the hell they were being led by the new, uncanny intruders into their work and living spaces.

The name O’Reilly became the guarantee of quality as well as relevance. Once, out of curiosity, in the 1990s, my mother went into a bookstore and spoke to the clerk about whether they carried any O’Reilly books. The clerk answered, “O’Reilly is the only publisher customers ever ask for by name.” So it wasn’t just that O’Reilly was the best of a class of publishers. It was more that O’Reilly was one class of publisher, and then there were all the rest.

Contrasts can tell you a lot about what people think. When some newcomer to the publishing industry did something notable, readers might recognize it by saying, “They’re the new O’Reilly” or “They’re doing the commendable work O’Reilly used to do.” Many of these other companies released some good books and did fine in the industry, but there was never a new O’Reilly—except the new company that O’Reilly made itself into from one era to the next.

There was a deeper strategy behind the Safari name change, of course. Safari had marketed itself as an online content provider, but we needed it to be much more: a tool for staff development that would serve our business clients, something of a companion that would marshal their staff and provide education in all media toward the goal of deep business transformations. After the re-acquisition of Safari, O’Reilly could finally implement a plan called the integrated media strategy (IMS), first articulated several years earlier. The company could also wrench itself free from constraining atavisms in the publishing industry, which I will come to describe. Finally, we no longer suffered from confusion with the Apple web browser of the same name. Thus, it was a smart pivot to leave the name Safari behind.

With the re-acquisition, O’Reilly was entering a phase of radical transition that, given the volatility of today’s markets, would probably never abate. The company’s bold online strategy was meant to transform us in the minds of our customers, and therefore in our own day-to-day behavior, from a content provider to a Sherpa or guru who guides customers to their intended destinations. As the strategy came into focus, the company put new employee evaluation tools into place in the late 2010s. By aligning individual incentives with the goals of the company as a whole, we could benefit from holding each person accountable. To bring the goals of every employee into line with our great vision, a new compensation system was installed, and my jousts with it produced an amusing story to end this section.

We’re talking here of the last stage of my career, when my goals were reduced to raw productivity. I devolved into an editing or writing assembly line, fed raw material on one end and spitting out content on the other. Of course, my real responsibilities were reflected only dimly in page counts. I needed to grasp the direction taken by the technology I was writing about, the goals of O’Reilly as well as the sponsors who funded much of my work, and the best ways to reach the reader with complex, abstract concepts. I was an emissary of the company to everyone I encountered. My goals could in theory have reflected this subtle self-presentation, such as rating my ability to “Make sponsors eager to return and pay for more content.” But nothing of this was reflected in my actual goals, and I was very relieved to have it omitted. It’s too hard to determine whether my actions helped or hindered soft goals, such as winning sponsors or even new customers.

The generic, overarching goals given to me, which I was certain to fulfill every year—such as “Produce high-quality drafts based on input from authors or other parties”—should have spared me much interaction with O’Reilly’s compensation software, but I could not stand outside the system. Quarterly measurements required me to deal four times a year with the same opaque, inconsistent, and mind-numbing interface that served others who were involved in planning and carrying out the company’s strategy. And every quarter, something generated an urgent message to me from other staff. Often my attempts to fix the problems were ineffective or worse, and I resorted to asking human resources staff to tap into their sometimes esoteric expertise with the system and do the job for me.

Memorably, one January I was contacted by a distressed human resources person because my achievements for the previous year added up to 150 percent of my goals. This struck me as a resounding success—how could anyone complain? But the system mathematically could not tolerate a success above 100 percent. The cause was obvious: Some goal had been duplicated and counted twice. But I had no idea how I had put the system into that state. Fixing it required a hidden web interface that the human resources team introduced me to. Eventually we got my achievements safely back to the same 100 percent that I had in every quarter.

The exhausting wheel: Continuous publishing (2010s)

The COVID-19 pandemic was not the first time O’Reilly was in danger. During 2001, the era of the September 11 attacks and the dot-com bust, we tumbled into a financial crisis and approached folding or being sold. That’s when Tim O’Reilly brought in Laura Baldwin as chief financial officer. I don’t know what discipline she exerted to turn the company around, but we recovered and rustled up some money to put into new projects. We entered a fertile period of experimentation, some of it successful and some not.

Huge efforts were invested in a process we called continuous publishing. We recognized that the most important projects, especially those fresh out of the egg, moved too fast for us to provide high-quality information in large books developed through standard, methodical procedures. Most of our books took longer than a year to write and release, whereas even a lapse of six months was too long. And we certainly weren’t serving our readers by leaving eighteen months or a couple of years between editions. The need for change reflected several trends in computing.

The squeeze on publishers caused by rapid updates was partly a symptom of the general speed-up in communications caused by the Internet, which furnished so much ground for O’Reilly content. Several related trends, in my estimation, spurred a sudden flowering of new software. Free software communities had learned to work together on large projects. Perhaps an even more important benefit of their collaboration was the prodigious outpouring of new programming libraries that reduced a lot of programming to plucking the proper function out of some free software project.

Programming libraries are simply collections of commonly used programming techniques, usually organized around tasks such as math, web operations, Internet communications, or something else the library’s developer finds useful. The central importance of libraries can be seen in their role in choosing a programming language. Ask a bunch of programmers why they chose the language they’re using on their current project. Few will point to some intrinsic quality of the language. Most will say that it offers a library they need. This reliance on libraries also lends a certain conservatism to programming, pulling back on the field’s continuous lurch toward new languages.
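
A minimal sketch of what this means in practice, using two modules from Python’s standard library (the URL is purely illustrative): the work reduces to plucking out a ready-made function rather than writing the routine yourself.

```python
from statistics import median             # a "math" library
from urllib.request import urlopen        # a "web operations" library

print(median([3, 1, 4, 1, 5]))            # 3: one call instead of a hand-rolled routine

with urlopen("https://example.com/") as response:   # fetch a page in two lines
    print(len(response.read()), "bytes fetched")
```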

Also, in my opinion, sleek new programming languages cut down programming effort, and improved test frameworks reduced the time required to ferret out bugs. I don’t know whether research supports this conclusion. As we’ll see later in the memoir, it’s hard to find research in software engineering that generates trustworthy insights.

For a while, we dealt with the mismatch between reader needs and author capabilities by asking authors to write blog postings about new features, until time came to do a new edition. This workaround didn’t solve the problem—the original book itself would quickly need a thorough update.

So we tried rethinking books significantly. No longer were they the fixed texts we had known through the millennia of books’ existence. Instead, our books should be evolving repositories of the latest information on a topic. Our mission to institute this new viewpoint proved exhausting. It ran aground on both human limitations and hallowed assumptions undergirding the publishing industry, which was in many ways stuck in the nineteenth century.

We were technically and organizationally ready for dynamic updates. (Our authors thought they were too, although we’ll take a closer look at that momentarily.) It was other factors pervading the publishing industry that held us back. The degree to which the publishing industry was stuck in centuries-old thinking was shown by its repeated talk of replacing “brick-and-mortar stores” with online offerings. Very few bookstores are housed in brick and mortar anymore; their buildings are made of concrete and steel.

Part of the new initiative was successful: We could offer customers a series of pre-released online books, which we called early releases or rough cuts, that included whatever chapters were finished at the moment. Each chapter might or might not have undergone editing and tech review before inclusion. Readers appreciated access to this early material, and it proved a powerful selling point for each book. We even accepted comments from readers, but I never saw a useful tech review comment come from an early release. At best, we got reports of trivial typographical errors.

The post-release phase was where our strategy failed. Let me give just a couple examples of the publishing hurdles that led to intense discussions, as we strove toward solutions that didn’t materialize.

First, take the ISBN. That’s the publishing industry’s equivalent of a universal product code (UPC) or European article number (EAN). Most readers don’t associate books with ISBNs—for instance, they don’t usually make it into book reviews. But distributors and retailers depend on the ISBN. It is the treasured seal borne by a book from conception through final sale. A hardcopy book will have one ISBN, the paperback of the same book another ISBN. Each new edition gets a new ISBN. There is no linking or continuity; there is only distinction without nuance.

The rigidity of the ISBN has long been a marketing problem for online sales. For instance, we found that we could build up hundreds of positive reviews on Amazon.com for a book, and then lose them and have to start from scratch when we put out a new edition. Amazon.com by then had established its dominance as the world’s major retailer of books, and was branching out into other areas of commerce. For a while, people searching Amazon.com for a book would pull up an old edition that was highly rated, and fail to see the new edition because it had just started to build up ratings. In a field where buyers treat the recency of information as a key to relevance, that design flaw could kill sales.

As another example of the resistance to our continuous publishing strategy that’s inherent in the book industry, Amazon.com penalized publishers for putting out new editions too often. One of my authors based an excellent and very popular book on some free web-building software, which made it easy for readers to get started with a few clicks. It was his rotten luck that the software was taken down by its distributor right after the book was released. (A problem that could come up only because the software was proprietary: free for download, but not truly free and open source. I’ll explain this further in a later chapter.)

The author quickly found a new freely downloadable package and rewrote the book around it, but the algorithms at Amazon.com flagged the new edition as suspicious. The British subsidiary (alas, the author happened to be in Britain) took the book off its site altogether. In this and other perplexing situations, our marketing team was left with the unpleasant task of explaining arcane retail behavior to irate authors and editors.

In such inhospitable terrain, we couldn’t assign new ISBNs to updates. Supposedly, we had a work-around: Because we could issue books on a print-on-demand basis, so that few or no old copies would lie around the warehouses of Amazon or another distributor, we could simply ship the new version without saying anything. But then, of course, the marketing on the retailer’s site couldn’t reflect the changes, which mattered a great deal to potential readers deciding whether to buy. The retailer couldn’t even indicate to readers that the material was current.

I can’t wrap up my litany of complaints about publishing without a reference to the dreaded ONIX classification system. ONIX is a standard set of keywords used by publishers and retailers throughout the industry to categorize books. Bookstores, including Amazon, relied on these keywords to fit books into their recommendations and search results. So my marketing team repeatedly stressed the critical task of choosing the right keywords. The problem was that authors and editors couldn’t choose keywords freely. We were prisoners of a system that appeared to be some 30 years out of date. Computing subjects that were nearly forgotten decades ago still appeared on the approved list of words, which totally omitted the hot topics on which we were urgently pushing out books. So every time a book came up for release, the requirement to choose ONIX terms made me want to scream.

The mephistophelian ONIX system sharpened my interest in knowledge bases, ontologies, and taxonomies, which a rapidly growing Web was trying to build in various ways. I don’t need to bore you with a list of technologies that have been used to relate pieces of information to one another—to create a rational structure for a world bursting with disorderly innovation—because modern ontologies allow you to find them yourself through web searches. Ontologies are the modern version of Hermann Hesse’s Glass Bead Game, the prophetic technological invention in his final novel. (Hesse, true to his philosophy, rejected the promise of a universal data processing system.) Like other models, no ontology is perfect.

I have to relate another story to explain my cynicism about ontologies. A friend of mine once proposed a book on how to create them. Finding herself going around in circles as she tried to capture her process for developing an ontology, she brought on two other industry leaders. That didn’t help: They just collectively continued to go around in circles. I commented in detail on their drafts, as I did with every author, trying to extract from haystacks of confused text some needles of truth. The authors finally canceled the book, admitting that they didn’t understand their own processes and couldn’t arrange those processes into a set of coherent guidelines.

Back to O’Reilly’s headaches with the publishing industry. How did we finally resolve the issues? We ultimately left traditional publishing (while still allowing our books to sell through retail channels, as a sort of by-product of our content generation) and focused on the online platform, where we had total control and where distribution issues around ONIX and ISBNs were moot.

But continuous publishing wasn’t destined to be a lasting strategy anyway, because of the other consideration I mentioned: human limitations.

When we were trying out continuous publishing, I prepared my authors for it well in advance of completing their books. They generally liked the idea and wanted to keep their books up to date. And why not? They must have loved their topics—otherwise they wouldn’t write about them in the first place. Of course they would keep following developments in their field. And why wouldn’t they want to put in effort to keep their books relevant and selling?

All these commendable intentions fell apart by the time they finished the book. A full-length book is an unfathomably gargantuan undertaking. I had often sensed in my own body the burn-out authors felt toward the end. They needed to recover from the effort, to get back to family, work, and professional commitments. So they just had no mental cycles for keeping a book up to date. Basically, we were asking them to continue the extraordinary, draining process of finishing a book forever.

Occasionally, an author would turn in one desultory update. I think one or two authors actually did try to update their books faithfully, but these books weren’t selling well and the exertion just wasn’t worthwhile. Other editors must have discovered the same problems, because continuous publishing quietly disappeared from company talk after months of intense discussion and preparation.

Continuous publishing, although we couldn’t pull it off, showed our cleverness and agility in responding to a changing market. It was made possible by our unique platform for automatically generating electronic books in multiple formats, and that platform had a crucial role to play later. We return to it in the next section.

Our next innovation to deal with the publishing landscape of the 2000s was Make Magazine, which was thought up and launched in 2005 by Dale Dougherty, second in command to Tim O’Reilly. I believe Make to be one of the most significant responses by our company to issues of the day—more significant perhaps than our championing of open source, because we had a big impact in a relatively young field.

Dougherty was picking up on a new Do It Yourself (DIY) attitude that was entering a number of fields. He focused on grassroots electronic and computing experimentation such as aerial cameras (with some departures into purely mechanical products). He and his team quickly took up the new quick-and-dirty embedded devices enabled by cheap, mass-produced computer boards. Our company separately tracked other grassroots, DIY movements such as biochemistry. I myself visited the BioCurious lab in Silicon Valley, one of the most vibrant outposts of the DIY biohacking movement, and interviewed one of its founders, Eri Gentry.

About the same time Make Magazine started in 2005, an exciting new embedded system for artists, hobbyists, and engineers was released by a university team in Italy. Called the Arduino, it was a processor board that could run on its own and also slip conveniently into mechanical or electrical contraptions. The Arduino could do such things as water houseplants on schedule or watch over a gate and signal the owner when a potential intruder arrived. The Arduino was completely open source hardware. Another popular system called Raspberry Pi came later; it ran Linux and its specs were publicly available. A wealth of free software ran on both. Cheaper and easier to set up than traditional embedded systems, these boards were ideal for educational purposes, experiments, home projects, and prototypes for entrepreneurs with ambitious plans. Arduino and Raspberry Pi exemplified the DIY philosophy and brought it to any activity that could be digitized.

DIY electronics and computing offer huge potential for human development. The research of Eric von Hippel—highly relevant for free software—revealed that many important products were created by the users of commercial offerings, and were incorporated into official products by manufacturers who watched what their customers had created. In other words, innovation that we ascribe to manufacturers often actually comes from their customers.

Tim O’Reilly also had a favorite analogy that came up in his talks over the years. He’d say that a lot of DIY energy early in the twentieth century, up to maybe the 1970s, went into modifying (“modding”) one’s car. The same ingenuity was applied by a subsequent generation to computers.

Dougherty and his team identified, celebrated, and pushed forward this social movement with Make Magazine. They started up a garage-style lab on O’Reilly’s Sebastopol, California campus. To me, visiting the Maker space felt like leaving the dull grouped desks of a conventional office (the Sebastopol managers had adopted the trendy open seating style) and entering a mad scientist’s fantasy lair.

Make Magazine was an extremely audacious venture, attempting to succeed at print publishing in an era when scads of print magazines were moving online or shutting down. This Maker division was probably the most dynamic part of the company during those years. And as Dougherty reported at one company meeting, it wasn’t fair to work for Make Magazine because they had a totally disproportionate share of the fun.

The Maker department also started the Maker Faires, where people could demonstrate truly magical creations. The concept was adopted throughout the world. No verbal description could bring to life the experience of being personally present at these outbursts of talent and technical dexterity. Digital cleverness mingled with wild mechanical concoctions, such as cycles driven by ten people and fire-breathing metal sculptures that recalled the famous Burning Man desert festival.

That’s all I have to say about two of O’Reilly’s innovations in this difficult period. The third was the integrated media strategy (IMS), which expanded us from books to conferences and then to all kinds of media, such as video and interactive online learning. As mentioned earlier, this really came to fruition with our repurchase of the Safari company and integration of content production with delivery.

Electronic publishing and trailblazing with integrated media (2000s)

A large geographic split between offices was pretty rare in small companies when Tim O’Reilly moved from the Boston area to Northern California around 1992 to break ground on his Sebastopol office. Before his move, staff worked in an office building in Newton, Massachusetts, and, at the very start, a barn on Tim O’Reilly’s Newton property. The barn remained a legendary origin story that I missed out on.

When Tim relocated his family to California, he took most of his staff with him, but opened an office in Cambridge for those who wanted to stay in Massachusetts. Those who sought their fortune in the wild West (Sebastopol was still known for apple-picking more than high-tech, but now everybody supposedly works in computers or renewable energy) and those who chose to remain cleaved into rather natural divisions of labor. The productive forge of the O’Reilly & Associates company—editors, production team, the artist, the print coordinator—occupied the Cambridge office. Marketing, customer support, and other necessary but secondary functions relocated to California. Occasionally staff would grumble about a clash of cultures, but I felt untouched by that, perhaps because I found ways to spend a lot of time in the Sebastopol office. By visiting regularly I could renew my friendships and feel that I was participating in O’Reilly as a whole, while other staff were stuck in one office or the other.

Because the split took place in such a young, burgeoning organization, entire departments would grow up around the individual staff people who chose to stay or go, and I think the seemingly arbitrary division based on personal geographical preferences ended up benefitting the company. The West Coast office recruited people who were attracted to a life out among the trees and the neighborly California counterculture. (I imagine a lot of this lifestyle changed as population, traffic, and housing prices increased.) The East Coast office appealed to creative types who craved the bustle and heavyweight intellectual arenas of Boston and Cambridge.

Among the benefits of our bifurcated location were the long-distance working relationships we learned to cultivate, which prepared us for a new world of globalized creativity. And our books contributed to building that world. Being forced to use the primitive collaboration tools of the 1990s, such as the FTP file transfer protocol and email with minimal graphics, along with the need to adapt to different time zones, gave us the expertise to deal with authors, reviewers, and technical experts anywhere they lived. We were very alert to the power of the Internet, an awareness that paid off in our first real blockbuster, The Whole Internet, which Mike Loukides edited and drove to fame in 1992.

The critical role of the Cambridge office in writing (when done in-house), editing, artwork, production, and all essential aspects of getting our product out did not immunize us against decisions made by individual managers with personal agendas in Sebastopol. At least twice, in the design group and the tools group, a West Coast manager felt incapable of dealing with staff outside their office and insisted on consolidation. Some of the Cambridge office staff were summarily laid off, while others were given the option to move West but—as they told me—without reimbursement for their relocation expenses.

Not a single staff person in Cambridge who was placed under the tool manager or design manager in Sebastopol lasted through the consolidation. I don’t believe these managers took the pulse of the company or explored the ramifications of flexing their managerial biceps. They probably had no inkling that they were discharging some of our most creative and accomplished professionals. Geographically distributed departments were unusual at that time, and organizations lacked the understanding as well as the technical tools we have today to manage remote work. But I’m still puzzled that upper management failed to rein in the managers and veto their activities.

Both functions under attack, design and tools, were intimately entwined with the other creative activities for which the Cambridge office was responsible. Naturally, both functions crept back to that office after the original staff were deposed.

Tools provide a particularly educational example of the ways creative teams fill in skills gaps. This story provides an even stronger lesson about the opportunities that a fluid company like O’Reilly can provide to someone with vision.

The tools group reconstituted in Sebastopol proved incapable of solving everyday problems that turned up routinely in Cambridge. I don’t ascribe this failure to geographic distance so much as a lack of listening—a lack of regard for the concerns being expressed in the Cambridge office. After a while, back in Cambridge, a junior production person named Andrew Savikas—watching requests go out toward the West Coast team and perish somewhere in between the prairies and the Rocky Mountains—started to collect tasks from his fellow production staff and write small scripts that solved their problems. Without recognition at first, Savikas single-handedly recreated a tools capability in Cambridge.

As his purview grew and upper management started to give him responsibility and resources, Savikas looked at trends in the publishing industry and realized that we needed a whole new writing environment to meet our business goals. Essentially, he gave us the tools to write books in a standard format called DocBook XML. And here a historical digression is in order.

O’Reilly was a hardy native plant in the stony but nurturing soil of Unix, an operating system that represented the boundaries of the entire known universe to most hackers and researchers. (Mention Microsoft Windows to these people and they’ll say—well, just take my advice and don’t mention Microsoft Windows to them.) Documentation in that world used a format called troff, pronounced Tee-Roff, that has probably never been surpassed for rigidity and inscrutability. Because documentation for the X Window System in the 1980s used one of the Unix troff formats, we could quickly adapt their manuals for our own publication. So there were modest benefits to sticking with it.

The basic troff syntax was a period at the start of a line, followed by one or two characters. The two-character restriction saved precious memory and disk space at an early stage of computing, as did the terse syntax of the fearsome Sendmail configuration files with which every Unix system administrator wrestled. Although one had to limit any new troff commands (known as macros) to two characters, these characters did not have to be letters or even digits. Thus, inventive troff designers used all manner of weird punctuation in addition to inscrutable sets of letters and numbers. In general, troff code was as readable as a Sendmail configuration file, a notorious jumble that few humans could even start to parse.

O’Reilly clearly couldn’t impose troff on all our authors, so we also produced books in FrameMaker, a doggedly proprietary product of 1980s vintage. We could accommodate nearly all authors by giving them either a license to FrameMaker or a Microsoft Word template. Microsoft Word was so prevalent that one could count on everyone having the software, or some other software that could produce the same format. One of our tools staff with a fondness for TeX (software created by programming legend Donald Knuth and widely used in academia) created a specific O’Reilly template for it.

But during the 1990s, managers at O’Reilly realized that online books would soon become a central part of our offerings. We even prepared for the possibility that all content would go online. The popular Kindle and ePub book formats were yet to be invented, but we were girding our loins for an online disruption of publishing. There were two reasons for our huge investment in a transition. First, the availability of free online content—and most importantly, the Java documentation from Sun Microsystems, given the dominance of the Java programming language at that time—raised formidable competition. Second, we expected that the public would get accustomed to reading online and would want books delivered that way. The advantages of the online medium for reaching international markets cheaply and instantly were also enticing.

Dale Dougherty, whose prophetic insight in regard to Make Magazine was described earlier, led an initiative in the 1990s with twin technical and corporate goals: firstly to develop a standard that was equally good for print and online publication, and secondly to sign up other far-thinking people in the publishing industry to adopt it. We had to keep changing this standard, though, to keep up with the best of publishing technologies.

The handiest format used by large companies in the late twentieth century for documentation was Standard Generalized Markup Language (SGML). Even Tim Berners-Lee turned to SGML for guidance when he designed his HTML format for web pages. Basic SGML conventions survive in HTML, such as angle-brackets around tags and ampersands to start special characters.

After a decade or two of use, the burdens of SGML were apparent. It was overly complex and hard to write programming tools for. As one example, SGML boasted of being highly structured, nesting elements (such as paragraphs within lists) in a very strict way, but then went ahead and undermined this discipline by allowing the end tags to be omitted under some circumstances. HTML carries over the less disciplined aspects of SGML and violates the conventional rules in many other ways.

Eventually, web developers created a format called XML that was easier to support than SGML. They encouraged web designers to use a structured form of HTML called XHTML. But most web developers by that time were using WYSIWYG tools to create their web pages. The tools had been designed with no appreciation for structure or discipline, and had wandered too far from robust coding practice to be upgraded so they could take advantage of HTML’s or XHTML’s structure. A look at how web page designs are rendered into HTML by popular tools is like watching an eighteenth-century typesetter count out spaces and dashes. Dougherty’s group designed a template they called DocBook around SGML originally, switching to XML by the time they were ready to release it.
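
A minimal sketch of that difference in discipline, using Python’s standard-library parsers on an invented snippet: the XML parser rejects omitted end tags outright, while the HTML parser accepts the same text without complaint.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

snippet = "<ul><li>first item<li>second item</ul>"   # the </li> end tags are omitted

try:
    ET.fromstring(snippet)                 # XML insists that every element be closed
except ET.ParseError as err:
    print("XML parser refuses it:", err)

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("HTML parser accepts start tag:", tag)

TagLogger().feed(snippet)                  # HTML shrugs and carries on
```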

XML itself proved to be more heavyweight than most users needed, while version 5 of HTML provided stunningly powerful web features such as canvases for drawing. So a number of years after investing so much in the creation and promotion of DocBook, we relegated it to legacy books and started using a lightweight format that made writing easy in a plain text editor, along with a simple Web interface like that offered by blogging sites.

We had audacious goals, such as distinguishing different types of words so that we could refine searches. DocBook had not only a tag for emphasis, but one for file names, another for commands, another for URLs, and so forth. These were all rendered in italic font, but we initially tried to mark every word with the precise tag so that we could exploit the distinctions in sophisticated tools that we meant to develop later. Our tools team even learned how to process italic from Word files using tools that could guess quite well when an italicized word was a file name, a command, or a URL. But nothing ever came of all this arcane differentiation.
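
For readers who never saw DocBook source, here is a hedged sketch of what that differentiation amounted to; the element names are real DocBook tags, but the rendering function is purely illustrative, not O’Reilly’s actual toolchain.

```python
ITALIC_TAGS = {"emphasis", "filename", "command", "ulink"}   # semantically distinct DocBook tags

def render_inline(tag, text):
    """Render several distinct tags identically, as italic, for the printed page."""
    return f"<i>{text}</i>" if tag in ITALIC_TAGS else text

# All of these look the same in print, but the source still records which is which,
# so a later tool could, in principle, search for file names or commands alone.
print(render_inline("filename", "httpd.conf"))
print(render_inline("command", "grep"))
print(render_inline("ulink", "https://www.oreilly.com/"))
```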

Format changes aside, our platform was robust and powerful, able to spit out high-quality content in all the popular electronic book formats, including PDFs for printing. When Amazon.com’s popular Kindle reader came out in 2007, we quickly accommodated its format. This was crucial, because Amazon.com was the first major company to enter the digital book market, and it had the overwhelming market clout to make its format a must-have. Other publishers were trying to provide books for e-readers through miserable accommodations such as scanning pages and using optical character recognition (OCR), which could easily be detected by its ugly formatting and the computerized errors it left behind.

Savikas, the young production visionary who revived our tools group, was recognized by upper management and put in charge of a team to write new tools for manipulating our DocBook standard and rendering it into the formats we wanted. In addition to creating a smooth path from the tools used by authors to the deliverable sent to the reader, his team grounded our workflow on the reliable processes that programmers used to ensure accurate code.

Savikas went on to many more achievements. In order to cull ideas from around the publishing industry and pursue new opportunities given by the Internet, he launched a conference called Tools of Change, integrated into O’Reilly’s conference offerings. As the acronym TOC suggests, the conference was meant as a “table of contents” for those interested in transforming the media industries.

Our company was so proud of what we were doing with TOC that the entire Cambridge office was invited to one of the conferences in New York City, all expenses paid. The experiments on display there were quite impressive. Most involved multimedia. But of course, it takes a lot of money and specialized expertise to create large amounts of graphics, video, or interactive content, even leaving aside the challenge of integrating them in ways never done before and creating an experience that delights or educates the viewer. The fine works shown at TOC remained mostly one-off demonstration projects. The publishing industry has not made a wholesale turn toward integrated multimedia—although many video games show impressive sophistication, and some online interactive teaching systems have used new media well.

Savikas ultimately joined the Safari board and then became head of that company. His career is a kind of Horatio Alger story suited to Silicon Valley-style innovation, and in my opinion it showed O’Reilly’s staffing policies at their best.

The results of our corporate transformation were pretty spectacular. We managed finally to create a “learning platform” that guided the perplexed through the steps and tools they needed to pick up the confusing new technologies required for success in our software-dominated world. I believe the term “learning” could apply not only to the customers using our platform but also to the platform itself: Data collected on use could improve its recommendations. Later, we integrated other media into our offerings, but I rarely participated in that. Sponsored content became part of our integrated media strategy, and that will be the topic of the next chapter.

Even after the merger between O’Reilly and Safari was completed, it took a long time to carry out our integrated media strategy. And along the way, the concept took a leap in sophistication. A year or two into the merger, Laura Baldwin started talking about how it wasn’t enough to offer great content, or even great content in a variety of media. She called the current Safari a “library” and said we had to be more forceful about helping our clients take advantage of our learning opportunities.

Gradually, during all-hands meetings that took place every month or two, a complex strategy was unveiled. We would prompt people on “learning paths” to proceed from one book or video to another. The learning paths could be as simple as a list made up by a reader who felt that their sequence of learning could help others. Over time, learning paths were supposed to be tailored to individuals by tracking and analyzing their behavior.

I don’t know whether this AI-rich strategy took off. Clearly, it was inspired by the recommendation systems used by online media giants of the day—Amazon.com, Netflix, YouTube, and so on—although I don’t believe O’Reilly management mentioned that. Recommendation systems, obscure and highly prized by their designers, lurk inside everything Internet-related as I write this: retail, social media, search. The recommendation systems make all those things work, while threatening to wipe out independent culture and productive discussion by pushing all consumption toward a few fortunate choices that quickly grow to outsized popularity. Recommendation systems enforce the evangelist Matthew’s words, as expressed by Billie Holiday, “Them that’s got shall get, them that’s not shall lose.”

The O’Reilly vision was a hundred times cooler and more brilliant than the strategies of the retail and streaming media companies. What the others promoted was a set of lateral moves: You watched one thriller, so they showed you another, and another. You weren’t improving, you were just passing the time in hopefully enjoyable diversions. But O’Reilly’s strategy represented actual progress: As you moved from one tutorial to another you were approaching a goal.

Although I was insulated from the planning around our IMS, I think it helped me a great deal personally. The IMS encouraged sponsors to think about a hierarchy of materials ranging from blog postings to conference sponsorships. When they sponsored written content, they created new work opportunities for me as either writer or editor.

The new strategy committed us to focusing on business-to-business (B2B) instead of the old business-to-consumer (B2C) model of selling individual books. It unveiled an awe-inspiring concept: helping a business decide what its technical staff needed to learn, and guiding entire departments through their education. Dashboards and department-level tracking were to be involved.

Scarcely a library.

Experimentation and expansions of series (mid 1990s)

The 1990s saw their own set of threats to publishing—as well as journalism and other existing media—that demanded quick and bold responses from practitioners who wished to survive. And survive we did, thanks to a mature culture of experimentation. Let me list here some of our forays into new topics, formats, and approaches to human psychology.

My first example is Jeffrey Friedl, who briefly became a superstar after writing a unique book about regular expressions. Originally a small corner of the tools for text processing, this method of generalizing searches in text became a part of the average programmer’s skill set after being built into the Perl language, and then copied by other scripting languages. You don’t have to understand regular expressions to appreciate the story of the book, but I’ll explain why there was so much buzz about them.

Any text editing tool lets you search for a particular word or phrase. Regular expressions are a mini-language that makes this kind of search almost infinitely open to generalization. You can describe the form of the text you are looking for, such as “three digits followed by a hyphen and four more digits” (a common way to represent telephone numbers in the U.S.).

Think a minute about the millions of bytes flowing over the Internet second by second, and how developers want to find content of interest in these rivers of data. Regular expressions become a necessity.

The term “regular expressions” itself has an arcane mathematical origin that was irrelevant even to the original Unix implementation, and many prefer the term “patterns”. In their various forms, regular expressions came to be indispensable as the Web cultivated increasing amounts of digital text.
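
A minimal sketch of the telephone-number pattern described above, using Python’s re module (the text being searched is invented for the example):

```python
import re

phone = re.compile(r"\b\d{3}-\d{4}\b")    # three digits, a hyphen, four more digits

log = "Call 555-0142 before noon, or 555-0199 after."
print(phone.findall(log))                 # ['555-0142', '555-0199']
```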

Friedl had the most creative approach to layout of any author I ever worked with, perhaps stretching the tech book medium even more than the graphics-heavy Head First series that O’Reilly later created. His proposals for quizzes, cross-references, and special characters to mark off parts of the regular expressions went far beyond anything our XML tools could handle and represented a puzzle too difficult for our tools team, so Friedl made his own copy of the tools and implemented his layout strategy himself. For instance, he managed to contort our tools to display a quiz on the top right page as one opened the book, and the answers on the next top right page (so they would be hidden while you read the quiz). He invented special characters and markings to distinguish the parts of the text he used for examples. The book became something of a cult classic and went into future editions, each of which Friedl did himself because no one else had his tools. Friedl stayed at my house for a while during the writing of his book, and got friendly with my kids long before he married and had a child of his own in Japan.

He offered three-hour tutorials on regular expressions at our Open Source Convention, which was still displaying the influence of Perl—the impetus for launching O’Reilly’s conventions, as I’ll explain elsewhere. With programmers hanging from the rafters in his presentation hall, Friedl wowed the acolytes with both deep insight and a well-cured instinct for leading an audience. He was very tall, and could have been imposing were he not always so gracious. When the cell phone of a front-row audience member went off, Friedl didn’t ignore the noise or wince, but calmly announced, “You have a call.” He then leaned back, folded his arms, and said in the same gentle manner, “Take your time.” The audience lapped it up. Friedl was also kind to the staff at our booth, buying them a pie to thank them for all they were doing at the conference.

Every tech book has a life cycle. None is meant to last forever. So eventually, the field of regular expressions ran ahead of Friedl’s marvelous book. He had written it when Perl was king of both scripting languages and regular expressions. Other languages caught up, Perl’s own implementation evolved, and the field became more and more jumbled with different implementations. Friedl himself lost interest in the computer field to which he had made so many contributions, and left the field several years after moving to Japan.

But Friedl conveyed deep learning in a way few authors have ever done in computing. Many have written good textbooks about languages and protocols, while others convey enough basic skills to put users on a path where they can experiment and find their own way to mastery. But technical books don’t turn readers into experts in the classic understanding of mastery that evolves from novice to expert. Here I say “expert” to mean people who intuitively know a field. Think of the term “grok” that Robert Heinlein introduced in his science fiction novel Stranger in a Strange Land and that became a popular colloquialism in the 1960s. Reading Friedl let programmers grok regular expressions.

Another author I worked with, Diego Zamboni, applied his virtuosity with tools to O’Reilly’s. Here’s the problem he wanted to solve: Many technical books contain scattered snippets of programming code. Authors usually maintain test environments to ensure that the code is correct, then copy and paste the lines of code they want into their book. Whenever you record the same thing in two places, you’re asking for problems. For instance, you may fix a bug in your test environment and forget to fix the book. Zamboni solved this problem by enhancing the tools O’Reilly gave us so that, instead of pasting in code, he could paste in a pointer that pulled the precise lines he wanted from his test environment.
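
Here is a hedged sketch of the general idea, not Zamboni’s actual tool: the manuscript holds a pointer to a marked region of a tested source file, and a build step pulls in exactly those lines, so the code lives in one place only. The marker comments and file names are hypothetical.

```python
from pathlib import Path

def include_region(source_file, start_marker, end_marker):
    """Return the lines between two marker comments in a tested source file."""
    lines = Path(source_file).read_text().splitlines()
    start = next(i for i, line in enumerate(lines) if line.strip() == start_marker) + 1
    end = next(i for i, line in enumerate(lines) if line.strip() == end_marker)
    return "\n".join(lines[start:end])

# In the manuscript, the author writes a pointer such as:
#   INCLUDE examples/parser.py  "# begin: tokenize"  "# end: tokenize"
# and the build replaces that line with the current, tested code.
```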

One long-forgotten experiment by O’Reilly, in my opinion, held the promise to bring users to the threshold of expertise. We released some very short printed books called “notebooks” that started from the common observation that learners would rather follow a concrete example than read how to assemble their own programs or configurations out of abstract concepts.

Authors write most traditional guides with the assumption that readers will go through them sequentially. Important abstractions are introduced in the first few chapters, supposedly preparing readers for the practical guidelines that follow.

But give the average reader a programming manual, and they will turn immediately to an example showing the task they want and copy it. Only when their version fails do they turn back to the text to figure out what they need to do to adapt the example. I have witnessed this sequence in my own hasty interactions with manuals, as I work backwards to higher and higher levels of abstraction to find a key that unlocks the meaning of the code. Although the author has diligently organized the material to proceed from high levels of abstraction to lower ones, I am too busy to care about any level higher than I need at the moment. John Dewey’s concept of learning by doing describes how most programmers study.

So what did the O’Reilly notebooks do? They started with an example. Each example was then followed by a section titled “What just happened?” There, readers had the chance to match their understanding against the subtler significance explained in the text.

The format of our notebooks reveled in its unconventionality. They were short (each under a hundred pages), with light blue covers and crude fonts reminiscent of the actual cheap notebooks that elementary school children used to carry around and fill with scribbles. This atavistic metaphor might have contributed to the indifference with which the public greeted the books. The series was abandoned after a few tries.

If no one could appreciate the sophisticated concept driving these notebooks, many readers could enjoy an earlier series along similar lines: our cookbooks. The first cookbook was written by Tom Christiansen, one of the brilliant thinkers at the head of the Perl community. He saw that many common tasks required a combination of skills and syntax elements, and created a Perl Cookbook loaded with gems on various topics. Other cookbooks followed, some edited by me. O’Reilly did not trademark the term Cookbook as it did “in a Nutshell”, a phrase we used in the titles of our reference books, so cookbooks on all kinds of programming topics quickly emerged from many publishers.

Good topics for cookbook treatment are actually tough to uncover. Increasingly, the truly boilerplate software activities are embodied in libraries by leading programmers, so solving the task is simply a matter of finding the right library and the right function. A cookbook recipe that consists of just a few lines and invokes a library function does not contribute anything novel. It may have provided a convenience in the days before powerful search engines could turn up the desired library for a task, but is redundant today. Christiansen’s recipes, by contrast, offered elegant models for creating novel solutions, and that remains the mark of a good programming cookbook.

Later the company developed a very popular series called Head First. It was proposed by two authors new to us, Kathy Sierra and Bert Bates. In their first book, they spun out a unique implementation of insights from educational research.

It is well known among educators that people have trouble retaining information from plain text, the medium so beloved among aficionados of language, who gravitate toward professions such as editing. Learning springs from action, as well as from eye-catching and memorable images. Sierra and Bates evangelized a doctrine, backed by editor Mike Loukides, of engaging the responses of the human brain through low-level channels, employing a roster of puzzles, games, quizzes, and images. Their Head First series kept the ratio of text to page small. From the wide-angle lens photo emphasizing a human head on the book cover to the fonts and stock retro mid-century photos, the Head First series redefined everything computer publishers had done with a book. Each concept was conveyed through many channels in order to find a chink in each reader’s brain and enter there.

Expanding Sierra’s and Bates’s concept from their opening Head First Java book into a regular series placed a strain on O’Reilly’s publishing process. Authors had to go through a grueling boot camp to learn Sierra’s philosophy and her unique application of it to technical content. Editors, artists, and production staff needed to provide intensive support and use a different process than the rest of our books for online publication, which became more and more central to our business model. Head First books used up a lot of space to offer the same basic information as more conventional books. However, they sold well enough to justify the investment.

In this section, I have been trying to show how O’Reilly threw out old publishing conventions and responded to what the editors thought our readers needed at that moment. This was rare in computer publishing. Throughout the 1980s, all publishers of computer books followed a simple three-part task division inherited from the age of mainframe computer manuals: user tutorials, programming guides, and system administration guides. The divisions between these three audiences were starting to dissolve, however.

One project in the early 1990s might be called a user manual, but one unlike any computer book that we or others had ever done: a guide to using the Internet. At O’Reilly, we were all busy using email, FTP (file transfer), online message boards, and some sophisticated experimental systems. The latter included Gopher, which presented text through a hierarchical set of menus in a crude implementation of hypertext, and Archie, a tool that crawled the Internet to find content matching search terms. We knew that researchers and techies around the world were doing what we were doing, and that more and more people were buying modems to enjoy these wonderful services at home or at work. (Modems turned digital computer output into signals that could cross telephone wires designed for human voices. More on the digitization of the telephone network in a later chapter.) Information was becoming more democratic—or so we thought at the time—largely through messages that individuals could share widely with sympathetic listeners around the world.

Message boards, a grassroots way to form community, existed before most people could get on the Internet. These message boards were often called news groups, but there was no actual news online in those days. You couldn’t read the New York Times on your computer, unless someone typed in an article they saw in the paper and sent it around. So the word “news” came to be applied to the personal confessions—and not a few rants—by thousands of individuals seeking access to their online peers. Accuracy was no more guaranteed than on the social media that came much later. But the system was a crucial support mechanism for people lacking a voice, whether closeted gays in the Bible Belt or privacy advocates fighting corporate surveillance. The ideal of online communities as liberating environments had grown up in Berkeley and the San Francisco Bay Area during the late Vietnam War era, as documented in Steven Levy’s book Hackers and more recently in Claire L. Evans’s Broad Band: The Untold Story of the Women Who Made the Internet.

“News” was exchanged through an ad hoc connection system called Usenet, before nearly anybody was on the Internet. People connected informally to each other using a protocol called Unix-to-Unix Copy (UUCP), where each individual’s computer passed on the messages sent by others as well as their own messages in daisy-chain fashion from one modem to another.

Along with sober and professional news groups, under categories such as “biz” for business and “comp” for computing, an oddball collection of informal groups proliferated—porn galore, of course, but also serious discussions of socially delegitimized topics and marginalized populations such as recovering substance abusers, sufferers of child abuse, and gay and lesbian people. In order to make it easy for mainstream institutions to screen out topics they might find uncomfortable, all these unauthorized groups were segregated under the “alt” (alternate) hierarchy of news groups.

Although “alt.sex” was probably the most highly trafficked subgroup, many political tendencies with sometimes unbridled and irresponsible postings were put under “alt.left” and “alt.right”, the latter being the origin of the name for the alt-right propaganda movement that received so much publicity during the 2016 presidential election campaign. I myself persuaded my system administrator, at one of my employers before O’Reilly, to open up access to the alt.sex group for me. This was for research on a project about censorship, not for my personal indulgence. The postings on some alt.sex groups helped me show that attempts to “clean up” the Internet were suppressing important voices.

News and multiple other services formed the subject of a book edited by Mike Loukides. Someone decided toward the end of the project to include a chapter on an obscure but fast-growing service called the World Wide Web. As a down-to-earth description of how to use the Internet, the book was simple in concept but explosive in its implications.

The success of The Whole Internet in 1992 rocked the publishing industry. Two other publishers had noticed the same trends and came out with books about how to use the Internet around the same time, but they failed to hit the sweet spot among the public. Torrents of sales and publicity poured over The Whole Internet. Nor did it quickly retreat into the archives of publishing history. The New York Public Library selected it as one of the most significant books of the twentieth century.

At an editors meeting called shortly after the publication of The Whole Internet, we evaluated the elements of its success and made a major strategic decision…to publish more books about the Internet. This was not an obvious route to take in the early 1990s. People whose careers revolved around computers needed a strong signal to recognize the growing importance of networking. Two more signals of that nature soon came along.

Noting that the Web was rapidly increasing in popularity and deserved more than its modest chapter in The Whole Internet, O’Reilly was wrapping up a book on the Mosaic web browser when our system administrator beckoned me conspiratorially into his office one day. “I have something to show you,” was his come-on. And on his screen was a new browser with a much spiffier layout than Mosaic. It featured the integration of graphics and text in an attractive manner that could make you think you were looking at a magazine instead of a computer screen. Netscape, Marc Andreessen’s successor to Mosaic, quickly became the dominant mode of interacting with the Internet. Our Mosaic book crumpled into irrelevance upon release. The incident provided a lesson in the speed at which technology could advance, particularly given the collaborative possibilities and mass audiences created by the Internet, which frustrated efforts to cover technologies in books.

Before Netscape, the Web was text-heavy and really not much different in its experience from Gopher. Both were forms of hypertext, a concept introduced by Ted Nelson in the 1960s and already implemented clumsily by Apple Computer in the 1980s as HyperCard. (I had played with HyperCard when it first came out, and felt it limited by the tiny amount of content permitted on each screen.) Whereas Gopher offered access to distinct text sites, the Web’s HTML allowed you to write an engaging, readable document and attach links to other documents to chosen phrases of your text.

It took a while for people to translate the print concepts of footnotes and references into the power of linking. The new Web style focused authors on just what they wanted to say, without wasting time and distracting the reader with summaries of what other people had to say on some subtopic; instead, you would just link to their documents for further illumination or for validation of your claim. (And as Ted Nelson would pedantically point out, you’d suffer the consequences when the site you were pointing to plunged into eternal darkness.) I was soon to try out the new writing style permitted by the Web.

Netscape’s support for graphics added a new dimension, opening up a Web that integrated multimedia. Yet soon my system administrator, and thousands of others, were excoriating the developers of Netscape. Its sin? Deliberately opening up to four TCP connections simultaneously in order to download graphics faster. (TCP is essential Internet software that, among other tasks, manages how quickly data flows over the network.) The early 1990s dial-up connections were straining under the load of grabbing a picture of a few thousand bytes. The Netscape developers gamed the system by opening multiple TCP connections so as to provide as pleasant a surfing experience as they could. Users rejoiced over the speed-up in delivering web pages back then, when people joked that WWW stood for “World Wide Wait.” But old-style system administrators pontificated that a well-behaved Internet application would open a single TCP connection and politely wait its turn. It would be many years before Steve Souders would work with me on a book about web performance that codified, among other tricks, the most efficient number of TCP connections to open under different circumstances.
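
To show the trick in modern dress, here is a sketch of my own (the URLs are made up): it fetches several images concurrently, each request over its own connection, so slow downloads overlap instead of queueing behind one another. That is essentially what Netscape did, limited to four connections at a time.

    import concurrent.futures
    import urllib.request

    IMAGE_URLS = [                        # hypothetical page assets
        "http://example.com/logo.gif",
        "http://example.com/photo1.gif",
        "http://example.com/photo2.gif",
        "http://example.com/photo3.gif",
    ]

    def fetch(url):
        with urllib.request.urlopen(url) as response:   # one connection per request
            return len(response.read())

    # At most four requests in flight at once, echoing Netscape's limit.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        for url, size in zip(IMAGE_URLS, pool.map(fetch, IMAGE_URLS)):
            print(url, size, "bytes")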

Another indication of the Web’s trajectory came in a book proposed by a talented young college student, Shishir Gundavaram, on an addition to the early Web called the Common Gateway Interface, or CGI. By letting sites act on the forms that visitors filled in, CGI expanded the Web yet again in unprecedented ways, because now a user could interact with the site. (Tim Berners-Lee expected users to upload as well as download content when he designed the Web, but that capability took a long time to be exploited.) Thanks to CGI, instead of just plunking down long, boring menus of options in front of the reader, a site could ask you to enter a phrase such as “sleeveless vest” and reward you with a list of relevant products.
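
A CGI program of that era was disarmingly small. Here is a sketch in Python rather than the Perl Gundavaram used, with an invented form field and product list: the web server runs the script for each request, and whatever the script prints goes back to the visitor’s browser.

    #!/usr/bin/env python3
    import os
    from urllib.parse import parse_qs

    PRODUCTS = ["sleeveless vest", "wool vest", "rain jacket"]   # stand-in catalog

    # The web server hands the submitted form data to the script in QUERY_STRING.
    form = parse_qs(os.environ.get("QUERY_STRING", ""))
    query = form.get("query", [""])[0].lower()                   # e.g. "sleeveless vest"
    hits = [p for p in PRODUCTS if query in p]

    print("Content-Type: text/html")   # header, then a blank line, then the page
    print()
    print("<ul>" + "".join(f"<li>{hit}</li>" for hit in hits) + "</ul>")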

It was a short path from CGI to hybrid systems linking up forms to databases, a trick that ultimately led to a completely innovative use of the Internet: e-commerce. CGI also made search engines possible—a feature of the Web few people could imagine living without now.

I became editor of Gundavaram’s book, which quickly became one of our biggest hits. He used the supple and popular Perl language for coding. Gundavaram and I became friends and I would visit him occasionally in his Silicon Valley home as he bounced from one start-up to another. I was there to congratulate him shortly after the birth of his first child, and again when his family played out the American dream and bought a house. His brother and I visited an art exhibit in the Silicon Valley featuring tech-based installations. This was one of the first friendships that I was to develop with an author.

We already thought the Web was a pretty big deal. But it was still barely being used for commerce. In fact, through the first half of the 1990s, commercial use was forbidden. (So much for myths about the primeval Internet as an unrestricted medium.) Retail brought with it a whole new set of tools and languages.

It’s worth looking at a few of the most popular tools of the 1990s (many of which persisted up to the time of this writing). To understand them, you need to see how they worked together, which is where the concept of a “stack” comes in: it organizes the tools into layers. The same word “stack” is also used by programmers for something deep in programming internals, but we won’t consider that here.

Right above the fundamental hardware and built-in software (firmware) provided by the computer’s manufacturer, the basic software that runs a computer system is the operating system. For a long time, the increasingly popular operating system on serious computer systems running big services was Unix. It wasn’t officially free software, but many people had access to the source code of popular Unix implementations, and Unix was treated as a standard. Linux reproduced all the behavior of Unix as free and open source software, and came to dominate these services in ways I’ll describe elsewhere. Linux is also called a “kernel”, to set it apart from all the higher-level software that runs on it.

To kernel developers, everything running outside and above the kernel is an application. The applications are granted their own computer memory in what they call “user space”, and are treated more or less as equal in the kernel’s eyes. But programmers working on the application level see subtle layers of their own, with some applications supporting others. That’s the concept behind the word “stack” as used here. We’ll look at a couple examples of how one application supports another.

Because two types of applications are crucial to web programming, we’ll focus on them here. The first is the web server, which holds the content that the web site wants to display. The web server accepts requests from clients across the Internet and sends results that browsers display. Apache became the dominant web server of the 1990s. It was developed by a loose collection of free software programmers. The organization they formed to handle logistics and legal issues, the Apache Software Foundation, later entered terrain far from the Web and became host to many important projects in artificial intelligence and big data. Apache is a bit heavyweight and has a huge configuration file to handle all kinds of web-related options, so some simpler web servers have intruded on the near-monopoly it had in the 1990s.

The second support application is the database, which is responsible for storing and serving up the huge amounts of data needed by major sites. Let’s say that a visitor to a web site asks for all the articles on that site that talk about free software. The web server receives the request and queries the database. For instance, the web server might look at the article database and look for the fields that hold titles and keywords. If one of those fields contains the text “free software”, the article is retrieved from the database and offered to the user. MySQL became the database of choice for web users. Like Linux and Apache, it was free software (although unlike those, it was distributed by a single company).
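
The query behind such a lookup might look roughly like the following sketch, which uses SQLite purely to keep the example self-contained (the sites of that era used MySQL) and assumes an articles table with title, keywords, and body columns:

    import sqlite3

    conn = sqlite3.connect(":memory:")   # stand-in for the site's MySQL server
    conn.execute("CREATE TABLE articles (title TEXT, keywords TEXT, body TEXT)")
    conn.execute("INSERT INTO articles VALUES (?, ?, ?)",
                 ("Why we chose free software", "free software, licensing", "..."))

    rows = conn.execute(
        "SELECT title, body FROM articles"
        " WHERE title LIKE ? OR keywords LIKE ?",
        ("%free software%", "%free software%"),
    ).fetchall()

    for title, body in rows:
        print(title)   # the articles to offer back to the visitor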

The SQL part of the name refers to Structured Query Language, developed by IBM in the 1970s specifically to interact with databases. It’s idiosyncratic and inconsistent, and has split into so many versions that it consummately illustrates the old joke: “The nice thing about standards is that there are so many to choose from.” Yet SQL is so firmly established that nearly every new database has to support some version of it. MySQL’s version is more limited than many other database engines, but it provides everything web developers and small businesses want, and it’s a cinch to install and start using. I edited a large MySQL series that sold spectacularly well for years.

The web server doesn’t know exactly what to ask the database, because that depends on the particular business running the web site. So the business’s programmers have to write small programs that run inside the web server. (You’re seeing how hard it is to define layers and applications in the “stack”. The business application is now running inside the web application. It gets even more complicated when we get to JavaScript.) When a request comes in from a visitor to the web server, the web server can either return a static page of information or invoke the business application. The business application in turn interacts with the database if necessary, and creates content on the fly for the web server to send out. We call this the back end, whereas the visitor’s browser is the front end.
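
A toy version of that flow, using Python’s bundled WSGI server in place of Apache and a dictionary in place of the database (both substitutions are mine, made only to keep the sketch self-contained), looks like this:

    from wsgiref.simple_server import make_server
    from urllib.parse import parse_qs

    ARTICLES = {"free software": ["Why we chose free software", "Licensing basics"]}

    def application(environ, start_response):
        # The "business application": inspect the request, look up data,
        # and build a page on the fly for the server to send out.
        topic = parse_qs(environ.get("QUERY_STRING", "")).get("topic", [""])[0]
        titles = ARTICLES.get(topic, [])             # stand-in for a database query
        page = "<ul>" + "".join(f"<li>{t}</li>" for t in titles) + "</ul>"
        start_response("200 OK", [("Content-Type", "text/html")])
        return [page.encode("utf-8")]

    if __name__ == "__main__":
        make_server("localhost", 8000, application).serve_forever()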

Although early web sites had back-ends programmed in Perl, a new language named PHP soon introduced itself with a bow. Dispensing with some of Perl’s syntax oddities and promising a more consistent programming experience, PHP quickly took over from Perl on the Web and maintained its dominance through challenges from Ruby on Rails, Node.js, and other tailored back-end frameworks.

Now you understand the elements of web development, and the most popular examples of those elements: Linux as the operating system, Apache as the web server, MySQL as the database, and PHP as the business application language. The LAMP stack, in short. With LAMP, average web site builders could deliver on the promise of interactive sites, using CGI to return content of interest to users within seconds.

Because web developers could choose easy defaults for Linux and Apache, they focused their attention on MySQL and PHP programming. Books combining those two technologies shot to the top of programming lists. Readers preferred to start with a book tightly coupling MySQL and PHP rather than with separate books devoted to a single tool.

I’ll throw one more element of web programming at you, because of the role it played in our publishing strategy. A developer named Brendan Eich, working at the company that made the Netscape browser, made an observation that would end up changing the way we use the Web. In addition to running programs inside the web server, he realized, it would also be valuable to run programs inside the visitor’s browser. For instance, suppose you have to fill out a form to order a product, and you forget to put in your address, which is obviously needed to ship the order. It would be nice to inform you about the missing information right away, instead of waiting for the browser to send the form information to the server, have the server’s PHP program check the information, get the bad news back from the server, and redisplay the form.

JavaScript was the result. Eich made browser-side or front-end processing possible by providing a new language and asking browsers to support it. JavaScript had a superficially Java-like syntax, but despite the name was a completely different language. Programmers used JavaScript first for trivial tasks such as checking that fields in a form were filled in correctly. Once they discovered that the language gave them omnipotent access to everything about a web page, they put more and more of the page into JavaScript so that they could respond in real time to visitors’ actions. The language soon became a necessity for nearly every web page.

My CGI book had been the first out on the subject, so it sold very well. Once every publisher comfortable with strings of capital letters jumped into the MySQL and PHP space, we found it much harder to draw attention to our books. I worked hard with a number of authors, but some other publishers would always outsell our books, the favored publisher and author rotating year by year. Online retail shoves marketing toward a winner-takes-all model, because many people simply choose the first option that pops up during a search. In the critical MySQL/PHP space, we needed to become that first option.

I finally conquered the top spot in 2009 with a book by Robin Nixon. A web developer as well as professional author, Nixon meticulously researched the differences in browsers, servers, and operating systems, and could be relied upon to give the most up-to-date advice. He wrote tight instructions with just the right amount of needed background, with me hot on his heels to point out forgotten information or muddled explanations.

But the stroke of genius that let Nixon conquer the field was his inclusion of another central web technology, JavaScript. He understood, as other authors did not, that JavaScript had joined the other tools as an indispensable skill for web developers. By disciplining himself carefully to cover the most important tools, and to do so succinctly, Nixon left himself room to include an entire extra programming language without making his book significantly longer than the competition. No other book included JavaScript with its coverage of MySQL and PHP. Nixon’s book was number one in the category for years, going through five editions before O’Reilly management moved on to different topics.

Bucking conventional wisdom became something of a competition that affected other aspects of the company besides what we chose to publish. One year we fired all our marketing staff. I remember Tim O’Reilly repeating a nostrum he had heard somewhere, “Marketing is for when you’re not remarkable”, and I thought perhaps that played a role. It was around this time that the business world noticed how remarkable the growth of the company was, generating a lot of positive press for Tim personally and the company as a whole. (I was featured in Fast Company simply for the novelty of creating a personal web site to promote my work.) In his review of this memoir, Tim denied carrying out a mass layoff or denigrating marketing, so this memory is my personal recollection. Whatever really happened, it left a strong impression on me because I had formed a close, mutually supportive relationship with the marketing person in my area, and mourned his loss.

Actually, we knew very well that we still occupied a niche in the computer industry that was intensely appreciated by those who read us, but isolated from the vast majority of computer professionals. Mentioning our name to a stranger would impel one of two highly divergent responses: Some would utter exclamations of exaltation (“I have a shelf full of your books…”) whereas others would say, “What’s that?” And the latter cohort was much larger, even within the computer field.

However, a basic insight we observed about our readership stuck with us: We thrive on repeat customers. Casual computer users did not provide enough revenue to justify the cost of outreach and marketing. Among professionals in the field, on the other hand, we could find an entrepreneurial segment always seeking to learn the next promising software tool. A natural set of steps led from this observation to the creation of an O’Reilly online platform, described earlier in this chapter.

The expense of marketing an end-user book on Windows or the Macintosh could pay off if we achieved best-seller status. I think O’Reilly managers had enviously eyed the success of the Dummies series. The publisher IDG had launched that series with DOS for Dummies, and I have always assumed that the choice of the name was a somewhat fortuitous exploitation of alliteration. The marketing strategy of calling one’s readers stupid, counterintuitively, hit a positive chord in the public.

Computers remain one of the few areas where an admission of ignorance can be displayed without shame and perhaps even with a touch of superiority. In the 1990s, most people had been exposed to personal computers just a few years before. Unlike other widespread technologies, computers rarely inspired love and pride like automobiles or motorcycles. Nor could computers retreat discreetly into the background and become silently indispensable, like telephones. The attitudes of the users mingled a strong desire to reap the potential that computers offered with an equally powerful anger toward the devices’ bizarre interfaces and mulish refusal to do what they were told.

The love/hate relationship persisted for decades, finally mellowing into an appreciative acceptance in an age of smart mobile devices, and later voice interfaces. Still, computers will take a while to reach a golden era where their behavior is coterminous with our expectations.

The helplessness of the general population when faced with software contrasted with the joviality of those few who could grasp computer behavior and fix any flaw. So it was predictable that most people would think of themselves as dummies and assume that a book marketed as a guide for the perplexed would suit them. O’Reilly felt that we could tap into this gigantic market with better offerings than most of the Dummies series, so the company responded positively when approached by one of the most talented and prolific Dummies authors, David Pogue.

Pogue wanted to start a competing series called The Missing Manual. And thus started an unaccustomed venture for O’Reilly: a whole line of consumer-oriented books. (In his review of the memoir, Tim O’Reilly explains that he started the series to help us recover from the dot-com bust, and knew that consumer books wouldn’t be central to the company’s strategy.) The books flaunted a unique writing style, which I had the chance to hear Pogue describe once in a presentation at O’Reilly’s Sherman Street office in Cambridge, Massachusetts. He turned certain rules on their head, such as the common injunction to avoid italic emphasis in technical communications. He loved italic, because it made the author’s voice more conversational. He looked for ways to establish friendly relationships with the reader, while eschewing cheap effects that really were dumb.

Apparently the Missing Manual series filled a need, and it made us a lot of money. Pogue himself started on a trajectory to fame, weaving his expert banter and consumer-level knowledge of technology into a career as a New York Times columnist and PBS film-maker.

Like the Dummies series, The Missing Manual occasionally poked its head into more arcane topics such as the PHP programming language in tandem with the MySQL database. The series wagered that the audience for this technical pairing was large enough to justify a book, because that combination of tools drove thousands of web sites in the 2000 decade.

Lax chaperones: Perl and Python (1990s)

O’Reilly published its share of duds: books that showed promise but didn’t sell well. But we were also lucky, releasing a few subpar books that became hits. Two of our major series, one on the Perl language and the other on the Python language, were saddled from the start with books that had demonstrable flaws but were hailed as indispensable by large communities.

Perl was an early favorite at O’Reilly, because it was the first language to hit a sweet spot among system administrators and was quick to learn. As a scripting language, Perl relieved programmers from tedious quality measures such as declaring all their variables (although declarations were eventually added as an option to reduce errors). And despite its ease of use, it was powerful enough to write serious programs, particularly through regular expressions that were of unprecedented sophistication and grew only more powerful over the years. Perl also wooed system administrators by mimicking the syntax and behavior of Unix utilities they already knew, so that instead of tying together several tools with cumbersome connections known as “pipes” in Unix, they could write a few lines in one consistent, familiar style.

The historical origins of the first book on Perl, whose success went off the charts for programming books, are not really my story to tell. Nor are the reasons for its unique expository style. I’ll offer just a few nods to key people in its development. Tim O’Reilly launched the project by recruiting Randal L. Schwartz, a Perl contributor so insightful that a whole Perl algorithm (the Schwartzian transform) was named after him. He in turn brought in the brilliant and gentle inventor of the language, Larry Wall.

Schwartz taught classes with a laser-focused practicality leavened with quirky humor. Wall, along with his own quirky humor, has an idiosyncratic way of finding new layers of meaning, illustrated when he was later invited to keynote our Perl conference. He found it not just amusing but also deeply significant to call his talk “The State of the Onion”. And he continued to give his Onion speech long after the Perl conference expanded to be an Open Source conference and Perl was no longer the centerpiece. Although fewer conference attendees in later years turned up in sessions about Perl, Wall’s thoughtful Onion talks still drew large, adulatory audiences.

But I don’t know how these two talented authors came up with the grab-bag of speculative excursions, advice from the shop floor, and other thoughts that constituted Programming Perl. Wall was working on the official Perl documentation at the same time that he wrote the book, and the DNA transductions between the two types of material did not merge seamlessly.

Despite the book’s unprecedented and inconsistent exposition, the Perl community responded with incredible love and affirmation. Readers outside the community who wander in bewilderment through the tome complain from time to time, but their views are lost in the flood of adulation. The book sold year after year, expanding over multiple editions almost without editorial oversight. I was responsible for one later edition where I did what for me was unprecedented, and followed Tim’s lead in letting the new author Tom Christiansen do whatever he thought best. Wall was not available to do much writing at the time. But Christiansen, a master of natural languages as well as programming languages, grasped the essence of what Wall and Schwartz were trying to do.

What many programmers seek from publishers is affirmation, and the mere release of this book in that period was enough to establish the importance of all the changes in computing that Perl represented—changes I will cover in the description of our historic Perl conference. To refer to Programming Perl, programmers would use shorthand and call it the Camel, after the animal on the cover. Many programming forums, such as the popular discussion board Slashdot, would slap a picture of this camel on items about Perl, as if that picture were a trademark of the Perl language rather than an artistic choice by Edie Freedman, our designer. Indeed, Freedman’s decision to use old woodcuts of animals on our book covers branded O’Reilly more than anything we did.

Tim O’Reilly, in his review of this memoir, said, “We had a strong house style shaped by how I liked to explain technology topics, and often heavily rewrote what our authors turned in. But I realized that Larry’s voice was sui generis, and surprisingly effective, even if a bit difficult for beginners. I always loved the book, and over-ruled the editors and reviewers who wanted it to be ‘fixed’.”

Wall writes, “Perl was designed to evolve rapidly, and we were already recognizing that the book couldn’t possibly keep up with the manpages for a rapidly evolving language. So the main point of the book was to build a culture around the language. Because my training in linguistics concerned how to help dying natural languages survive through literacy and cultural self-reinforcement, I already had some idea how to get an artificial language to thrive by building a community around it. So the book itself was designed to reinforce the quirks that were already becoming evident in Perl culture. More than a set of quirks or puns, though, the fundamental purpose was to convey a certain joy in programming. I’m not aware of any prior computer language book that treated community and culture as critical language features before the Camel. ”

Schwartz went on to plug a gap left by Programming Perl by writing a more conventional book named Learning Perl. I found it completely lucid. I’ve heard that some people are deterred by the basic Unix knowledge it assumes of the reader. But for the Unix users and administrators learning Perl at that time, the tone was perfect.

Our approach to other languages sometimes took other odd directions. Like observers everywhere, we often undervalue new technologies, especially if they seem derivative. When PHP came along, management treated it as a kind of toy language and allowed the first couple books to be handled by a junior editor who did not offer much guidance. I don’t think anyone was asking the basic editorial questions: “What does the reader know at this point? Are you giving them the information they need right here? What is relevant to the task they have to do? How does your passage here contribute to their growth?”

Eventually, because it was easier than Perl to use for back-end web programming, PHP emerged as by far the leading language on the back ends of web sites, and one of the most important computer languages, all the while suffering derision from people in the computer field who saw it as conceptually inadequate. (They did cite legitimate security concerns, and these were fixed.)

And then Python. I myself paid little attention to it at first. I had suffered through older languages such as COBOL and FORTRAN that required strict spacing and layout, so I could not see a large role for Python with its own strict formatting requirements. I also had suspicions about the blithe promises by Python lovers that it would reduce the visual complexity of code.

Python is vaunted as the quintessential easy language, free of fussiness and extraneous punctuation. True, it makes parentheses optional in many places where other languages make them mandatory, and it removes the need for curly braces by using indentation to set off code blocks. But was it good design to make indentation a significant syntax feature?
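
A tiny example of my own makes the question concrete. In the sketch below, indentation alone decides which lines belong to the loop and which run after it; shift the final return one level to the right and the function silently returns after the first iteration, with no brace to flag the change.

    def count_matches(lines, word):
        count = 0
        for line in lines:
            if word in line:
                count += 1        # runs only for matching lines
        return count              # dedented, so it runs after the loop ends

    print(count_matches(["perl and python", "just python"], "perl"))   # -> 1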

Forcing white space to be meaningful is an atavism of languages going back almost to the beginning of programming history (FORTRAN, MUMPS, and the Make utility that I wrote about in my first project for O’Reilly). Few other modern languages have followed Python’s indentation model, despite its popularity, which suggests that most language designers have considered and rejected this aspect of Python syntax.

Even when I first saw Python, I didn’t think it could do much better at readability than any other language after decades of research into compilers and programming languages. As Python gets used in non-trivial real-life applications, I think my prediction has proven correct. Certain difficult abstractions, such as nested arrays of diverse types, turn up over and over as programming languages evolve—Python just like all the rest—because these complexities are necessary for producing maintainable programs that grow large.

When a proposal about Python came our way, our managing editor Frank Willison took nominal control of the project, but not in the strategic manner in which Tim O’Reilly gave free rein to the authors of Programming Perl. Like the Perl book, our first couple of books on Python were sprawling and disorganized. Once having found a piece of information, you could boast that the book had it—and how could the book not have it, when it approached a thousand pages in length? The real test of good writing is whether a topic appears where the reader can make use of it, a criterion neglected by both Wall and the Python books’ author, Mark Lutz.

But also like Wall’s Perl book, our two early books on Python attracted large numbers of enthusiasts. They made us money, but stood in the way of our creating a coherent Python series as that language showed signs of taking a central role in modern computing. Python stands as virtually the default language for the most important developments in key areas such as machine learning and embedded computing. The O’Reilly series contains redundancies and some books that miss their mark.

Editors and collaboration (early 1990s)

I think many fields depend on a few carriers of “truth”, even rich, complex fields that thrive on the contributions of many participants. Only a few attain the exalted position of truth-bearer. In the health care field, for instance, many institutions listen only to people who have achieved the status of placing MD after their names. In many religious communities, nobody feels safe resolving a debate without consulting someone who has passed through an ordination ceremony, such as a Catholic priest or a Jewish “Rebbe”. Every contract has to be examined by a lawyer, and so on.

Editors at O’Reilly were long privileged to occupy this position as bearers of the truth. It was a weighty responsibility, tugging at our sleeves during every step from budgeting the upcoming year to marketing completed books. The whole company understood us as privy to a profound grasp of trends in our field and the next round of dominant technologies. We had powerful impacts not only on the planning of individual books, not only on marketing campaigns, and not only on the direction of our company, but hopefully on the broader use of technology.

O’Reilly is now much more than a publisher—in fact, by the end of the 2010s we didn’t even like to call ourselves a publisher. We became something unique where excellent content swirled together with other educational activities and tracking mechanisms. But ironically, in the way we produced the books that we continued to release, the 2010 decade found us a much more conventional publisher than we were when we started.

Let me take you back to the half dozen years starting in 1992, when I joined the company. We had only a handful of editors. These editors were technically trained. Some had worked as professional coders, and all of us could administer our Unix system (we had only one) and fashion some Perl code on demand. We liked to pride ourselves on being different from other publishers because we were an integral part of the communities whose code we were documenting. Mike Loukides wrote a book on Unix system administration, not shying away from such highly technical topics as reconfiguring and recompiling the operating system, which many system administrators did as a matter of course in those days.

And there was no distinction between acquisitions and development, as there was at other publishers. We got to know authors in the community, perhaps contacting them on a mailing list or by surfing project sites, and we stuck with the authors once we contacted them. They could count on a single person to research the field they were in—Perl or another language, networking, or whatever (I did it all)—to work up a proposal and outline with them, to edit the book, and to shepherd it through the marketing process. This was a tremendous amount of work, but fulfilling in a total sense that kept the editors going. (Generous monetary rewards did too, but I honestly didn’t think often about that.)

As O’Reilly outgrew its narrow Unix focus, it hired some extraordinary people as editors. For instance, in Brian Jepson it found a professional programmer with an insatiable thirst for exploring new technological areas and for sharing them with others. I always went to Jepson with questions on technology and where it was going. I felt flattered to see him praise my own work repeatedly, because I always felt like a novice next to him.

Jepson has worked for years on community technology forums in his home town of Providence, Rhode Island. I visited a kind of small Maker Faire event he organized there. He was familiar with software development for mobile devices and equally adept at tinkering with hardware, including 3D printing. Drawing on that aspect of his background, the company moved him into the Maker part of the business, and he eventually wandered on to other things. I don’t believe the company has hired editors with such strong grounding in technology since then.

Our editors made us into the publishing equivalent of what 1980s computer programmers used to call an “engineering-driven company”. This term indicates that technology trends informed what we did, and that those who made the product (in our case, the editors who produced the books) determined the overall direction. This is in contrast to a “marketing-driven company”, which could also be successful and had much to recommend it—but wasn’t as much fun for the engineers.

This chapter is about “strategy”, but you shouldn’t imagine that we were executing detailed plans drawn up long in advance. The growth of a company doesn’t resemble the football play calling systems that impressed me as a kid, so much as the tactical thrusts made by a fencer or tennis player from instant to instant. The contrast between the image of a cool planner rising above the fray and the sweaty contender holding her own on the ground stays with me as I look back over the evolution of O’Reilly.

We have never adopted a routine—there was no “business as usual”. In fact, I would excuse the frustrating dearth of corporate charts and assigned responsibilities over the years, justifying it by our fluidity. True, it was a challenge to find who could carry out a trivial, everyday task such as sending a book to a potential author or providing marketing materials to a partner. But I realized that our mission, goals, and hence organizational structure were always in flux, so it would be a wasted exercise to create a clear chart listing whom to go to for what. Everybody was permanently committed to the overall success of the organization, so the right person for a job would get back to me eventually.

The company did take steps to set direction, though, and characteristically made its editors responsible for doing so. Realizing that the editors needed to pool their expertise and come to consensus regularly, Tim O’Reilly brought us together about once a year for very intense summits of eight to ten people. I remember, for instance, the summit we held in 1992 or 1993, after Ed Krol’s book The Whole Internet User’s Guide and Catalog rocketed to stardom. We had never taken on the Internet as a focus. But at this editorial meeting, we made a critical decision: We’ll do more books about the Internet. Wow.

I’m making it sound like this decision was preordained, but it didn’t seem like that at the time. We charted a generally correct direction at these meetings, and built up tremendous camaraderie along the way. The meetings were as engrossing—I imagine—as meetings where the Federal Reserve Board decides whether to raise or lower interest rates. Our meetings lasted a few packed days. Some of us would drive, others needed to fly. Once at the site, isolated from all distractions, we bonded, dined and tippled together, and determined how in our minds to change the course of human history through technology. Once Tim O’Reilly actually invited us to his Sebastopol house for dinner. I noted that it was fairly modest, evidence that Tim wasn’t materialistic.

Apart from editorial meetings, I had the tremendous luck to get to Sebastopol once a year for many years. This was not a luxury available to most editors or most employees of the Cambridge office. The reason I could get there is that, with a minimal extra expense, I could tack several extra days onto trips taken for other reasons. I attended conferences in the Silicon Valley at company expense, and I had two brothers living in Marin and Sonoma counties. So all I had to do was leave an extra few days or week after the conferences when booking my flights. I would pay for meals and stay with my brothers, going in to the Sebastopol office for a few days and bolstering relationships with people there.

It was particularly valuable to meet Allen Noren, who managed the web team and O’Reilly’s online bookstore for many years, and Betsy Waliszewski, with whom I collaborated closely on the company’s open source strategy. But I tried to drop by colleagues in every department from legal to conferences and customer service. I had the jaw-dropping opportunity to see the bustling Make Magazine lab.

Tim Allwine, a database expert employed for many years in the Sebastopol office, gave me lots of insights. Once he offered to show me a new schema for their reorganization of our MySQL database. Because I worked on most of our MySQL books (and attended the MySQL conferences, run by O’Reilly) I eagerly accepted the invitation. He took me into a room with several rectangular tables draped with long printouts covered in entity diagrams—thousands of fields in dozens of different tables. To see how large and complicated the schema was for our small company taught me a lesson in the difficulty of handling relational data. It may not be surprising that database administrators get paid so much, or that so many organizations find relational databases too heavyweight for many modern applications.

The responsibility that I felt as an editor came out in one exchange with one of my peers. Editors always develop a fondness for certain projects that don’t make money. We propose some books in the hope that management will take a chance on them, and reluctantly accept the judgment that their technical superiority will not translate into sales. So fellow editor Linda Mui once told me of her sadness that a project she deeply cared about had been rejected. I reminded her that our company’s income came entirely from the projects we editors led, and that a hundred other employees depended on us for their livelihood. Thus, we should see ourselves as fiscally responsible as well as visionary.

During the dot-com boom, few professionals made off with more loot than O’Reilly editors. We actually got royalties on every book we edited, just as authors did. This was pretty generous on O’Reilly’s part. They were already paying better royalties to authors than most publishers. (Other publishers put impressively high percentages in their contracts but insert clauses that whittle down payments, in schemes reminiscent of those used throughout the content industries such as music recording.) An editor, on top of that, could receive four percent of profits. I used the proceeds to expand my house and send both of my children to college with barely any loans.

Eventually, management realized that editors were draining resources that could be used to develop other departments. We were put on a conventional bonus plan, which rested on tiers of performance at different levels of the company in order to promote cooperation. First, the company as a whole had to show a profit for the year. Next, your department would have to end up with a positive balance sheet. Finally, you had to demonstrate a substantial contribution to that achievement.

I received bonuses pretty often, but couldn’t ever anticipate what I’d get, much less choose projects that would potentially maximize the bonus. Cooperation and information sharing were part of O’Reilly culture from the beginning, so none of us had to change our behavior for the sake of a bonus. I ended up regarding the bonus plan like airlines’ frequent flyer miles: nice to get, but not worth changing behavior for. The change in compensation pointed to a gradual trend at the company: Although content was still king (a common saying in the 1990s, attributed to Microsoft founder Bill Gates), the editors’ royal status was being curtailed. Even the editors’ meetings eventually came to an end without drawing remark.

☞ Parallels always intersect in the public sphere: Activism