Thousands of hands to the tiller: Writing as empowerment

New media, new surprises: Blogging and public presence (2000s)

I’m slow to try new platforms, whether sites for self-promotion like Medium, or entire new types of outreach like Twitter. My attitude differs from that of many wannabe super-connectors who go online and seem to be everywhere. I prefer to make the most of a few outlets, not to subdivide my time and attention into finer and finer shreds. I ended up on Facebook, LinkedIn, and Twitter. And I started late in the game, not feeling at first the need to send or receive wisdom in single-sentence packages. But an action by Tim O’Reilly, who had attracted nearly two million followers at the time, persuaded me of social media’s perhaps insidious power.

In 2009, both Tim and his company were heavily invested in a movement called Government 2.0, which appears later in this memoir. At that moment, I was analyzing the structural risks and weaknesses of open government. I listed 19 problems that governments had to resolve to exploit data and engage the public. I prepared a draft that I reviewed with several people, including Tim. For easier review, I put the draft on my web site but asked reviewers to keep the location secret.

Security through obscurity didn’t work. Soon I was getting messages about the article from people I had never heard of. Luckily, the reviews were all positive. Asking around to uncover how the URL got out, I learned from Tim that he had put it in a tweet. My roster was now published, whether or not I felt ready.

I’ve been told that Tim was actually the one who invented the use of Twitter as a way to share news articles. Before he did so, it was a swamp of trivial observations about what people were having for breakfast or what their cats were playing with. Whether or not Tim played the historic role of turning social media into news feeds, his action of publicizing my article showed me how one could benefit professionally from Twitter. I joined up and used it a few times a week.

I was slow to get to blogging a decade earlier, too. When I heard about weblogs, I considered them a deplorable form of narcissism. Anything I spent time reading, I said, should be thoroughly researched and carefully fashioned, preferably with review by a separate editor. Who would want to peruse someone’s random jottings day after day?

In fact, I still feel that way about weblogs. Most postings I’ve seen are either cotton-candy ephemera that wouldn’t interest anybody except the author’s mother, or a gee-whiz expounding of ideas I’ve seen a dozen times in other places. True insight is rare. Although I track the blogs of a few leading thinkers, I notice that they tend to post only once every few months. Furthermore, those blogs all go dead after a couple years when the authors find something more productive to do.

The O’Reilly web staff persuaded me to start blogging. My records show that my first posting went up in February 2000 (now gone forever, like most of my postings). The title, “Promising Personalized News”, suggests that I was hip to an important trend very early. Two decades later, in response to the launch of a curated news site by Apple, I published a piece on the same subject, probably with much more insight gained from years of observing the phenomenon. (Because O’Reilly no longer ran a blog, I put this article on LinkedIn.)

Blogging enhanced my career to an incalculable degree, even though I lacked an explicit motivation to post. Only a few months after that 2000 posting, I hit a gold mine with a posting that led directly to our engagement with emerging technologies, described later in this chapter.

For years after that I was blogging at least weekly on one O’Reilly site or another. The team kept setting up different domain names with different types of blogging; one of our most popular sites for a while was XML.com. Yes, believe it or not, we thought XML would be the future of the Web and of publishing.

Most of the content I wrote for XML.com is still there as I record this memoir, although the site is moribund. I cannot say as much for most of the several hundred blog postings I wrote. They would have complemented this memoir, offering interesting views on the development of computing and the Internet, but they almost all got wiped out during one or another of the transitions initiated by some team of web developers.

I remember only one occasion when O’Reilly’s web team made an effort to create an archive of old content. Most of the time, they understood their job to be simply reflecting the company’s latest model for public relations and outreach. They clearly had no idea that there was any value in preserving articles such as my 2000 posting with its outsized impact on O’Reilly and the computer industry—the article we credited with launching the company’s focus on emerging technology and “alpha” programmers; the article that ultimately led to our new mission statement: “to change the world by sharing the knowledge of innovators.” It’s as if only the future mattered, not the past.

I must share some of their guilt. Writing frequently and hastily, I didn’t bother during the early years to keep copies of most of my pieces. I might have undervalued my own writing because it was just a blog. Or maybe, until I started to notice the disappearance of my work, I couldn’t fathom that a company devoted to high-quality content would casually discard its own historical treasure.

December 2020: After some intensive web searching, I discovered a broken link to the historic article just described, which permitted me to retrieve the text from the Internet Archive’s Wayback Machine. I have now restored the article to my personal web site.

At some point, management decided to shut down the weblog. The manager of the site told me they would concentrate on more substantial articles, giving authors time to do research and requiring them to go into more depth. I still wrote for them occasionally, when I had a topic such as algorithmic bias that deserved the research effort. Although I see good reasons for their shift in priorities—after all, thousands of other sites offer blogging opportunities—I regretted the loss of a place to post short pieces with the attitude of, “Gee whiz, here’s something new and potentially important—let’s pay attention to it.” I think such postings could generate in readers an anticipatory thrill of discovery, even a sense of wonder, that would bring them back to our site.

The company’s more focused approach to the web site paralleled a general narrowing of the topics on which we published. To get the biggest bang for their buck, management sought groups of related products we could bundle—for instance, content on machine learning. The new policy disparaged individual books in new areas; every book had to fit a strategy chosen by management.

Even before blogging, writing for a number of different web sites (and often print media too) maintained a constant hold on me. As I explain elsewhere, I wrote a weekly column for three years about Internet policy, from 1997 through part of 2000. All in all, I have over one thousand articles under my byline.

The variety of topics I blog about is so broad that I can’t even summarize them. That tripped me up once during a juicy networking opportunity.

An O’Reilly convention I had attended was a wrap, and I took a flight to San Francisco for my usual activities following any West Coast conference: visiting friends, staying with relatives, and dropping in on O’Reilly’s Sebastopol office. It so happened that Tim O’Reilly was sitting a few seats away on the plane. As my wife pointed out later, Tim could easily have afforded a first-class seat, but his characteristic frugality put him in economy class, so I had the opportunity to chat with him.

Tim told me that he had been invited to an important opening in San Francisco: the office of Creative Commons, which was changing the way musicians and writers thought about collaboration and content distribution. He couldn’t attend, so he suggested I take his place. I had the honor there of meeting John Seely Brown, one of the most famous researchers at the intersection of computing and society—an area clearly very dear to me. When I mentioned that my work at O’Reilly included a lot of blogging, he naturally asked what topics I covered. I didn’t have an elevator speech ready. Despite my fervent hope to impress him, I couldn’t think of what to say, and ended up mumbling that I covered a lot of things.

How I can write about so many different fields and causes, I don’t really know. I have become a Jack of all trades just by being interested in the world and following my instincts toward whatever is interesting. That curiosity is what led me to investigate and identify peer-to-peer technologies even before that term was applied to them. It also prompted me to research health IT, an intuitive scouting out that led me to numerous conferences, hundreds of articles, and a White House summit.

Along the way I came to understand the meaning of the rather fusty word “autodidact”, realizing that’s what I am. In this trait I matched Leonard Woolf (a journalist and editor who married the more familiar Virginia). He said in his autobiography that he could edit a journal on any topic by spending a few months intensively reading about the field. This didn’t make him an expert by any means, but it gave him enough background to guide the experts in producing material. I call this achievement the ability to develop a bullshit meter. You may not know the truth, but you can tell whether someone’s assertions contradict each other, or violate basic principles of their field.

I’ve always learned more from my own reading and practice than from school. I did reasonably well in high school, but found myself less and less engaged in college. I always wanted to go off exploring angles that interested me, rather than what the instructor thought was important. That’s why I never pursued an advanced degree after getting a rather flaccid B.A. in music.

This is nothing to be proud of. I highly respect people who can adapt to the discipline of academia—or who are so brilliant that they can meet its requirements while remaining focused on their personal vision. When I saw, at an MIT forum, a discussion among experts that produced dazzling insights, I realized the difference between being cross-disciplinary and my own learning, which is just undisciplined.

But are there advantages to flitting from subject to subject, drawn like an insect by the ever stronger attraction of nearby scents? Yes, I think so. My synapses are wired in such unusual, multifolded ways that I find surprising associations between unrelated ideas. I can carry principles into venues where they didn’t appear before. In my roster of articles, I have never allowed myself simply to repeat what others have said before. Every piece offers readers something new to chew on.

What everything comes down to: Copyright (2000s)

The digital tsunami, which has forced so many changes in careers, politics, and how we live, also shakes apart the delicate balances that existed in copyright, patents, and trademarks—the main subtopics of intellectual property. Each of these subtopics has in turn leapt onto the stage during my career.

Editing Van Lindberg’s book Intellectual Property and Open Source came naturally to me, because I had been developing the kind of amateur knowledge I mentioned earlier in these areas for years. Lindberg, who later became a director of the Python Software Foundation, got a law degree as a young coder and maintained an appreciation for free software. He could easily fall into nerd-speech while talking about either law or coding, and was the perfect author for a book straddling those realms.

Our deliberations over the title were fraught with caution, more than the discussions I always hold with authors to choose the best title for their books. Every book needs a title that instantly draws attention while conveying the book’s scope, even before the age of search engine optimization. The choice is easy enough if you’re talking about one well-defined topic, and especially if your book is the only one about the topic: Just assign a single-word title, or one garnished with a modest gerund such as “Using”. But most books have to fit into a pre-existing landscape of related titles and must somehow signal to readers that they’ll derive some special benefit.

Our book faced its own unique hurdle because free software developers were a major audience, but the Free Software Foundation had declared “intellectual property” a term to avoid. These activists love to ban terms in everyday use—“cloud” for third-party computing sites is another example—and their reasoning always has some legitimacy. They accurately argued that the various legal regimes falling under “intellectual property” were too different to be lumped together. But we were not about to saddle our book with the label Copyright, Patents, Trademarks, Trade Secrets, and Free Software, so we had to generalize somehow, and Intellectual Property was the perfect term. There are professors of intellectual property at universities, law firms advertising their services in intellectual property, and other firmly established uses of the term.

I doubt whether our potential readers were deterred by our using either of the terms the FSF dislikes. The book sold poorly because, I suspect, programmers and system administrators love to talk about the topics flippantly but won’t be coerced into actually learning about them. I idealistically thought that since numerous computer professionals complain all the time about the abuse of the patent regime or copyright system, they would want to understand the real principles at stake.

It’s easy to dismiss concerns about these legal areas as fussy. Tim O’Reilly has regularly said that licenses aren’t particularly important in free and open source software, and that building a healthy, responsive community is much more important to success. Although I could agree with his ranking—community is more important—the license is not a trivial matter. A failure to respect a license, trademark, or other aspect of intellectual property can lead to abuses that hurt free software and open source communities.

Nor is anything simple with these topics. The history of fair use is a case in point. Once, when talking to the O’Reilly company’s own lawyer about some matter, I suggested that someone’s use of a trademark might fall within the scope of fair use. “The concept of fair use doesn’t apply to trademarks,” he snapped at me. “Fair use is only for copyright.” In fact, trademark law has a fair use doctrine of its own. This lawyer vanished from the company a few months later. I’m sure that any lawyer working for a publisher needs to understand trademarks, because we deal in them a lot.

O’Reilly was always the victim of flagrant copying, although we refused to take measures we felt were unethical to combat copyright infringement. We wouldn’t wrap our books with Digital Rights Management, or what opponents of the practice like to call Digital Restrictions Management. When we forged a partnership with Microsoft Press to distribute their books on Safari, Laura Baldwin declared that our refusal to use DRM could be a deal-breaker, and after some grumbling, Microsoft Press management agreed to forgo it.

DRM consists of techniques, built on encryption and digital signatures, that prevent people from using content unless they pay for it and obey whatever rules the content producer has set up. If you bought a video in Europe and found you couldn’t play it in the United States, you ran up against DRM. As the example shows, DRM encourages a lot of arbitrary restrictions, which in turn prop up odd business models. Worse yet, DRM is often poorly designed and fails, so copyright holders have pushed through numerous laws that punish tampering with DRM. The far-reaching impacts of these laws are even worse than the DRM itself. For instance, they have a chilling effect on a lot of legitimate research. Researchers have actually been arrested for offending the copyright holders.
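
For readers who want the mechanics made concrete, here is a minimal sketch in Python of the two mechanisms just mentioned, encryption and signed licenses. It is not any vendor’s actual DRM scheme: the region rule, the shared secret, and the stand-in decrypt function are all invented for illustration, and an HMAC stands in for a true digital signature.

```python
# A toy sketch, not any vendor's real DRM scheme. The content is encrypted,
# and the usage rules travel in a license whose integrity is verified (an
# HMAC stands in here for a true digital signature) before the player will
# decrypt anything.
import hmac, hashlib, json

SECRET = b"key baked into the licensed player"   # hypothetical shared secret

def issue_license(rules):
    """Serialize the producer's rules and sign them."""
    blob = json.dumps(rules).encode()
    return blob, hmac.new(SECRET, blob, hashlib.sha256).digest()

def play(cipher_bytes, license_blob, tag, player_region):
    # Refuse to play if the license has been tampered with.
    expected = hmac.new(SECRET, license_blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("license tampered with")
    rules = json.loads(license_blob)
    if rules["region"] != player_region:
        raise PermissionError("wrong region")   # the Europe/U.S. case above
    return decrypt(cipher_bytes)                # key released only after checks

def decrypt(data):
    return data   # stand-in for real decryption

blob, tag = issue_license({"region": "EU"})
play(b"...encrypted video bytes...", blob, tag, "EU")    # plays
# play(b"...", blob, tag, "US") would raise PermissionError: the region block
```

The essential design point is that decryption happens only after the license checks pass, which is exactly what makes arbitrary restrictions like region coding enforceable.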

The easiest way to mitigate infringement is to provide content as a service and update it frequently, so that no single copy has much value. Thus, a side benefit of O’Reilly’s turn to an online-first strategy in the 2010 decade was to create a bulwark against copying. In contrast to our earlier principled stand, I don’t believe that freedom from DRM was even a consideration during the move to our online strategy.

Slump (2000s)

In the early 2000s, several impacts of computing and networking on society became evident to those who followed technology. We saw entire fields transformed or eliminated, and jobs along with them. I could tell that automation would eliminate more and more jobs in manufacturing, construction, and other fields that might have seemed protected from the Internet. I started talking to friends and colleagues about the unhappiness this could cause, expressing concern over the role I might be playing by making the technologies easier to use.

All the people I talked to were reassuring. They may or may not have agreed that a worrisome social transformation was taking place, but they said that I shouldn’t take any burden for it on myself. Even my brother Alan, who is both politically astute and highly spiritual, took this position. But now that the damage to society and the public sphere is so obvious, I wonder whether the question should be revisited. Was I making life better or worse?

It turned out that I myself fell victim to these trends. My magic touch for finding topics and signing authors abandoned me as the Internet reshaped programming itself. And although the 2000s and early 2010 decade presented many gratifying opportunities of which I can boast, I constantly felt pursued by the gremlins of doubt, depression, and distrust of my own competence.

The contrast with the 1990s couldn’t be greater. That was the period of the dot-com boom, when the door was wide open to fine texts on computing. I seemed to ride a surge of triumphant publishing campaigns. The two series I championed, on Linux and MySQL, flew off bookstore shelves. There was no difficulty generating proposals and signing authors on topics that reflected obvious user needs.

Shortly after 2000 this all fell apart. I still can’t explain quite why. I was doing all the same things I did during the 1990s—maybe that was part of the problem. The publishing field was different, and I think everyone in publishing, journalism, or media felt it. No series of books would save us. To avoid the fate of so many publishers who were throttled by their own business models and vanished, we had to go beyond the books that had formed my career.

I’ll offer two examples of failed projects to show the challenges facing me as an editor at that time. Both examples involve an extremely popular database engine, MySQL, which delivered huge sales for O’Reilly in the 1990s and early 2000s. As editor of the series, I was always looking for new ways to milk the topic. I had formed relationships with numerous employees of MySQL AB, the Swedish company behind the database, from the top of the company on down, and I kept in close contact after Sun Microsystems and then Oracle bought the company.

Sveta Smirnova came highly recommended by my lookouts at Oracle. She was responsible for troubleshooting there, as a Principal Technical Support Engineer, and wanted to write about how to troubleshoot problems in MySQL.

In any technical area—whether diagnosing the rumble made by your car or assigning blame for the Challenger space shuttle disaster—troubleshooting is the mark of the highest expertise. Most computer books offer you a set of steps, assuring you that if you follow these steps, things will come out right. Well, what if they don’t come out right? What mischievous element hiding in your environment is foiling your best efforts? Troubleshooting is where programming meets Agatha Christie.

When it came to MySQL troubleshooting, Smirnova knew the answers. She could do exactly what every programmer or database administrator wanted: to start with the error staring them in the face, and take them on a quest that unmasked the criminal. I was in awe of her understanding; I’m sure she had saved millions of dollars for MySQL AB clients.

And I’m sure that, had she written her book during the peak of our MySQL sales, it would have become a must-purchase. As released in 2012, however, the book made hardly an impact. People were still using MySQL, but they were turning their attention to other technologies. We just couldn’t spark excitement with the book.

The next example concerns a topic of pressing importance that got smacked down at the start. I had met two impressive engineers from MySQL AB who had written the book MySQL High Availability for us. These engineers, Mats Kindahl and Lars Thalmann, told me that MySQL AB, like O’Reilly, was highly scattered geographically. (I guess that Swedes don’t have the luxury of building a world-class programming team using just Swedes.) So MySQL AB had to manage teams of people around the globe, and they saw other companies facing the same challenge.

At that time, around 2010, the rage among programmers was a spectrum of collaboration strategies that went by various names: agile development, Scrum, and Extreme Programming (XP). These strategies all featured tightly integrated teams in a single geographic location. By talking face-to-face constantly with each other and with users of the software (or marketing people who supposedly could represent the users), these programmers could move fast. They built a tacit understanding and corrected each other’s missteps informally.

Kindahl and Thalmann, by contrast, had learned the collaboration techniques discovered by free software developers who lacked the luxury of geographic colocation. (MySQL itself is open source software.) When you’re integrating code changes from Sweden, Canada, and Japan, you need the exact opposite of the Agile/Scrum/XP culture. Decisions must be explicit, and they must be recorded. Each programmer must be granted autonomy and must be trusted to make key design decisions, although the results are certainly reviewed.

These policies about the location of staff concern all of us. They aren’t just an abstract argument over how to do software engineering. The conviction that smart people need to be crowded together in order to learn from each other snowballs into the “innovation hubs” I criticized at the beginning of this memoir. Dynamic companies elbow each other while taking over desirable downtowns and flying in more and more recruits to glut the neighborhoods. The result is an overcrowding in places such as Silicon Valley that’s just as socially unhealthy as the corresponding starvation of the homesteads that lose their creative talent to these hubs. We’re living every day with these problems in the Boston area.

I also argue—and am sure I’ll draw heated rejoinders for doing so—that intense, colocated engineering practices hinder diversity in software engineering. Managers looking for new hires who can communicate easily with fellow team members, and trying to promote those who fit in most seamlessly, will consciously or unconsciously favor people who match the team’s current culture. It takes extra effort to incorporate diverse voices—can the high-pressure Agile/Scrum/XP teams afford to do so? It all depends on how much the managers care about diversity and their awareness of what it calls for. But I can’t say that distributed teams solve the problem. Those projects—including free software communities—need an explicit commitment to improving diversity too.

Knowing the achievements of distributed free software teams, and seeing similar distributed efforts all around me, I leapt on Kindahl and Thalmann’s idea of a book about how to coordinate distributed programming efforts. O’Reilly management did not share our enthusiasm. They rejected our proposal, and I have to say that they were probably right. In the early 2010 decade, the computer industry was not ready for our message. A book like that would certainly be popular after COVID-19 blew apart all those intimate software teams. But when I was ready to cover the topic, few people recognized its value.

The idea was certainly percolating, though, because a similar topic captured O’Reilly’s interest a few years later. Large companies with distributed teams were drawing on free and open source development techniques to get those teams to collaborate internally. When Danese Cooper, a strong open source advocate I had known for many years, approached us with the concept of InnerSource, the company assigned me to write a report about it. This report, “Getting Started with InnerSource”, came out in 2015 and became the most popular report we had ever released.

So something had changed between my salad days of the 1990s and the parched 2000s. Why couldn’t I continue acquiring as before? A lot of changes probably combined to quash my earlier success in signing authors.

The main change that wrecked my career, I think, may surprise you. It was the very movement I had championed and documented for years: free software. Programmers who came to understand free software and to witness its rapid success realized that they, too, could have a large impact on society by contributing to the software. In fact, free software could appeal to any sense of grandiosity a programmer might have. For a talk I gave in Tokyo in 2001, I discussed the fantasy of control that grips hackers as they code, documented by Joseph Weizenbaum as far back as 1976 in his book, Computer Power and Human Reason. What would be more gratifying to someone looking to have influence than to contribute to a project used by a hundred million people?

But influence is just a sweetener—perhaps an important draw, but probably not one that a programmer would be conscious of (unless they read the speech I gave in Tokyo). The more important attraction for programmers was the chance to be recognized. If you wanted to be a leader and influencer in programming, the way to do that used to be to write a book; now it meant writing software. Your fellow programmers would find the code you wrote more meaningful than a text describing what to do.

The publishing model for computer books was always based on temporarily converting programmers into authors, to the benefit of their reputations. For years this worked magnificently. I heard from two authors who believed that the publication of their books allowed them to start their own consulting firms.

But if you could do just as well, perhaps even better, by remaining a programmer and contributing to free software, why go outside your comfort zone to write a book? And incidentally, royalties from most books were far less than one could earn at a conventional programming job, so financial compensation wasn’t much of an enticement.

The Internet also changed advertising, undermining old ad channels such as journals. Self-promotion became the royal road to influence, and this permanently altered the power relationships of authors and publishers. Let’s look at how the shift played out.

O’Reilly changed certain practices in response to the diminished reach of advertising and public relations. Although we usually found our authors through their active participation in technical forums, we used to ask them in our early years to withdraw from public participation while they were concentrating on their books. But as Internet communities became more and more important for gaining a following, we reversed our position and encouraged authors to stay connected to these communities.

Although at first that change was simply opportunistic, this gargantuan contest for visibility eventually hardened into a strategy—one used by all publishers. They started to ask potential authors about the size and composition of their “platform”: how often they posted to blogs or forums, how many people followed them on social media, and so on. Publishers were backhandedly admitting that their own marketing efforts fell short in selling a book; the author was explicitly asked to do their own independent marketing.

No surprise, then, that lots of authors decided to self-publish. They could well argue that their platforms were promoting their publishers more than the publishers were promoting the authors. O’Reilly’s move to an online subscription model was our response to this threat: Our data-enhanced service could link up readers to appropriate content much more reliably than conventional advertising had done.

I could cite lots of other reasons that all my old wiles crumbled when I tried to recruit authors. The vast firehose of Internet content enticed people with the sense that they could get any information they needed for free (although most came to realize that the quality was highly questionable), so sales of books were dropping. Technology started to change so fast that authors could barely keep up even as they were writing, and the pressure to finish fast kept increasing. I’m speculating about all this, of course, but I think a lot of factors came into focus like the sun’s rays through a convex lens, and the result was a career in flames.

It took me a couple years to assemble this explanation for my change in fortune. At first, all I knew was that authors weren’t returning my email messages. Or they might submit an outline but never follow up and sign a contract. Some even signed contracts and never turned in a single word of content—probably they started writing, found it more work than they thought worthwhile, and dropped silently away.

The slump from about 2005 to 2015, which more or less corresponded to our company’s Fawcett Street location in Cambridge, constituted probably the most difficult time of my life up to that point. Things that used to work for me no longer did. I was like so many other workers alarmed at their status during times of great economic change: I was doing my job competently the way I had always done it, and finding myself a failure nevertheless. I can empathize with people around the world who protest modernity after finding themselves driven from their customary positions. The closings among publishers in this period went along with the decline of journalism and other fields recoiling from the impact of the Internet.

My commitment to free software from the beginning of my O’Reilly work was so strong that, years after we retreated from treating free software as a special focus of our work, other staff still generally identified me as the “open source editor” as they directed authors my way. I tried to build peer-to-peer into a similar homestead on which I could settle and peddle my work, but the movement was not successful enough. I started to realize this around 2005. At the same time, I could see my status as a source of information and authority on computing trends slipping away. Invitations to conferences continued to come in for a while, but I eventually saw a decline there as well.

To be sure, many wonderful moments lay ahead. My work with the Peer to Patent project, which I had first heard about at a Chapel Hill conference on copyright and free culture, led to my publishing articles in The Economist and Communications of the ACM. My investigations of health IT and open government opened many doors and introduced me to wonderful people who formed new communities for me. In 2007 I edited one of my most successful books, both commercially and reputationally: Beautiful Code. It led to an entire series.

A survey of my blog postings over those years shows intensive activity. I continued to cover free software while adding copious amounts of material about government and health IT. I was traveling extensively, conducting video interviews, and signing up people for all kinds of schemes that didn’t pay off.

This period was filled with false starts, by both me and the company. Government work and health IT failed to produce revenue. I rallied O’Reilly around an audacious comic book project, Hackerteen, and we just lost money.

Thanks to shifts in both technology and the publishing industry, every successful book that I tried to update failed in the new editions we put out during the 2001-2010 period. This includes our Linux kernel books, which could not keep up with the kernel’s speed of development and increase in complexity. Embedded systems, which had produced several lucrative books nestled in a pleasant little series, couldn’t sell any longer. My highly regarded MySQL series fell apart and new offerings in the series failed to sell. (At the same time, Oracle bought the MySQL company and decided not to keep sponsoring our conference, which I had covered every year.)

The decade from 2005 to 2015 was therefore a time of confusion for all. For me, in addition, it was a time of shame and vulnerability. I certainly wasn’t fulfilling the role of a senior editor, who is responsible for looking ahead, for setting the strategic direction of their area of activity—and not incidentally, for creating profitable books.

I don’t remember when I was promoted to senior editor. When I came to O’Reilly, there were very few editors and all of us took on a grand buffet of responsibilities: We interacted with the communities we were documenting, explored new possibilities, used the technologies we were writing about, and forcefully stated our opinions. The appellation “senior” barely registered with me. I noticed that many editors—at O’Reilly and at other publishers—put the title “Senior Editor” on their business cards, but I just stuck with the simple title “Editor” because I figured that the difference between an editor and a senior editor was an internal company matter, not relevant to outsiders.

At the beginning of this decline, I had little idea of the analysis I developed later. I just felt the ground falling away. With teens at home, my wife was busy with other things and didn’t really understand the position I was in. No one outside of work could appreciate what was happening, so I sometimes confided (and even complained) to coworkers. I admit it: I was seeking empathy and emotional support in a business environment, and even in the loose and accommodating O’Reilly culture, I was probably acting inappropriately. Perhaps I am still doing it here by writing up my memories. But it’s necessary to write about this period to convey the devastation felt by people dislodged from their comfortable, thriving context. In my case, the feeling went on for as much as a decade.

O’Reilly management was changing its strategy in reaction to market shifts too, and started rejecting more of my proposals (when I was lucky enough to get a proposal from an author) than they accepted. The company was squeezed from all sides, and so was I. Although the company could supplement its books and online articles with other forms of content such as video and online training, I had nothing special to offer in those media. The equivalent of book editing in video would be rather eerie: It would have meant watching an expert deliver their material in front of us and keeping up a running commentary: “Substitute this word for that…turn this way and raise your hand…look straight at the camera…now summarize the topics for the past ten minutes…” We are not film directors, however. And I was never asked to translate my editorial skills to the video realm even though I’ve delivered a number of my own successful presentations, some caught on film.

I might well have joined the ranks of the obsolescent and left O’Reilly with a severance package if Mike Hendrickson had not tapped me for his sponsored content program. In many ways, our shift from books to other forms of content was a great relief to me. The company’s scrutiny of schedules for our books softened. No longer would I be asked to provide detailed weekly updates to an editorial or production manager. Although I would still note when a book was slipping and take decisive action to mitigate the problem, I would never again have to spend agonizing hours on the phone about it.

Influence? (2000s)

People often professed to me that they saw an indelible association between me and O’Reilly. This was quite flattering, but less and less accurate as years went on.

These people might be basing their impression on bombardments of blog postings and policy suggestions I made as O’Reilly got heavily involved in government and health IT. Or outsiders might be thinking back to my round of keynotes and conference presentations following our book and conference on peer-to-peer. Many people with schemes of collaborating with O’Reilly tried to gain entry through me (and sometimes I could successfully enroll them in our work).

Many remember me as a champion of free software, which I expressed through more than 400 articles, along with my books on the Linux kernel and other important free software projects. I was certainly at the center of corporate planning around open source, and had influence as late as 2007 through my friendship with author Steve Souders and our proposal for a conference on web performance, which I will describe later.

But the period when I really felt empowered was the 1990s, as part of that small team of editors who met once or twice a year during the early years of O’Reilly’s publishing business. This was a unique group of unconventional people with unconventional thinking. I don’t believe a single editor during that period came from another publisher. We were programmers, technical writers, or geeks with various backgrounds, attracted to the joys of explaining difficult concepts. We were constantly talking to one another—or so it seems at this far remove. We often grumbled that we didn’t coordinate our efforts enough. But the camaraderie and shared vision of those young days are things the company has tried many times since to recapture, with only moderate success.

This editorial vision dissipated in three ways, and my sense of being an influencer diminished along the way. First, the locus of decision-making shifted away from editorial meetings until they eventually ended altogether. The last editorial meeting for which I have notes was in 2011. The company set up other institutions, such as a steering committee, that drew on experts from many branches of the company. And clearly they did a good job, because O’Reilly continued to prosper and draw praise from everywhere. I simply never participated in those institutions.

Second, the company started to hire professional editors from other publishers instead of hiring programmers or system administrators. Starting around 2000, O’Reilly came to look a lot like other publishers. But we still prioritized inspiration and wanted to play a role in changing things, as evidenced by our Government 2.0 and health IT work.

Finally, as the options available in publishing narrowed, O’Reilly’s strategy took on a focus that ruled out quirky editorial projects such as my 2007 best-seller, Beautiful Code, or my 2009 contribution to hip computer culture, Hackerteen. It had become clear to management that our survival called for a subscription model, which limited our product line to material for professionals who would come back for new learning. The company then coalesced further around a business-to-business platform, where content was directed at showing results. By that time, although I had the occasional chance to appear on the O’Reilly web site, I was in no sense a spokesperson or important entry point for the company.

Web performance and Velocity (mid 2000s)

The key to finding successful publishing topics is gaining a place of trust among the most successful people in a field. These people manage to balance the implementation of the best current technologies with a passion for advancing the field. I occupied a position of trust and companionship with such people for a couple decades, and enjoyed the momentum that position made possible: Each successful project would lead to the next.

A good example is how I discovered and promoted the field of web optimization.

The story starts with a book on performance in a somewhat different area: relational databases. O’Reilly had a small set of books in the early 2000s about the most popular free software database, MySQL. But the series really took off in 2004, when a database administrator named Jeremy Zawodny approached me with an idea for a book about how to significantly speed up queries. Aided by co-author Derek Balling, Zawodny put together High Performance MySQL and released it to a hero’s welcome from the community. Another set of star authors updated it, and the second edition continued to sell well. Interest then declined, as it did for the rest of our MySQL offerings.

But in the meantime, I had made a leap to an even more lucrative area through a personal connection engineered by Zawodny. Working at Yahoo!, which was still an important force in the computer industry, he took notice of the interesting things a colleague there was doing to speed up the loading of web pages. Zawodny brought me to Yahoo! one day, where Steve Souders dazzled us with a demonstration of tools that delineated exactly how long each element of a web page took to display.

Why was web performance such a pressing issue? In those days, network bandwidth was much lower. It was common to wait several seconds for a page to load. Web designers thirsted for anything that could shave half a second or so off the “World Wide Wait”, as cynics called the Web.

Souders found ways to cut some page loads down to a tenth of their original time. Some of the fixes were trivial, such as moving scripts from the top of a page to the bottom so that they no longer blocked the rendering of everything after them. Other tricks were more complex. And there, as we marveled at the graphs in Souders’s office in Sunnyvale, California, the book High Performance Web Sites was born. After its stunning roll-out, Souders proposed a conference called Velocity, and pulled together experts in his field from several large companies to guide us.
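
To give a sense of what those tools measured, here is a toy sketch of my own (emphatically not Souders’s tooling). It fetches a page, naively scrapes out the resources the page references, and times each download: a crude version of the per-element waterfall his tools displayed. The URL and the regular expression are placeholders for illustration.

```python
# A toy waterfall (my own illustration, not Souders's tooling): fetch a page,
# naively scrape out the resources it references, and time each download.
import re
import time
import urllib.request
from urllib.parse import urljoin

def time_fetch(url):
    """Return the seconds taken to download one resource."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def waterfall(page_url):
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    # Crude extraction of scripts, stylesheets, and images from the HTML.
    refs = re.findall(r'(?:src|href)="([^"]+\.(?:js|css|png|jpg|gif))"', html)
    for ref in refs:
        url = urljoin(page_url, ref)
        print(f"{time_fetch(url):7.3f}s  {url}")

waterfall("https://example.com/")   # placeholder URL
```

Real waterfall tools also recorded when each request started, which is what exposed blocking elements; this sketch only times the downloads themselves.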

After connecting Souders with our management and seeing his initiative come to life, I approached the first Velocity conference with excitement. I was booked on a flight from the tiny Manchester, New Hampshire airport instead of Boston’s Logan, probably because my family and I had planned a New Hampshire vacation around then.

As soon as we boarded the small plane, we became spectators to a freak hailstorm that passed over the airport. It lasted about fifteen minutes, but was enough to totally disrupt the airport’s schedule. They announced that our flight could no longer take off, because other planes had priority, and that there would be no other flight on that route today. I managed to find a shuttle bus to take me back to Boston.

Because Velocity lasted only two days, and because I would miss about two-thirds of it if I went a day late, I just canceled. Other people took over leadership in my absence. Even though I continued to work with Souders and do another book with him, I never played a leading role in our Velocity efforts. It seems like almost a sign from heaven: One fifteen-minute weather event that disrupted a career. This incident illustrates once again the central importance of conferences in O’Reilly’s approach to communities and publishing—the whole web performance effort revolved around the conference, and therefore around the staff who could attend the conference.

Technological improvements on the Web, including faster networks and the growing dominance of content delivery networks such as Akamai, have reduced the importance of planning a web site around performance. But similar issues have come up subsequently around mobile devices, which combine the same dual challenges of low bandwidth and impatient users. I tried repeatedly to find someone to write a book like High Performance Web Sites for mobile apps. But by this time the relationship between publishing and technology had shifted, in ways I described earlier, to devalue books. Talented programmers were less interested in writing, and showed less tolerance for the stresses of being an author.

As for conferences, I was lucky enough to have a second chance. Peer-to-peer would give me an even bigger space than Velocity to move the computer field ahead.

Peer-to-peer (early 2000s) and the emergence of user-generated content

The most exciting phase of my tenure at O’Reilly, and of my entire working life, was my time as a representative and exponent of the peer-to-peer community. This section tells the intricate story of that important movement and how I brought it within O’Reilly’s sphere of influence.

By 2000, I was enjoying the height of my participation in the free and open source movement, which offered both me and O’Reilly immense opportunities to make money, proclaim opinions, and embed ourselves in fast-moving community efforts. I was in this open mindset when I started hearing buzz about a strange new file transfer service, and decided to look more deeply into it.

Some background is called for here—an unusual potpourri of topics about the Internet, the music industry, copyright law, and my own obsessions.

Nearly everyone with an Internet connection (or so it seemed) showed interest in getting music over the Internet, and the near-zero cost of reproducing and transferring files made sharing an irresistible temptation. No streaming services existed at that time—just individual files containing songs in the ubiquitous MP3 audio format.

Little would hold people back from illegally sharing copies, except guilt over depriving artists of revenue. Even guilt-prodded listeners might know that recording companies absorbed the vast bulk of the revenue, a long-standing scandal in the popular music field. Who had sympathy for recording companies? And who felt a strong commitment to copyright?

The minor barrier that held back rampant sharing was a lack of knowledge about who had the MP3 file you wanted and how to reach them. These logistical details kept enough people away from file-sharing to mollify the copyright holders. They could still sell enough CDs to turn a profit.

In short, there was a delicate balance between technological capabilities and the old methods of doing business. Whenever this is true, advances in technology (as well as the resourcefulness of smart people starting new businesses) disrupt the balance. In the late 1990s, the disruption consisted of a service called Napster, which connected people who didn’t know each other so that anyone in the world could send MP3 files to anyone else.

I did not try Napster myself, because I worked for a publisher and maintained some respect for copyright—but to be honest, I also did not want any of the pop music people were exchanging. By all accounts, the interface was horrid. Still, using Napster was simple enough: Sign up, upload the list of MP3 files you had converted (ripped, in common parlance) from your CDs, and search for a keyword that would turn up files other people had offered. Note that Napster didn’t keep any files on its site—only lists of files offered by other people. This kept it lean. Its own bandwidth needs were moderate, and the people who were sending and receiving files did so directly, keeping their own bandwidth use within tolerable limits. Napster’s delegation of file transfers to individuals at the edges of the network heralded more radical architecture changes to come—we’ll see how that led to peer-to-peer.
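
The division of labor just described can be sketched schematically. What follows is my own toy reconstruction of the central-index idea, not Napster’s real protocol; the peer addresses and file names are invented. The server holds nothing but an index, and the bytes themselves would flow directly between peers.

```python
# A toy reconstruction of the central-index idea, not Napster's real protocol.
# The server stores only who has which file; the bytes would flow directly
# between peers.
class IndexServer:
    def __init__(self):
        self.index = {}   # filename -> set of peer addresses

    def register(self, peer, filenames):
        """A peer signs up and uploads the list of files it offers."""
        for name in filenames:
            self.index.setdefault(name, set()).add(peer)

    def search(self, keyword):
        """Return matching filenames and the peers that hold them."""
        return {name: peers for name, peers in self.index.items()
                if keyword.lower() in name.lower()}

server = IndexServer()
server.register("alice.example:6699", ["beethoven-symphony5.mp3"])
server.register("bob.example:6699", ["beethoven-symphony5.mp3", "take-five.mp3"])

# The server answers the search; the download itself would be a direct
# peer-to-peer connection to one of the listed addresses.
print(server.search("beethoven"))
```

That single design choice (central index, edge transfers) is both what kept Napster lean and, as we’ll see, what gave the courts one central point to shut down.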

Soon, Napster was connecting an estimated 80 million users. Now the music industry had to take notice.

Law professor Lawrence Lessig wrote a book in 1999 called simply Code, in which he discussed how software could hand unprecedented power to companies and governments. His observations are no mere philosophy; they became central to the culture battles I’m discussing here. Lessig’s book productively distinguished four approaches to controlling people: laws, norms, markets, and architecture. (Norms don’t seem to have the hold on people they used to.) The approach Lessig focused on was architecture, which on the Internet means code, and code can actually prevent people from even committing an infraction. Code is to law as a speed bump is to a speed limit. A sign posting a speed limit merely enjoins you to obey a law, whereas a speed bump damages your car if you disobey. Code controls us too, more than we ever think.

Lessig became a leader and even icon of computer activists like me (I reviewed drafts of the second edition of his book) and he participated in some critical historical movements among computer activists, such as founding Creative Commons. That organization took off from Richard Stallman’s brilliant free licensing idea for software—which Stallman called copyleft—and extended the concept to media. Lessig was an integral part of the free software and free culture movement before he decided to devote himself (tilting at windmills, in my opinion) to the distant goal of removing the corrupting influence of money from elections, and eventually even running briefly for U.S. president.

The music industry used several of Lessig’s approaches in a full-blown war on MP3 sharing. Many network administrators on college campuses used technical means to identify and then throttle Napster downloads, not so much because they cared about the revenue of the music studios but because the network traffic was burdening their routers. The music studios tried to work with telecom and Internet providers to do something similar, but the intrusive procedure was too creepy in the public’s eye to carry out. The music studios and some of the musicians also invoked norms, asking their fans to give up the practice of downloading music to which they were becoming addicted, in honor of the musicians who needed to make a living.

Ultimately, the recourse had to be a legal one. Clearly, copyright was being violated wholesale through Napster. But who was guilty? Napster wasn’t storing any music, wasn’t transferring any music, and wasn’t making money off of selling music. (I believe it made money from ads, the nearly universal business model of web sites since the mid-1990s.)

Individual Internet users were clearly guilty, and occasionally the copyright holders would come down hard on them. I don’t remember whether this practice started in the time of Napster or during the next phase of music sharing. The copyright holders would monitor publicly exposed information to identify particularly prolific music sharers, and would then haul them into court for serious monetary damages in order to make examples of them. This was termed a whack-a-mole strategy, and it was not a sufficient deterrent. Many of us in the Internet community tried to suggest to music studios that dragging their musicians’ biggest fans into court wasn’t great public relations. We also questioned whether the decline in music sales was caused by online file sharing or by trends in the music industry itself.

Finally, copyright holders nailed Napster on the obscure doctrine of “vicarious and contributory infringement”. It had been part of copyright law for some time and had many real-world applications. For instance, suppose you set up a flea market on your property and bring in a bunch of vendors who offer bootleg CDs or videos. The police will pick up the bootleg vendors for sure, but they will also prosecute you, because you knowingly let the vendors sell their wares. Incontrovertibly, Napster was doing the same thing over the Internet. And thus, in July 2000, a judge ordered Napster to shut down.

At that time, I was producing a monthly column for an online journal called Web Review, run by the adventurous and highly effective web expert Molly Holzschlag. While her site devoted itself mostly to practical guidance for web designers and programmers, she enjoyed publishing my column on policy and social issues related to the Web. I called my column Platform Independent, a play on the words used by the inventors of the Java language. They boasted that Java could run on all platforms without recompilation (and therefore was platform independent), whereas I commandeered the term to show that I would maintain my own opinions without regard to other people’s political platforms. In those voluptuous dot-com days, Holzschlag could afford to pay me $300 an article.

On the day the court ruling against Napster came down, Holzschlag wrote me to say that if I could produce a thoughtful comment on the case for Web Review within 24 hours, she would pay me a thousand dollars. I wrote a nice piece titled “The Napster Case: Shed the Baggage and Move On”, where I highlighted subtleties ignored by the mainstream media (who jumped on the issue like sharks), and expressed my hopes that the precedent set by the case would not hold back Internet innovation.

Indeed, the crushing of Napster was not the end of the drama. It perhaps ended the first of three acts. Peer-to-peer came on the scene at the next curtain rise.

In 2000, news reports started coming out about new file-sharing networks called Freenet and Gnutella. With different protocols, they both solved the legal problem that slew Napster: They had no central point that could be sued. Instead, they used complex techniques for distributing content from computer to computer and for letting people ultimately discover who had the content they wanted. For practical reasons, each user of the system saw other computers up to a limited horizon—the services did not try to reach the entire Internet for all users.
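
A small sketch may also help show why there was no central point to sue. This is a simplified illustration of Gnutella-style discovery (query flooding with a time-to-live), based on public descriptions of the protocol rather than either network’s actual code; Freenet’s routing was considerably more elaborate. The TTL is what produces the limited horizon just mentioned.

```python
# A simplified picture of Gnutella-style discovery: each node knows only its
# neighbors, and a query floods outward with a time-to-live (TTL), producing
# the limited horizon described above. Not the actual protocol.
class Node:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []

    def query(self, keyword, ttl, seen=None):
        """Flood a search to neighbors; return names of nodes holding a match."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []
        seen.add(self.name)
        hits = [self.name] if any(keyword in f for f in self.files) else []
        if ttl > 0:
            for neighbor in self.neighbors:
                hits += neighbor.query(keyword, ttl - 1, seen)
        return hits

a, b, c, d = Node("a", []), Node("b", []), Node("c", ["song.mp3"]), Node("d", ["song.mp3"])
a.neighbors, b.neighbors, c.neighbors = [b], [c], [d]
print(a.query("song", ttl=2))   # finds c but not d: the horizon in action
```

Every node is both client and server. A query hops from neighbor to neighbor until its TTL runs out, so no machine holds a global index and no single shutdown can stop the network.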

The creators of these clever protocols explicitly presented them as workarounds that exploited holes in the copyright doctrine to which Napster had succumbed. The technologies also had legitimate uses, because they were a good way to exchange material generated and owned by the sender. (A later implementation of the concept, BitTorrent, became a major legal channel for distributing free software.) So the people who coded and distributed the software were protected from lawsuits by the Betamax decision, a long-standing court precedent saying that the makers of a technology are safe from copyright suits so long as the technology has “substantial noninfringing uses”.

The release of Freenet and Gnutella came with a flair for publicity, if not an open provocation to the powers that be. So it came to the attention of the popular press, which had until then mostly ignored the Internet (and was soon to find how destructive it could be to its own business models). Once already, the press had been dragged into trashing the Internet. That trigger was the fight over pornography embedded in the Communications Decency Act, which sounds like something from the nineteenth century but was actually passed by Congress in 1996 and soon struck down as a violation of the First Amendment. The Napster case raised the media’s leeriness to a state of panic. And now suddenly—Freenet and Gnutella.

So the news media went berserk. They had breathed a sigh of relief when the courts shut down Napster, but they realized that technology left no haven for old ways of thinking. They presented Freenet and Gnutella as an existential threat to traditional media.

While everybody else was debating the rights and wrongs of file transfer and whether the new technologies were legal, I took off in a different direction. I was sure that any technology that could scare so many people must be technically powerful. I read whatever technical material was available about Freenet and Gnutella, and extracted enough from it to write a blog posting for the O’Reilly site in late 2000 with the title “Freenet and Gnutella Represent True Technological Innovation”. The title was almost clickbait, not only challenging the conventional narrative but flaunting my disregard for it.

Even though I expected some notoriety, I was surprised a week or so later when Allen Noren, who was running our blogs, told me that this posting had pulled in more views than anything else ever published on the O’Reilly web site. We agreed that more coverage of the Freenet and Gnutella phenomenon was called for. Others in the company noticed as well. Tim O’Reilly encouraged me to put together a book about new trends in decentralized networking, and corralled potential sources from his own Rolodex of august contacts.

Within a couple months, I assembled a top-drawer collection of technologists, along with a few social scientists to add that extra dimension, for a book on decentralized technologies. Freenet and Gnutella represented just one branch of an experiment in decentralization meant to shatter the restrictive structure of the Internet at the time, where powerful servers determined what their clients could do. This network architecture, client/server, had taken over during the previous five to ten years. Peer-to-peer was a direct challenge to it.

Decentralization went hand in hand with the openness represented by free software. However, a few companies were designing proprietary decentralized technologies too, claiming to apply this old-new model to numerous business problems, just as many years later companies adopted blockchain in an attempt to solve everything in a new way.

I knew our book challenging the conventional hierarchy on networks would jump into a hot debate by people up and down the ranks of corporations and government. I also knew our book would be the only one worth reading. Having taken up the project promptly and swarmed the computer industry with solicitations before others thought of the topic, I knew that everybody who had anything worth saying was either writing for us or too busy to write at all.

Although we tossed around many titles for our anthology, by the time we rushed the draft to production someone had popularized the term “peer-to-peer”. It was obvious that this must be our title. And I found the term gratifying, for the Internet was founded on systems communicating with each other on an equal basis. Establishing connections was called “peering” from the beginning of the Internet. By the mid-1990s, only a few very powerful computers in upper tiers were still peering. The rest established contractual relationships with bigger networks in higher tiers and smaller networks in lower tiers. Still, communicating as equals is a fundamental Internet concept. The peer-to-peer movement around 2000 applied this concept to voluntary networks created on an ad hoc, volatile basis on top of the conventional Internet protocol stack.

The new peer-to-peer movement challenged a lot more than models for distributing content. At its most ideal, it represented a new way of tying together people without centralized mediators. It was also a new approach to the bandwidth problems hotly discussed around that time: Peer-to-peer exploited increasing Internet bandwidth, while offering intriguing new ways to use it. (Essentially, researchers found, peer-to-peer networks reduced the load on individual servers while increasing overall traffic on the network.) Lots of considerations lay behind a blog posting I wrote in September 2000 titled, “Peer-to-peer starts to make the Internet interesting again.” I was suggesting that we were all tired of what client/server had to offer.

At bullet speed, without sacrificing quality, I got the 450-page Peer-to-Peer to print in time for a conference of the same name that O’Reilly hastily called in February 2001. The 19 chapters of the book fell nicely into three sections: context (the social and philosophical underpinnings), projects (Freenet and Gnutella along with others), and technical topics. I expected that reporters, government employees, and business managers would read the first one or two sections, whereas those with the stamina for the technical meat would go on to the third.

Because it was a multi-author anthology, my name appeared on the cover as its editor. This was standard practice in publishing: I had assembled and edited the whole collection, although the only piece with my byline was the preface. It was a fortuitous arrangement, though. Having my name on the cover of the book led to great opportunities.

We held the Peer-to-Peer conference at a hotel in downtown San Francisco, eschewing the traditional convention sites, and limited registration to about 400 people. At the standard convention center, attendees cycle between cavernous, characterless meeting rooms and even larger, characterless corridors, bleaching inspiration from the attendees’ minds and souls. In contrast, navigating through the elegant twists of the San Francisco hotel, one was constantly wondering whom or what one would come across next.

Backpack handed out as swag at the Peer-to-Peer conference, February 2001

The conference sold out right away, as this was still the time of the dot-com boom. We brought early copies of the book to the conference for sale to the attendees. (Remember that point for an intriguing tale to come.) The official release was in March.

The first organization to pick my name off the cover of Peer-to-Peer and call me up for an interview was The Connection, a radio show on Boston’s WBUR station. Under the dynamic host Chris Lydon, The Connection had achieved renown. Lydon was long gone (because, I suspect, he demanded a salary of a size commensurate with his fame and influence), but being called on the show was still a big honor. I did a good job, and ended by advising artists not to worry about what the Internet would do to their careers. In this I was probably a little Pollyannaish, but I stand by my basic message on the show: The Internet presents as many opportunities as it takes away, and it is up to us to find the places where we can live off of its value.

Lydon, by the way, went on to found an Internet show dedicated to ground-breaking trends. He called it Open Source, which I found gratifying even though the choice demonstrated a lack of understanding about what open source really is.

One amusing copyright incident accompanied my appearance on The Connection (copyright is never far from anything nowadays). For our book cover, our designers took the famous picture of Adam from Michelangelo’s Sistine Chapel, omitting the face and hand of God, and flipped Adam horizontally so that the cover showed two Adams reaching out to each other. I thought the design demeaned Michelangelo’s masterpiece and trivialized the book’s serious intent, but went along with it. The staff at WBUR liked the two-Adam picture and put it on their web site to advertise the segment where I spoke. Within hours, our designer Edie Freedman phoned WBUR and warned them to take it down. She explained that because a cover designer had altered the Michelangelo, the picture was not in the public domain but belonged to the designer and could not be displayed without the designer’s permission. However, she said, they could display the entire cover containing the picture, which in relation to my appearance would constitute fair use.

My radio appearance was only the start of the fame that rolled out from my listing on the cover of Peer-to-Peer. I was invited to lecture in settings ranging from Japan to the illustrious FOSDEM conference in Brussels. The peer-to-peer idea as an independent movement then faded, but it had a long-term, unanticipated influence on the next phase of Internet development that changed the world.

I keynoted a system administration conference in Tucson, Arizona, where I urged the attendees not to block peer-to-peer traffic. Then I got a great opportunity to fly to Brussels and deliver a talk on peer-to-peer at FOSDEM, the leading free software conference in Europe. Staff from O’Reilly’s office in Britain were there—people who had helped me innumerable times over many years, appreciated my work, and encouraged me to carry out bold suggestions—so I could at last put faces to names and thank them in person.

Richard Stallman keynoted at the conference, joking around with his well-known Saint IGNUcius halo. I was surprised by his presentation, a generic exposition and defense of free software. My impression was that all the attendees were avid members of the movement, being users of and perhaps contributors to free software. But Stallman told me, in his review of this memoir, that many FOSDEM attendees failed to understand the ethical and social significance of the “freedom” in free software. So he believed that conveying this in his keynote talk was of critical importance.

My own talk at FOSDEM was not well attended because it was placed opposite one of the most celebrated free software developers of that time (Miguel de Icaza, if I remember right), and in fact I was not particularly happy with my own delivery. I think now that I could have pursued my ideas further and added more practical ideas to the talk.

A note on de Icaza is of interest. He had amazed everyone by producing a free clone of the complex Microsoft C# environment. The goal of this project, called Mono (“monkey” in Spanish, but probably a name indicative of its unifying ideal), was to let people interact with the proprietary Microsoft .NET at every level, using nothing but free software. Microsoft at the time still harbored a visceral fear of and antagonism toward free software, but eventually they welcomed Mono and collaborated with the project. Unfortunately, de Icaza couldn’t get a sustainable business going through free software—discovering what hundreds of other failed companies had found out—and ultimately announced that his next project would be proprietary.

The biggest boondoggle to come my way was a trip to Japan, a direct invitation from the head of the O’Reilly Japan office, whose name was Arihiro Iwata. When speaking English he went by the name Alex.

For this trip I prepared not just one but two new speeches about peer-to-peer. First I was to keynote at a large conference (whose exact purpose I never understood) called Info-Tech2001, sponsored by the Kansai Institute of Information Systems in Osaka. Then I would visit the O’Reilly Japan office in Tokyo and give a talk before a select group of political science professors at Senshu University. I don’t know how Iwata wangled these invitations. I am also not sure that he asked for two separate speeches. Following my editorial principles, I may have decided that the two audiences had different needs and interests, and therefore that I should write two speeches. That may stand as my basic principle for both writing and editing: Know your readers and deliver what they need.

Somehow, in the three months leading up to my trip, I managed to write both speeches and study some Japanese. I learned the two syllabaries, Hiragana and Katakana, including how to write my own name, along with a bit of grammar and some useful words. Oh, and also carried out my editorial duties at work and raised my kids. The responsibilities did seem to slow down my blogging, though: My records show only three postings during that whole time.

After I mentioned to my relatives that I would be going to Japan to give speeches, I got an odd request from my sister-in-law Coletta Youngers, who was working on human rights for the Washington Office on Latin America. She was incensed that former Peruvian president Alberto Fujimori had fled charges of human rights violations by taking up residence in Japan, which showed no interest in looking into the charges or requiring him to go back and face them.

So Youngers asked me to work into my speech about peer-to-peer technology an appeal to my audience to extradite Fujimori. This would seem an odd fit, but I came up with: “individuals can do bad things on the Internet and then disappear, leaving behind the problems they’ve caused and popping up somewhere else. How will peer-to-peer systems deal with their Alberto Fujimoris?” Youngers thought this hilarious and apt. Sensitive to the mores of cultures I knew little about, I asked my sponsors in advance to check this passage and let me know whether including it would offend anybody. They approved it, and I delivered it with gusto in Osaka. I don’t take any credit, however, for the extradition order issued by Japan many years later.

Iwata treated me royally. He paid for first-class seats on flights to and from Japan, and took three days off both from his office and from his wife (whom I never had a chance to meet) to be with me every waking minute and guide me through the geographic and cultural labyrinths of his country.

I had learned a number of important points about Japan some twenty years earlier when working briefly for Hitachi. I bowed as gracefully as I could upon meeting people. When the professors in Tokyo presented me with their business cards, I accepted them with both hands and laid them out in a vertical row in front of me as I prepared to speak. I even referred to one of the professors by name to acknowledge an idea he had given me during our initial chat. (I hope I didn’t insult all the other attendees by leaving them unnamed.)

But I was not prepared for everything. During the dinner I attended at the Kansai Institute on the day before my speech, one of the organizers unexpectedly asked me to say a few words. I simply introduced myself and said I was honored to be invited. After I stepped back, someone else came up and offered a few minutes of talk in Japanese which I recognized was a follow-on to my speech. Iwata told me later that I was expected to say a lot more, and that proper protocols were rescued by this other man who filled in what I was supposed to have said.

Our schedule in Osaka left some time for sightseeing, so Iwata treated me to one of the travel highlights of my lifetime: three of the huge gardens of Kyoto. I had visited the Japanese gardens in Portland, Oregon, and in San Francisco, but they were like children’s models next to those of Kyoto.

We then took a bullet train to Tokyo. We couldn’t get seats together, but I had the tremendous luck to sit next to a doctor who had lived in the United States and therefore could chat comfortably with me in English. He actually invited me to visit, a lovely gesture (whether or not he really meant it), but I could never take him up on it.

The Senshu University talk also went well, so far as I could tell. I alternated speaking with Iwata’s son, who was a professional translator and translated my speech as I went along. I had to cut at least a third in order to fit within the time allotted.

Iwata took me to meet a technologist and successful entrepreneur who later became famous for good and bad reasons: Joichi Ito. We had dinner at a nice French restaurant, an outing I anticipated eagerly because I had had no one to talk to for days except Iwata, and I looked forward to speaking French with the waiter. But it turned out that the waiter at this French restaurant spoke no French. With Ito, we carried on an animated discussion of the state of the Internet (although the details escape me).

Ito played many important roles later in computing, including stints on the boards of Creative Commons and ICANN, before spending many years as director of MIT’s highly visible Media Lab. When MIT became embroiled in a scandal over donations from the disgraced financier Jeffrey Epstein, Ito resigned from the Media Lab.

In the O’Reilly Japan office, I was asked to sign Japanese translations of the book on Make that had been my very first work for O’Reilly, some 12 years earlier. They were impressed that I signed my name in Katakana as well as English. One staff person praised my handwriting, which I found dubious (because my English handwriting is an awkward scrawl), but I decided she must have been sincere because she could have chosen to praise me another way. Simply knowing my name in Katakana showed that I had taken considerable effort to honor their language.

Another staff person picked up a bunch of roses that had been sent for me and held it up to her face, asking me flirtatiously, “What looks better, me or the flowers?” Thinking fast, I answered, “You go well together.”

Iwata rounded out my Japan experience with a mass at a Catholic church for a professor of his. I believe Iwata himself was Catholic. The mass was conducted in Japanese and took a long time, and I’m embarrassed to say that I fell asleep—Iwata had to prod me awake.

The only other regret I have about these three days in Japan is that, while Iwata took me to many restaurants, we never had sushi. I mentioned this to him, and he explained that he didn’t like sushi. His overwhelming hospitality washed away any disappointment I had.

My keynote at the Kansai Institute was awarded an honorarium worth some $1,400. But in those days, converting the honorarium to a currency I could use was a convoluted affair. Iwata took me to a bank, which charged fees, and probably some government fee was extracted as well. At one point I was handed a wad of Japanese cash representing a huge sum, and I carried it about me nervously, feeling like a drug dealer. By the time I deposited my earnings as dollars in my U.S. bank, the charges and conversions had reduced it to about $200. I didn’t mind losing my payment because I had gotten so much other benefit from the trip, but I said, “Somebody’s making a lot of money off of the global economy.” I didn’t know then how true this was.

My trip ended with another interesting moment. I knew that my child Sonny was playing viola in a concert by their youth orchestra on the day of my return, and I had assumed I would miss it. But arriving at Boston’s Logan Airport in the early afternoon, I realized that I was just in time to attend the concert if I went straight to Boston University instead of going home first. I asked the taxi driver to change direction, and dragged my big luggage into the college a few minutes before the concert. People could hardly believe that I had just disembarked from a flight from Japan.

These stories sum up most of the ways I milked the opportunities around peer-to-peer. I believe I was a worthy emissary for the movement. Other people possessed a deeper knowledge of one technical topic or another, but I had the view from the balcony that came from coordinating so much of the communications around peer-to-peer. The role I got to play lasted from late 2000 through about 2005, when the problems of peer-to-peer communication loomed large enough to drive the concept back into obscurity.

Even as we were working on our book in late 2000, fundamental design problems with peer-to-peer were becoming evident, and were even aired frankly in the book. I realized that pure peer-to-peer couldn’t handle basic elements of addressing, coordination, and trust. I wrote an article for O’Reilly explaining those barriers in 2004. Even Gnutella (which long outlasted Freenet as a large-scale network) had to abandon the pure peer-to-peer architecture and promote some systems to superhubs in order to connect users at endpoints effectively.

The parallels between peer-to-peer and blockchain are interesting. Both excited the computer field because they promised to re-examine nearly everything that was taken for granted about coordinating people. The two trends were extremely broad, trying to cover everything people do. Unfortunately, they were developed with a deliberate disregard for the drawbacks introduced by their precedent-breaking designs.

An incident on Amazon.com which I found amusing—although I could just as easily have been infuriated—underscored the weaknesses of trust inherent in peer-to-peer, and in the Web as we know it today. The very first Amazon review of the Peer-to-Peer book gave it the lowest possible rating—just one star. The reviewer openly admitted they hadn’t read the book, but took a look at some online material and claimed it didn’t go into much depth. In other words, Amazon.com’s open reviewing system allowed a random, anonymous individual with no knowledge or competency to harm the reputation of my book. Some of the authors on the book asked whether we could get Amazon to take down the review (probably not), but I said, “Let’s leave it up. It’s a good example of the problems we’ve identified with peer-to-peer.”

My writings warning about the effect of malicious trolls on peer-to-peer systems apparently weren’t read by the managers of the popular social media sites that started up years later. Had they read them, perhaps they would have been prepared for the manipulations of elections and public opinion by governments and sleazy political organizations, which use the same tricks as that Amazon.com reviewer.

Peer-to-peer did offer me one last perk, quite an unusual one. A startup called GNUCo was trying to produce a commercial service to serve a huge number of different retailers. There was no way they could scale up to the extent they hoped to achieve if they had to keep up with every product change at every company. They figured that peer-to-peer technology could solve their problem. Wanting some confirmation, they offered me a trip to Atlanta to discuss their needs.

So I flew down and spent a couple days in meetings. Their platform was truly interesting and novel, and I made suggestions about what would be viable. I was impressed also by their impulse to give something back to the free software community from which they had adopted so much of their platform—the community that underlay the very name they chose for their company, honoring the GNU free software project.

I identified a neat dividing line between the bottom half of their platform, which might be useful to many people in many contexts, and the top half that was of interest mostly to them. What I suggested to them was a kind of open core strategy, in which they opened the bottom half as free software. I don’t think they ever did this, and of course, they disappeared from the face of the Earth shortly after my meetings (I did get all my expenses paid, luckily), but it was an interesting experience that demonstrates the power of simple memes such as peer-to-peer.

Not only did peer-to-peer peter out as a large-scale solution to the Internet’s problems, but O’Reilly’s initiative failed to produce a big payoff. Sales of my Peer-to-Peer book never took off, even though it was recognized in the industry as an indispensable text and was widely cited in both academic and business settings. As I had predicted during our writing phase, no book produced by other publishers on the subject attracted any attention. We planned a second Peer-to-Peer conference in 2002, but canceled it because the September 11, 2001 attacks by Al Qaeda were depressing travel. I think that an even greater reason was the replacement of the 1990s dot-com boom by a dot-com bust. But peer-to-peer had an ongoing impact in different ways: some on our company and some on the world. I’ll use the rest of this section to detail those impacts.

Tim O’Reilly and other managers learned from our peer-to-peer experience that we could be influencers. We had played a historic role years before in promoting the meme of open source, but now we played up our strengths here more pervasively and consciously.

Instead of another Peer-to-Peer conference, we held a series of Emerging Technology conferences. One year, one of the technologies we highlighted was WiFi—yes, it was once new and perplexing! I found these conferences stimulating and accomplished a lot of networking at them, because people still recognized me as a leader and wanted to present their projects to me.

The Emerging Technology conference lasted only a couple years, but it inspired creativity in how we followed new trends. The difficulty with publishing is that very early technologies don’t reward major investments—and a book is a major investment by the publisher. You have to release a book at the right stage of the famous Gartner hype cycle: Ideally you catch a technology well before it reaches its peak, but after it has gained more followers than the early adopters Tim O’Reilly referred to as “alpha geeks”. But an institution like ours, always striving to help innovation along, depends critically on hearing what the alpha geeks are doing.

In some sense, our attention to emerging technologies, which we made explicit when covering the peer-to-peer movement of the early 2000s, started much earlier. It was always at the top of our agenda to be hip to the first flares of new movements in computing. In addition to the brief period during which we ran Peer-to-Peer and Emerging Technology conferences, we used the web site to highlight interesting trends, particularly through the Radar blog. Mike Loukides started a low-budget online journal about biohacking, around the period we were covering health IT. And foremost among our touchpoints with the alpha geeks were our FOO camps, which Sara Winge started and which ventured far outside computing to health, science, education, and policy. But it was the Peer-to-Peer book that triggered a turning point in our consciousness.

In my observations, it’s fair to credit peer-to-peer with even more grandiose impacts.

Although peer-to-peer failed to build a free and egalitarian network, its meme entered the Internet at higher levels. Remember the lesson that peer-to-peer taught: Internet users do not have to be lonely, atomized individuals at the mercy of central servers. The users can become powerful when they come together. And they can offer their own ideas, not simply be recipients of messages from rich corporations.

In that context, what are the billions of contributions to Twitter, Facebook, and YouTube but the expression of the masses in peer-to-peer communications? During the 1990s, content was controlled by those who ran servers, downgrading almost all computer users to the role of mere consumers. With social media, computer users took the lead and met each other as peers. Creativity broke out of old bounds.

Naturally, all the problems of such connections came in spades as well. Two problems I had identified in my 2004 article applied more than ever: The problem of addressing allowed attackers from Russia or Eastern Europe to pose as British or American sources, and the problem of trust became even more obvious.

User-generated content was labeled Web 2.0 by Tim O’Reilly in the early 2000s. Web 2.0 went further up the networking stack than peer-to-peer, suggesting that individual users on their home or business computers would create value on the Internet. (I think Tim himself drew the connection between peer-to-peer and Web 2.0, although I’m not sure.) And Web 2.0 was a challenge to the standard paradigm too—a big one. Superficially, it suggested that the “web portal” concept popularized by AOL and Yahoo!, a model where large centralized corporations offered content to passive viewers, was obsolete or at least not as fertile as peer-to-peer. Web 2.0 also represented an implicit challenge to the prevailing telecom business model, where cable and telephone companies reserved nearly all their bandwidth for downloading content to their users, and left users only a sliver of bandwidth with which to make requests for content.

Web 2.0 could theoretically have been implemented through a peer-to-peer network, and there were attempts like Jabber (later standardized as XMPP) to do so. But thanks to the problems I had identified with peer-to-peer networks, Web 2.0 peer-to-peer contributions ended up being implemented by centralized client/server systems. Hence the primacy of YouTube, Facebook, Twitter, and so forth.

By the same reasoning, peer-to-peer informs Government 2.0, a movement I cover in a later chapter. The goal of Government 2.0 was to increase public engagement, a key part of which was accepting data and opinions from constituents.

Finally, peer-to-peer forced content producers to meet the challenges of the Internet head on, which they had avoided temporarily by eliminating Napster. iTunes for music and Netflix for movies are the studios’ responses to the demonstrated desire shown by their customers to receive content cheaply, instantly, and continuously. Jogged by peer-to-peer, the real Internet revolution in media had begun.

I’ve said that peer-to-peer challenged the hold that client/server technologies had on computing. Under client/server, an administrator (typically at the company where people worked) vetted and installed the software they thought people needed. Petitions for new services had to go through a bureaucracy. Technical barriers were used to suppress technical innovation.

In contrast, peer-to-peer offered end-users a way to serve themselves. As soon as they downloaded an app, they could run a service with anyone else who downloaded it. Naturally, security risks swarmed around this freedom. Still, that freedom was appealing to end-users who recognized the value of a service to which their network admins were indifferent.

I remember from this period a Dilbert cartoon where the lead character, the engineer Dilbert, brings home a large device with a screen. He tells his pet Dogbert, “I just bought a videoconferencing system. Now I have to wait for someone else in the world to buy one.” In the last panel, Dogbert observes, “It’s unsettling to realize that progress depends on people like you.” Peer-to-peer services embodied just that wild combination of experimentation and trust. But adoption was stymied by another technical barrier: restrictions on the ports used to exchange information.

Readers without much Internet background may balk at trying to get their heads around the technical information they need to understand this social phenomenon. But hold steady and read on. Once again, we have an excellent lesson in the impacts of obscure technical matters on society, and on the changes in attitudes that must take place to enable technological change.

Each computer system is guaranteed to receive the traffic sent to it because the computer has one or more unique Internet addresses. But the traffic can be an unruly mix of packets sent by different applications—email, web, network administrative tasks, and so on—all jostling each other as they arrive.

Therefore, Internet software assigns each application an arbitrary number known as a “port”, mimicking the physical ports into which technicians plug cables. Email is assigned port 25, the Web gets port 80, and so on. A special non-profit organization, the Internet Assigned Numbers Authority, assigns the most important numbers. Other applications can just pick some arbitrary high number and use it consistently.
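
To make the idea concrete, here is a minimal sketch in Python of what claiming a port looks like to a programmer; the port number 8080 is my arbitrary choice, picked the way early applications picked theirs. The operating system then delivers to this socket only the TCP traffic addressed to that port on this machine.

    import socket

    # Claim TCP port 8080 on every network interface of this machine.
    # Another application that tries to bind the same port will get an
    # "address already in use" error: a port belongs to one listener at a time.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 8080))
    server.listen()

    conn, addr = server.accept()   # block until some peer connects to port 8080
    data = conn.recv(1024)         # only traffic sent to this port arrives here
    conn.close()
    server.close()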

The concept of ports presumes clean divisions between applications. You would no more expect email to come over the web than you would expect a dog to give birth to a parrot. The sanctity of the port numbers is paramount.

The early peer-to-peer applications picked their own numbers, but quickly found that they couldn’t get through corporate firewalls—the software and hardware responsible for both aggregating and filtering traffic. Here again, administrators were policing computer use by restricting access to traffic on just a few ports. The administrators presented this as a security measure, but its main effect was to prevent users from running the applications they wanted. Such screening could also be problematic for the classic File Transfer Protocol, which used random high-numbered ports for data transfers.

Blocked by standard firewall rules, the peer-to-peer developers gazed yearningly at port 80. It was reserved for Web traffic, but the popularity of the Web ensured that this port had to be open on virtually every user’s computer. And so the peer-to-peer developers made a momentous decision: They would violate the tradition of providing a unique port number for their application, and send everything over the Web. The user’s web browser would receive and process the traffic.
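
As a minimal sketch of that workaround (in Python, with a hypothetical /search path and file list of my own invention), a peer-to-peer node can answer queries as though it were an ordinary web server; any firewall that admits web traffic admits it too.

    from http.server import HTTPServer, BaseHTTPRequestHandler

    SHARED_FILES = ["song.ogg", "paper.pdf"]   # stand-in for one node's content

    class PeerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Treat an ordinary HTTP GET as a peer-to-peer query:
            # GET /search?q=song returns matching file names as plain text.
            if self.path.startswith("/search?q="):
                keyword = self.path.split("=", 1)[1]
                hits = "\n".join(f for f in SHARED_FILES if keyword in f)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(hits.encode())
            else:
                self.send_error(404)

    # Real deployments used port 80; that requires administrator privileges,
    # so this demo listens on 8000 instead.
    HTTPServer(("", 8000), PeerHandler).serve_forever()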

Traditional network experts were appalled at what peer-to-peer developers were doing. The traditionalists called it “port 80 pollution”. (We encountered these hecklers in an earlier chapter, complaining about browsers that opened multiple parallel connections to download images.) But it led to a practice so universal that we all use it all the time—and yes, we receive our email over the Web. We also use nearly every service provided by every modern software company, and we call it web services. Port 80 pollution has become Software as a Service (SaaS).

Although web services are unlikely to be supplanted, Web 2.0 is dangerously at risk, thanks to the problems with identity and trust that rendered peer-to-peer mostly infeasible. The whole Web 2.0 movement depended on Section 230. This term, a rallying cry on the Internet, refers to the one section of the Communications Decency Act of 1996 that survived its Supreme Court challenge. The law protected web sites such as YouTube and Facebook, saying they couldn’t be found liable for content uploaded to them by individuals (so long as it was identified as the individuals’ content, not owned by the web site).

Ironically, one of the first cases to invoke Section 230 was ruled incorrectly. Matt Drudge had contracts with America Online to publish his right-wing political content, often with a prurient and sleazy twist. (Drudge did act legally when he broke the Monica Lewinsky scandal that led to Bill Clinton’s impeachment.) In 1998, Drudge posted false material about a political operative named Sidney Blumenthal, who sued both Drudge and America Online. The court exempted America Online from paying out a major settlement, which I consider an absurd interpretation of Section 230. America Online recruited Drudge and paid for his content, and therefore should have been slapped with responsibility for his lies, according to any responsible interpretation of the law.

But Section 230 was generally a good law, allowing Web 2.0 to take off and sites such as YouTube to make real contributions to education and culture. Of course, these sites became extremely exploitative of contributors, visitors, advertisers, and news sites in many ways, but one can’t blame the business model on Web 2.0 or Section 230. These sites also invest a lot of money taking down objectionable and illegal content, partly to meet the demands of Section 230 and partly to assuage public opinion. Unfortunately, they are currently in a losing battle with the trolls.

I still have hope for the Internet. But it was obvious by 2015 or so that user-uploaded content was creating real problems. Governments were already trying to weaken Section 230 (which was imitated by many other countries, although with some important differences). Sometimes the governments’ stated concerns about porn, terrorist content, and other bad actors were veiled attempts to curtail freedom of speech, but there was real meat to their complaints as well. And limitations on content, by both private sites and government regulation, are sure to grow. It’s all a response to Web 2.0.

Anticipation and realization: Linux (late 1990s)

What big movement carried along my work before peer-to-peer? Nothing else in my career (or most careers) quite corresponds to that heady combination of writing opportunities, speaking engagements, technical inquiry, and social significance. But another fulfilling time at O’Reilly was the period of the late 1990s, when we staked our reputation and our future on free software—open source, if you prefer. This movement, which gets a chapter of its own, shifted the whole computer field onto a new track, and along with it many processes in the larger society. O’Reilly drove the movement forward with a couple of high-selling book series. The chief series was on the Linux kernel, but MySQL was quite important too. I edited almost all the books in both series.

O’Reilly made history in 1998 through a summit whose attendees agreed to promote the term “open source” as a more easily understood moniker for free software. (I wasn’t at the meeting.) We performed a similar word twist on our Perl conference, renaming it the Open Source Convention. We had also put stakes in the free software terrain by publishing Eric S. Raymond’s book The Cathedral & the Bazaar in 1999. But it took a while for O’Reilly managers to convert abstract advocacy into business opportunity; to realize the bounty that free software offered us.

Why have I titled this section after Linux? Because during the late 1990s, people who heard of free and open source software associated it first and foremost with the Linux kernel and surrounding operating system—that’s how important Linux had become.

I had heard of Linux quite early, within a few months of its initial release by Linus Torvalds in 1991, while I was still employed at Hitachi. Still, when my friend Lar Kaufman explained it to me, I had scant interest in the new operating system. I told him, “There have been many Unix systems, some ported to personal computers. What’s special here?” Hitachi, where we both worked, had shimmied up a rotting branch of the Unix ecosystem called the Distributed Computing Environment (DCE). Unix still dominated servers and advanced computing, but was reaching the end of the road.

My first privilege to touch Linux came in early 1992 when Kaufman brought me to a friend’s house, where we gently inserted and removed, in order, the 51 or 52 floppy disks provided by Softlanding Linux System. (By the way, it would be a few more years before enthusiasts of the free GNU tools started to attach “GNU” before “Linux”.) SLS was not the first commercial distribution of Linux, but it was the first that people found robust enough to take seriously. SLS provided the basics for Linux, including graphics. You couldn’t complain. That market soon became quite crowded, attracting the attention of software distributor Bob Young and prompting his own Linux distribution through the new company Red Hat.

After I saw Linux overtaking the field and crowding out free alternatives such as FreeBSD, I became an evangelist for it at O’Reilly and drove a book series that became perhaps our most noted and respected offering in the late 1990s. Linux was becoming quite the rage, standing in for everything happening in free software.

My shift of focus, from the quasi-standards that failed at Hitachi to the overboiling creativity of the Linux community, took place through random chemical bonding instead of deliberate strategizing. My journey mirrored the upheaval in the computer field as a whole. Look at what happened at professional computing companies such as IBM and Hewlett Packard: Like Hitachi, they all lined up behind DCE in the early 1990s and poured resources into making it the center of their offerings, but by the late 1990s they were all vying to offer the best Linux support.

DCE was unviable from its very premises. It was an overly ambitious attempt to tie computers together from different manufacturers, using the software considered best of class from each manufacturer. The whole thing was coordinated by an understaffed and underfunded organization the manufacturers threw together and called the Open Software Foundation.

OSF could not fulfill its role as a standards organization. It was a marriage of convenience between computer vendors who harbored no fidelity to one another. All of its offerings came from member companies, loaded with bugs and thoughtless design decisions (known to programmers as “technical debt”) that reflected short-term advantages at the time of their development years before. Taking poor quality components, and then piling on the additional impossible challenge of getting them all to cohere harmoniously, was a travesty of a project. It wasted the efforts of thousands of highly paid developers—not to mention earnest tech writers such as my team at Hitachi.

A deeper analysis provides a shorthand to sum up the previous complaint: There was no community around OSF. There was only a gnarly tangle of idealistic aspirations dragged down by grubby corporate considerations. OSF was last seen in an announcement that they had merged with some other obscure quasi-standards organization, thus reducing by one the immense number of obscure quasi-standards organizations worth ignoring. Many years later, the initials must have been considered free for the taking, because the OpenStack Foundation started using them.

Linux, by contrast, formed a community early and built everything in a truly open fashion. Torvalds’s adoption of a free license, which he often said was the best decision he made in his career, was just the starting point. Thanks to the license, a true community could build up around the software, keeping everybody honest and allowing for constant injections of new energy. As one example of the community’s strength, Linux soon ran on more hardware and supported more devices than any other operating system in history. No less important was the zealous love pledged to Linux by people around the world, many of them unschooled in the technical details that made it work.

So I had matured into a firm backer of Linux by the time I came to O’Reilly. Even though Linux was confined to desktops and small servers, hardly noted by a computer industry consumed by the battle between Microsoft systems and Unix, O’Reilly entrusted me with developing a series. I took on this task methodically and with high-quality results.

Kaufman also kept up his interest in Linux and fed us valuable advice at O’Reilly, where I am fairly certain he took a job for a while before he went to law school. Kaufman even suggested the Wild West theme that our brand expert Edie Freedman adopted. Kaufman offered us a book with beautiful woodcuts of the nineteenth-century American West, which became the first pictures we used to illustrate our Linux books. In fact, our artist would seek out a unique picture to start each chapter of each book. After a few books, this became prohibitively time-consuming and we stuck to one image per book.

But in 1994 or 1995, Linux had not yet entered major data centers and become almost indispensable for large servers or virtual computing. It certainly didn’t turn up on cell phones and other mobile devices. Developers hadn’t noticed Linux’s value for offering graphics and robust networking on embedded systems. Our books sold rather poorly, so we gave up on Linux.

When O’Reilly managers told me to stop looking for Linux-related topics, I did not push back. I could have carved out time—as I was always doing to indulge my own intellectual interests—to stay in touch with the Linux community, but I didn’t. I think the reason that both I and my managers turned our backs on this phenomenon was our lack of understanding concerning the historic social change of which Linux was both impetus and beneficiary. It wasn’t just we who missed it—the whole world was slow to catch up. In particular, sociologists and economists on the whole had no clue what Linux represented, and they were just starting to nose around free software in the mid-1990s.

Now we have hundreds of scholarly, insightful explorations along the lines of Yochai Benkler’s The Wealth of Networks. So now, standing on the towers of books and research papers (including a couple by me) that have appeared on free software and open source, I can analyze retrospectively what happened.

I won’t bother here with an analysis of the factors in the mid-1990s—globalization, the technical and social state of the Internet, and technical optimizations—that drove Linux to world domination; such punditry has been exercised by better observers. The Linux community has never been perfect, of course. Its problems with negative interactions, coming down particularly hard on women, aroused widespread criticism. After free software communities came to a general understanding of the traits needed to maintain participation, the cultural shift overtook Torvalds himself and led him to step back humbly, for a time, from his leadership of Linux.

But in the meantime, Linux gave the free software movement both a proud platform for limitless innovation and a success story to shout from the rooftops worldwide. It inspired lawyers, economists, policy-makers, educators, and many others including (yes) publishers to re-examine their assumptions about ownership, value, and transparency, leading to a plethora of “open this” and “open that”. Linux also cemented my own commitment to free software, while offering me a path to a strong career as editor, writer, and advocate.

But the payoff didn’t come early enough to save my Linux work at O’Reilly. During the mid-1990s, managers tossed a diverse group of wan, uninspiring topics at me. I edited a book about virtual private networks (VPNs), a topic of continuing importance. VPNs are marketed to all network users from the home to the largest corporation. But VPNs at that time were mainly proprietary. Because of that, I think, the book didn’t sell. I edited some Microsoft-related books too, and those didn’t sell either.

Finally, later in the decade, we hired an experienced editor named Mark Stone, who possessed a strong understanding of technology and had watched the growth of Linux with excitement during the years when O’Reilly’s attention to it was in remission. Stone taught all of us a lot about what was going on in the Linux area. Over just a couple years, Linux had matured and taken on worldwide significance. He inspired me to pick up work again on Linux and other free software. We had no time to lose. As one of our editors, Paula Ferguson, said at the time, “O’Reilly is often either too early or too late to jump on a technology—or both, as in the case of Linux.”

The following six months were frantic. I lined up the original or new authors to update all the existing books in our Linux series, and solicited new books as well. The effort paid off handsomely, and sales were gratifying up to the early 2000s. They suddenly declined then, as so many of our series did, for reasons I have already cited, related to changes in the publishing industry and where programmers directed their attention.

But the shift signaled by our re-embrace of Linux went far deeper than a business decision to support a book series. It placed us back into the center of the free and open source software movements. We renewed our vows with these movements, bonds that had been formed long ago by our coverage of Unix and the X Window System.

Many of the leaders in the free and open source software movements collaborated with us closely and even engaged in strategy with us; I became their friend, ally, and supporter. I jumped eagerly into the role of public advocate for free software and all the social changes it brought in its wake: free culture, open government, and more. My advocacy and writing abilities brought me to the Googleplex in Silicon Valley, to Brazil, Brussels, and Amsterdam, and to all places online where free software was under discussion. I was now part of a global community that was simultaneously idealistic and pragmatic, a Commons where work and personal connections intersected, all fueled by the passion for a new way of doing things.

This was the last glow of computer books’ golden age, which lasted a couple decades. The era started with universal adoption of personal computers in the 1980s and intensified as Internet access in the 1990s made those computers supremely valuable, then trailed off as more and more information came online for free. In the 1990s, people just automatically bought a computer book in their quest for needed skills. From the late 1990s onward, and particularly after the dot-com bust of 2001, people turned first to online searches and had to hit a barrier there before reaching a rational conclusion to buy a book. Switching gears mentally from “find it for free” to “pay for it” was a big hurdle, even though most people could easily afford a book.

I always maintained a healthy skepticism of the dot-com boom. Those of us with knowledge but not cash held back from overexuberance. Those with cash and little knowledge invested billions and lost them. I remember how a customer saw the word Linux on a T-shirt I was wearing one day in a convenience store during the 1990s. The customer said, “Gee, I’ve been thinking of investing in some of those new computer companies.” I told him, “Don’t invest—instead, study the technologies, so you can become an expert user.” I hope he took my advice.

A ludicrous controversy: Smileys (1997)

Until the funk that descended on me in the mid-1990s, there was only one time I feared getting fired. It was over a subject so trivial I’m embarrassed to air it now: a book about smileys. Yes, those silly little graphics that indicate emotion or convey an ironic message that’s in tension with the overt statement in the text.

Smileys are now pixel graphics, supported by most tools on a fully graphic monitor. Although most people don’t realize it, smileys are carved into stone—or the closest thing to that nowadays, which is inclusion in the Unicode standard for characters and symbols.
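
A tiny Python session shows what that inclusion means: each smiley is a code point with an official Unicode name, as permanent as any letter of the alphabet.

    import unicodedata

    # Three emoji and their standardized Unicode names.
    for codepoint in (0x1F600, 0x1F642, 0x1F622):
        char = chr(codepoint)
        print(f"U+{codepoint:X}  {char}  {unicodedata.name(char)}")

    # Output:
    # U+1F600  😀  GRINNING FACE
    # U+1F642  🙂  SLIGHTLY SMILING FACE
    # U+1F622  😢  CRYING FACE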

Back before all these graphical representations, smileys were clever manipulations of plain ASCII characters, starting with the original :-) for a smile. Over time, clever people published hundreds of these text representations of everyday gestures. Dale Dougherty thought it would be fun to put out a book containing all the smileys we could find online.

As the book went through production, grumbling from the production staff spread through our Sherman Street office. Many of the smileys, they said, were offensive. They would not perhaps be considered obscene, but they contained demeaning stereotypes of women or minorities. Living in Cambridge, the staff were alert to these risks.

Dougherty would have none of it. He agreed that some smileys fell outside of good taste and might be offensive, but he seemed to think that it was our role to present the Internet authentically. Here I’m straining to be fair to Dougherty and to present an argument that seems reasonable, whether or not he would present it that way. One could argue that the full collection of smileys was required to convey the richness and diversity of the Internet, including aspects that might make enlightened readers uncomfortable.

The in-house controversy over this silly little project, a dumb distraction from the serious work of teaching professionals how to administer and program their systems, was rending the company. I finally decided to look at the draft myself. It convinced me that Dougherty was wrong and that we needed to follow the judgment of the protesting production staff. Sample smileys included “Hottentot” (with an exquisitely text-fashioned bone through the nose) and “Mexican run over by a train” (a sombrero atop two parallel lines representing tracks). I was afraid that O’Reilly’s reputation would tumble into the gutter by releasing a book with such disgusting stereotypes.

So I turned to email—not the first time I have played internal activist in a company over that medium—writing to Dougherty, Tim O’Reilly, and the other editors to insist that we heed the production staff and take out the offensive smileys. I was terrified of challenging Dougherty, cofounder of the company, directly. I was ready to be fired.

I waited. Discussion ensued. Mike Loukides, who has always held a leading role and been highly respected at the company, came out in our support. Finally, Tim himself called me at home to say that he agreed with me and the production staff, and that the sexist and racist smileys would be removed.

Tim’s executive decision was a welcome salve to a company-wide crisis. But it was also a moral choice in the titanic battle between two philosophies: one calling for a detached observation of life as it is, the other reflecting a vision of a better life. I think the results of this choice can be seen in many other positions taken by the company in the decades since.

☞ Intellectual prosperity: Writing and editing