Contents of this chapter
Philadelphia: cradle of American independence, cauldron of racial tension, site of an enthralling collection of Rodin statues, butt of W.C. Fields’s dyspeptic wit. How did Philadelphia become a strategic location for an artificial intelligence summit?
The time was October 2016, and several threads in my work came together here. I had spent most of the past couple years at O’Reilly covering the technical underpinnings of the topic we were to discuss at the summit: artificial intelligence (AI) and the management of the enormous data resources that AI calls for.
By the 2010s, the terms AI and machine learning described a layered approach to algorithms and data processing that conjured up computing miracles apparent to all: voice recognition, chatbots, image and face recognition, improved recommendation systems, and so on. Controversies sprang up on the heels of each advance. This section explains how knowledgeable computer activists took on those controversies.
Technical guides to AI, more than technical manuals in most disciplines, must address ethical issues, especially the risks of bias. Most jurisdictions outlaw bias against what are called “protected classes” of people. Excluding a protected class—women, or people of color, or gays—can be considered an error. But this error occupies a different level from the simple miscalculations that are so common in AI, such as using statistical models that are inappropriate for the shape of one’s data. Garden-variety errors of the latter kind make a model analyze future data poorly, so they are usually discovered and written off as disappointments by the organizations using them. But a biased model may analyze future data extremely well and be judged accurate according to the goals of the organization—but only because the model is based on past criteria, which might well be biased against one or more protected groups.
Another way to put this: managers and staff are blind to their own bias, and their AI models may reinforce this blindness. Biased AI may allow an organization to choose convicted prisoners or other people in a manner that is very satisfactory to management, because it’s just as biased as the people who previously made such decisions. (Of course, organizations hardly ever admit to basing decisions on an AI model; they call the model merely “input” to a decision that ultimately is under the control of a responsible person.)
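A toy simulation makes the trap concrete. Everything below is hypothetical—the data, the decision rule, and the “model”—but it shows how a model that merely imitates biased historical decisions can score as perfectly accurate against those decisions while treating the protected group very differently:

```python
import random

random.seed(0)

# Hypothetical applicants: a qualification score and a flag marking
# membership in a protected group (all invented for illustration).
applicants = [
    {"score": random.random(), "group": random.random() < 0.5}
    for _ in range(10_000)
]

def historical_decision(a):
    # The biased past practice: group members needed a far higher score.
    return a["score"] > (0.8 if a["group"] else 0.5)

def model(a):
    # A "model" that simply learned to reproduce the historical pattern.
    return a["score"] > (0.8 if a["group"] else 0.5)

labels = [historical_decision(a) for a in applicants]
preds = [model(a) for a in applicants]
accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)

def approval_rate(in_group):
    members = [a for a in applicants if a["group"] == in_group]
    return sum(model(a) for a in members) / len(members)

print(f"accuracy against historical labels: {accuracy:.0%}")  # 100%
print(f"approval rate for group members:    {approval_rate(True):.0%}")
print(f"approval rate for everyone else:    {approval_rate(False):.0%}")
```

Judged by management’s yardstick—agreement with past decisions—the model is flawless; the disparity shows up only when someone thinks to compare outcomes across groups.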
The definition of what’s biased and what isn’t, the sources of bias, ways of detecting and measuring bias, and policy around all these things can be engrossing. It was a set of questions that I was ready to dive into, because questions of fairness and protection of individual rights had run through my years of work with Computer Professionals for Social Responsibility, and after the demise of that group, with the technical policy group created by the Association for Computing Machinery. The ACM, pre-eminent among non-profit organizations in the computer field, had started with a narrow focus on technical expertise and how to convey it through computer science education. At some point they added a policy committee called USACM, later redubbed the US Technology Policy Committee (USTPC), with connections to similar groups outside the US.
When I started writing about AI, its risks had been a hot issue for some time. The muckraking web site ProPublica gave the issue a very public boost in 2016 in an article claiming racial bias in a system telling courts whether a convicted criminal is likely to reoffend (information that can influence sentencing). The details behind this proprietary algorithm—which the vendor refused to disclose, even in court—would make a fascinating story in themselves. In particular, one researcher discovered that a simple calculation you could do with pencil and paper, involving just two variables, proved more accurate on historical data than the highly vaunted proprietary algorithm. In any case, after this article, investigations of bias in algorithms took off in academia and filled the news.
Around 2015 or 2016, in reaction to this research, ACM created a special group within the policy committee to examine the issue and develop recommendations. Leaders of this group obtained funding to bring a dozen members of the group, plus some outside experts, to a rare face-to-face gathering. The group deemed Philadelphia a convenient location because it is just a couple hours from either New York City or Washington, D.C. Someone probably also had a connection to the University of Pennsylvania, which furnished the room and refreshments for the meeting.
Simson Garfinkel, I believe, came up with the idea of inviting me. He had emerged back in the 1990s as an authority in computing security and privacy, had moved between several important jobs in industry and government, and had met me while proposing and writing books for O’Reilly (although I didn’t end up editing any of them). When he asked me to join experts with far more experience than I in both algorithms and policy for the Philadelphia summit, I demurred at first, but he had seen me in action on other ACM projects and said something to the effect that I was good at moving people forward and getting a writing project done. I saved ACM (or was it my employer?) a bit of money by arranging to stay with cousins after the meeting.
The participants, outside of myself, were impressively credentialed. For instance, Cathy O’Neil, author of the popular book “Weapons of Math Destruction”, was invited, but had to bow out at the last minute due to illness. We had a stunning roster, even without her.
The immediate result of this highly educational meeting was a set of principles that were quickly approved by ACM leadership and publicized. I wrote up an account of the meeting and the principles for the O’Reilly web site. One of the meeting’s organizers, Jeanna Matthews, told me later that the principles received a lot of positive attention.
We expected to follow up with an in-depth research paper for the computer field’s flagship publication, Communications of the ACM. I took on coordination of the writing project. But I could never marshal the experts we needed to produce the planned list of publications, even though many people signed up, I created an introduction to “seed” the document, and I repeatedly tried to shake up the volunteers and entice them to contribute. A few members of the group produced a short overview for the Communications, but the larger piece never appeared.
I am not disappointed by our lack of follow-through, because research into AI bias was advancing at rocket speed and could not be encompassed by a single article, no matter how well researched. The range of relevant research was growing geometrically. As for the principles, they got lost in the welter of similar lists that came along from expert groups around the world, all essentially taking off from well-established guidelines on privacy, autonomy, and consumer rights to enunciate rules for transparency and accountability in AI models.
I’ve played a similar role—writing, prodding, strategizing—for several ACM projects, and even managed to slither into a leadership role for a year on the steering committee (probably because they were short of true experts).
Working with ACM has been the most recent of the volunteer work on which I have spent countless hours over the past several decades. During these hurried years, computers became embedded in every office, every home, and eventually every device. The world moved onto the Internet and bandwidth grew to undreamt-of speeds (although not uniformly in different places). Additionally, the struggle to get services to the public over the Internet graduated into a struggle to maintain some public control over these now-ubiquitous services.
I worked on essentially all the issues raised by this history. Although this chapter presents several of them in their own tidy sections, you can take a balloon up a few miles and see them swirling about in a maelstrom with accelerating speed. My work on each topic was like a great circle on a sphere: by the premises of non-Euclidean geometry, it is guaranteed to intersect with every other circle.
Some readers in more straitlaced work settings may wonder how I found time to do all the volunteer activities described in this chapter (and many others covered in the chapter on free software, or just not covered at all in this memoir). They may wonder how my employer, or the many associates who worked on books with me, reacted to my very public shenanigans in all these causes.
To tell the truth, I never devoted thought to these concerns. I have never separated my life into compartments. All the activities in this book constitute who I am. Nor was there a meaningful distinction between the people I worked with at “work” and people I worked with on other things. They often overlapped, such as on this project where I was recruited by Garfinkel, an O’Reilly author. More than once I would conjure up a book for O’Reilly out of the relationships and topics I indulged in during what some people conventionally call spare time. (An example of a book prompted by my activism is Van Lindberg’s “Intellectual Property and Open Source”, discussed elsewhere in this memoir.) O’Reilly was not only tolerant but supportive while I explored all these paths, and sometimes they funded my trips. That may or may not be considered normal for an employer.
Like a dozen other senior members of Computer Professionals for Social Responsibility, I got the dreaded phone call in the summer of 2012. (Or maybe it was an email message—but I feel that its severity deserved a more formal channel.) CPSR was dying.
The current board of directors couldn’t see a path forward financially past the next few months, and they were organizationally in total disarray. Before they threw in the towel, they gathered resources for one Hail Mary pass. (I’m no good at sports or at sports metaphors, but here I’ve thrown in two of them.) They decided to fly out some of their most dedicated members, the people who had given hours and weeks and years of passion and sweat to the organization, and set us to shaping a rescue plan.
Interestingly, I had never played a leadership role at CPSR. I had never led my local chapter or put in a stint on the national board. But I had contributed so consistently to CPSR that at one national meeting, a member told me, “Sometimes, Andy, I thought you were CPSR.” I had written position papers, had helped with the logistics on at least one conference, and had represented CPSR to policy-makers—as well as allies in other organizations—for two decades. So they invited me to this final strategy meeting, and I took a plane out to Palo Alto, California.
For a long period, CPSR had politically savvy leadership who put us in the spotlight and brought in new blood. The organization came together originally around the Strategic Defense Initiative, which was popularly known as Star Wars because the movie series was fairly young at that time. CPSR activists testified before Congress against putting weapons in space, gave interviews to the press, and were widely credited with stopping the program (temporarily).
I had joined CPSR in the 1980s, after seeing a powerful advertisement displaying an atom bomb mushroom cloud with the caption, “The ultimate error message.” I heard later that the radical chapter in Berkeley, California placed that ad in computer journals over the objection of the more cautious central leadership. Well, that ad snared me. And I put my heart and soul in the organization for some 30 years.
CPSR conferences drew participants from around the world, notably an annual event called Computers, Freedom, and Privacy. (One of the French attendees at that conference circled back later to invite me to a conference in France.) We spun out the Electronic Privacy Information Center (EPIC) under Marc Rotenberg, who ran it until recently and battled hard for our human dignity. Privacy has been a constant concern throughout the history of CPSR, and my immersions in the issue proved of critical value when I got involved in health care.
My heart ached for CPSR, because I detected its dysfunction and oncoming obsolescence as far back as 2001. I heard reports on two projects at the annual meeting that year that showed me we weren’t capable of turning our enormous expertise and passion into effective public interventions.
The first blow was when conference attendees discussed the recent Y2K effort. Thousands of programmers had to revisit old COBOL programs written decades before and convert dates from two-digit years to four-digit years. Dire predictions were aired throughout the press about what would happen on January 1, 2000, when programs would mistakenly interpret the two-digit year 00 as 1900 instead of 2000. Jail doors would open, dams would burst, basic services would fail. Many people, predicting the collapse of civilization, stockpiled guns and staples—I’m not joking.
The programmers came through. They fixed all the programs, and the calendar turned to the year 2000 with barely a ripple of computer problems. Some members of the public thought it had all been a tempest in a teapot, but the practitioners knew: the problem was real, and the problem was fixed.
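Many of those fixes did not actually widen the stored field to four digits; a cheaper and very common repair was “windowing,” interpreting two-digit years relative to a pivot. A minimal sketch—the pivot of 50 here is illustrative, not any particular standard:

```python
PIVOT = 50  # illustrative cutoff: 00-49 -> 2000s, 50-99 -> 1900s

def expand_year(yy: int) -> int:
    """Map a stored two-digit year to a full four-digit year."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy

assert expand_year(68) == 1968  # old records stay in the twentieth century
assert expand_year(0) == 2000   # the rollover year no longer reads as 1900
assert expand_year(5) == 2005
```

Windowing only postpones the ambiguity, of course—once real dates approach the pivot, the window has to move—which is why the more thorough conversions stored four digits outright.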
Interestingly, I read one analysis suggesting that the convention of two-digit years was economically valid, even given the huge sums of money that went into fixing the problem. Disk space was so expensive in the early years of computing that companies saved far more money than they had to spend later on Y2K.
Where was CPSR all this time? It turned out, in our discussion at the annual meeting, that CPSR had established a mailing list about Y2K quite early on.
So why didn’t we seize the publicity and hype around Y2K and put forward our expertise for the world to see? Why didn’t we become the big heroes? Why weren’t we flattered and feted as the authorities on this pressing topic?
We put that question to the mailing list member at the meeting, and he mumbled that they considered their job just to share notes with each other. I avoid indulging in stereotypes, so my apologies here—I saw his statement as a classic geek response. This is why scientists haven’t persuaded the public of the crisis of climate destruction. Geeks don’t (in general) do marketing.
Typical geek behavior also doomed to failure our other big opportunity to have an important public impact. The topic was elections by computer.
In the wake of the horrendous failures of the Florida electoral system in the 2000 election—hanging chads, poorly designed ballots, and more—many people were calling for computerized voting machines. One shudders when reading about the many vulnerabilities in these machines. I had followed, on and off, the discussion of computerized voting on a mailing list CPSR had devoted to it. We also held a panel on the problem back around 1995. We definitely were onto the issue, and were sitting on vast treasures of expertise.
But as electronic voting machines proliferated and the public debate reached fever pitch, CPSR was nowhere to be found. At the annual meeting we asked what held back the electronic voting group from joining and even directing the public debate.
Well, it turned out that there were two factions in the working group. One thought that computer voting was impossible to secure in theory (a position that I and most educated observers have adopted), whereas the other thought that current machines were wretched but that some theoretically robust system could be found. The two groups agreed on the urgency of opposing machines as they existed at the time, but because of this rift over theory, they could never agree on a public position statement.
So I realized that CPSR couldn’t be a force in the public arena. We had lots to offer but weren’t offering it. Furthermore, one activist pointed out that when CPSR’s experts spoke to the press or before Congress, they would identify themselves by a single institution, usually the university where they worked. The public never learned that CPSR colleagues had prepped them and perhaps set up the interviews.
Maybe I was already tired when the appeal came in 2012 to rescue the organization. But I took on the task with the discipline of a good soldier.
On the surface, the problems with CPSR were organizational. From one of our discussion documents in 2012 I quote this historical perspective: “CPSR has been most productive and successful when it had a knowledgeable executive director who worked well with the board. CPSR’s decline took place during times when it had no executive director, had one who fought with the board, or had one who did not understand the organization’s issues.”
That was just the starting point for understanding our problem. CPSR’s difficulty was its unusual reverence for bottom-up, grass-roots activity.
I’ve seen all these organizational tensions play out in other non-profits. The tension between professional direction and grass-roots energy has bedeviled numerous organizations that take on social policy, and the outcomes I’ve seen are not encouraging. Successful organizations congeal around strong professional leadership, and the grassroots efforts atrophy. Where the grassroots activists prevail, the organization meanders in a vacuum of control and eventually dies.
The loveliness of downtown Palo Alto under a bright autumn sky—we could debate and roam from our confined conference room to the sidewalk and back again—contrasted with the anxiety and gloom of our meeting. The old activists, who had supported each other through so many noble activities, bonded or rebonded afresh as we faced the sad state of the organization that formed a big part of our identities. We looked for a sliver of hope.
At the end of our appointed time, a couple days, we came up with a plan that we knew was unfeasible. We did it, I think, because after being brought to Palo Alto and housed and fed, we felt we should not just walk away with a shrug and an apology for producing nothing. Furthermore, we were engineers. Ask a bunch of engineers for a solution to a problem, and they’ll provide a solution, no matter how difficult to implement it may be.
The plan involved a massive effort to rethink CPSR as an activist-led organization, with a new mission matching the difficult environment in which we were operating. The plan magnificently reflected our deep collective knowledge of where CPSR had been successful and where it fell short; it was a masterpiece of reflection on the glories of CPSR.
We put forward to the board two possibilities: to launch one final effort for our heroic and probably quixotic plan, or to shut down in an orderly fashion. Of course, the board chose the latter. I probably would have too, had I been on the board instead of on the revitalization committee.
Doug Schuler took on the task of preserving the fine historical archives on the CPSR web site, in which he partly succeeded. The rest of the work of shutting down the organization was bureaucratic or clerical—and, of course, emotional. We all had to suffer private vigils for the organization we had helped bring to a quiet end. Much of this chapter will explain why CPSR meant so much to so many leaders in computing.
In the classic 1936 film Modern Times, Charlie Chaplin is walking along the street when he notices that a red flag used on construction projects has fallen off the back of a moving truck. Chaplin the tramp picks up the flag and shouts at the truck, waving the flag to get the driver’s attention. At that moment, a large workers’ demonstration comes around the corner to line up behind him, and police arrest him as the instigator.
I was often this instigator on issues of computing and network policy. I would start out as a mere observer, benefitting from my role in computing to shout out, like Chaplin, and try to compel society to recognize the importance of the issues. But my reportage would eventually turn into advocacy, and advocacy into action.
In fact, advocates for various issues have often come to me to advance their cause, granting me a leadership status I didn’t feel I had. Similar superpowers were bestowed on me by many people who asked me to argue their case within O’Reilly: I didn’t have the influence they assumed I had at the time—and perhaps never had such influence. Also, I was sometimes surprised when people I didn’t think had taken much notice of me suddenly called and pulled me onto projects.
One such person was Gary Chapman, a noted professor in the social sciences at the University of Texas in Austin. Our paths had crossed a few times at Computer Professionals for Social Responsibility in the 1990s, but I was just a volunteer whereas he was executive director. Although I didn’t think our acquaintanceship went deep, he remembered me in 2009 and asked me to join an organization he had heard about through his work. Austin was the unlikely headquarters of an organization called Patient Privacy Rights. Chapman, knowing that I had recently taken up policy issues in health care and computing, expected that I could help PPR get more publicity.
PPR was centered in Austin because it’s the home of the founder, Dr. Deborah Peel, who occupied, along with her congenial husband and a daughter, a stunning house enjoying a view across the whole city. Peel, a psychiatrist, became very concerned that large numbers of patients were withholding crucial information from their doctors, afraid it would leak out and ruin their lives. Her research uncovered an alarming laxness in data collection and sharing—areas of concern that were growing in both size and risk as clinicians moved to electronic storage and new apps were tapping into customer health information.
I asked all my friends in health IT whether I should get involved with Peel and PPR. They were a bit leery of Peel, whose single-minded passion obeyed few barriers. But my friends were troubled not so much by her outspokenness as by her oversimplifications. For instance, I think that PPR, like many privacy advocates, unfairly dismisses the efficacy of de-identification techniques to protect the anonymity of research data. Peel made other misstatements as well. Still, my friends felt that her voice should be heard and suggested I work with PPR.
One of the first things I felt PPR needed was more well-grounded technical advice. Peel’s psychiatric background ensured a strong empathy for patients and clinicians, but she was not conversant with the intricacies of electronic records and data sharing. I talked to a number of people in health IT whom I respected to find a technical advisor for PPR, and found success beyond my dreams with Dr. Adrian Gropper.
Gropper had a stellar background. He had earned an MD but went from medical school right into developing health care technology. When I met him, he had joined other patient activists in calling for patients to control data about themselves, a policy as naturally intuitive as it is hated by health care providers.
Patient data generates revenue for hospitals and clinics over and over again. Holding on to it inhibits patients from seeking out competing doctors. It can be exploited to market more services to patients (services they may not need, and that therefore inflate health care costs). In recent years, the hospitals have started to run analytics, including the AI models mentioned earlier in this chapter, to improve their own efficiency without helping other institutions do the same.
Going further than most activists to make this dream a reality, Gropper started a series of companies to make devices or software that patients could use to store and securely share data. Most if not all his software was open source—another passion he shared with me. His technical sophistication in both medicine and technology greatly enriched my work and that of PPR.
PPR was broadening and deepening at the time Gropper and I joined. Soon we were to plan our first conference, an effort that brought some 20 experts in health care, government consulting, and privacy to Austin, Texas for a planning meeting. The conferences themselves were always at the Georgetown University Law Center in downtown Washington, DC and drew more than a hundred attendees, many from overseas. Georgetown gave PPR a generous deal.
The point of holding conferences at Georgetown was to draw policy-makers with real influence and power. The venue was about four blocks from the U.S. Capitol, so policy-makers could have attended practically by falling off their chairs. However, I don’t believe a single representative, aide, or other staffer ever traveled the four blocks. Still, we attracted other political figures and felt we were gaining the ear of people we needed. As a sign of how seriously the field regarded PPR, we often got presentations from the National Coordinators for Health Information Technology and from managers reporting to them, a key department in the federal government determining rules for data use in health care. I moderated some sessions, spoke at others, and constantly blogged about the events.
The early years of my association with PPR coincided with interest at O’Reilly in health IT. Thus, my advocacy was intimately tied into my editorial tasks of finding topics, authors, and reviewers. I maintained strong ties to many people from this period, and kept writing articles on privacy and other health care policy issues even after my O’Reilly career ended.
In 1995 I was invited to a strange little convocation of leaders at the intersection of copyright and technology, organized by professor Paul Jones at the Chapel Hill campus of the University of North Carolina. I had interacted a bit with Jones around these issues during my activism in Computer Professionals for Social Responsibility and related groups. I also knew a couple other attendees—author and tech freedom activist Cory Doctorow, and Pamela Jones (no relation to Paul, I assume), the founder of the free software discussion site Groklaw. But I was open to meeting more folks in my areas of passion concerning policy. The most important person I met there was law professor Beth Noveck, who was just starting her ascent to international renown.
At first I chatted with Noveck politely, finding little in common. But gradually, I got to see the range of her work and the all-encompassing view she brought to her world. I did not know that she had earned a doctorate in political science before getting her law degree, but the sense of inquiry and respect for history and culture implied by that achievement came through. We had lunch at the end of the conference and I was completely on board with a project she was launching, called Peer to Patent.
As the name suggests, Peer to Patent fit into the trend toward crowdsourced online participation that surfaced in the early 2000s, notably in the peer-to-peer computing technologies that I covered in a book at O’Reilly, and later also in the great expansion of user-uploaded content known as Web 2.0. The goal was to improve the quality of patents granted by bringing in domain-level expertise from practitioners in the field where the patent was being granted. Up to then, the process of awarding patents was absurdly idiosyncratic, each patent examiner—although well-educated in both law and technology—acting alone with an estimated average of 20 hours to make a decision on each application. Not every patent examiner is an Einstein. They routinely miss prior art or fail to see that practitioners would regard the patented process as obvious.
Besides the peer-to-peer connection, I quickly settled into the Peer to Patent project because I had already added patents to my study of trademarks (triggered by the domain name policy disputes I will describe later) and copyright (a necessary topic of study for anyone in publishing—and equally relevant to people interested in free software or online culture). The inevitability of my being dragged into each of these disciplines suggests that the umbrella term “intellectual property”, although considered an abomination among the free software community, has relevance because people who study one of them find reasons to study the others.
Peer to Patent was also technically interesting, because it rested heavily on some experimental ways to help people evaluate patent applications (mostly by bringing prior art to light) while keeping discussion relevant and on track. The pilot program validated the success of these collaborative techniques. Noveck and others would build on them more and more to create open government projects during the next 15 years.
The computer industry was deeply divided over patents. Interestingly, software patents were a relatively recent addition to the canon. For many decades, software was considered a mental activity, neither a “process” nor a “machine” in patent law parlance. But in the 1970s, a crack in the defenses of software appeared with the granting of a patent, to be followed by a trickle and ultimately a torrent of patent applications. In the twenty-first century, more than half of all patents granted would be on software, with all the abuses and poor choices that one could guess would come along with such a chaotic, overwhelming flood of activity. As a responsible member of the computing community, therefore, I cared very much about how the patent system works.
While I tend to agree with free software advocates that patents should not be granted to software at all, I recognize the bind that both companies and the Patent Office are in. Even by the early 2000s, it was clear that more and more development that used to be built mechanically or electronically into new products was moving into software. This offers many benefits, such as over-the-Internet updates and accelerated testing. But of course, there are drawbacks too, such as absurd errors that no mechanical device could suffer from, and the risk of remote attack.
Notwithstanding all objections, innovation is increasingly software-based. If the patent system does not recognize that somehow, patents will shrink in relevance as software grows. And we’ll have to think about how to reward innovation without patents. Meanwhile, we have to make sure patents are granted on true advances, not cheap land grabs in various areas of product development.
Within two years of hooking up with Peer to Patent, I had ridden its momentum enough to break into two journals where I had never imagined I could earn the honor of writing: The Economist and Communications of the ACM.
The gears of the universe happened to click into place at just the right time for me and The Economist. I happened to have communicated with their tech correspondent. I wrote to him and got attention for a proposal that I’m sure would have gone right into the trash if I had pursued normal channels. He got permission from higher-ups for me to cover Peer to Patent for a special issue called their Technology Quarterly. It was for this article that I thought up the sentence, “Not every patent examiner is an Einstein,” but I deleted it under duress after the straitlaced reviewer at the Patent Office felt it to be insulting.
I lavished more obsessive detail on this article than any other piece I’ve done, at least among pieces that are more expository than creative. Not only did I contact a wide range of people at many companies and institutions, but I got a couple books on the subject (one recommended by Noveck). Furthermore, I read several weeks’ worth of The Economist to absorb their distinctive style, which includes a perplexing, ironic linguistic zigzag that the upper-class British like to call humor. I even analyzed the journal’s grammar and set the spell-checker on my GNU/Linux system to British usage so that I could spell words as they expected.
My article appeared without a byline, the normal Economist practice. What mattered to me was the thrill of introducing Peer to Patent into this influential outlet. The article came across in a big way, as a two-page spread. Noveck (who of course had reviewed my draft) saw the publication before I did and sent me an email message with the one-word subject WOW.
For the Communications of the ACM, the most prestigious journal in computing—at least among journals with a general scope—I also did significant research. I learned how to read a patent, a descent into a special hellish place all its own. Earlier, Noveck had asked me to write an article to recruit reviewers for a particularly flimsy patent application: one that vaguely tried to grab ownership of an area of user interface design and wait for someone else to do the hard work of actually solving the problem it tried to own. I analyzed the patent and explained its deceptive presentation in the article.
My article for the Economist explained the purpose and justification for Peer to Patent, on a policy and business level. In contrast—because I never write the same article twice—my article for Communications of the ACM was aimed at actually recruiting computer experts to volunteer for Peer to Patent. As part of my page-and-a-half piece, I explained my technique for reading a patent application.
Besides the extreme obfuscation and fragmentation found in most patent applications, they make reading difficult because the text refers to many figures, but the figures are not inserted in their proper places as in a good technical document. Instead, the figures are gathered in a separate place, which I guess made sense given the printing capabilities of the age when the patent system was invented. When reading the application online, I found that the trick is to open two browser windows, one for the text and one for the figures. Then a reader can see how they correspond.
Not only did Peer to Patent extend my writing to new venues, it gave me a chance to interact seriously with Wikipedia for the first time. It was clear that Peer to Patent needed a Wikipedia page, as an historic social and technical experiment. I decided one day to write up a page from scratch. I couldn’t offer much detail, not being a member of the team, but I offered valuable background under headings such as “Justification and purpose” and “Theoretical underpinnings”. After watching the new page go live, I wrote to Noveck saying I had a present for her. I remember her being both flustered and flattered by seeing the Wikipedia page. She told me her team had been talking for months about creating one, but no one could take the initiative. What can I say? Sometimes writing calls for a professional. I think the conventions of Wikipedia were simpler in those days, so I could get the editors to accept my page without much arcanery. Twenty years later, the page still showed the basic structure and some of the text I put there the very first day.
Eventually Peer to Patent wrapped up with gratifying success: the Patent Office incorporated it into their routine process. Noveck’s reputation soared with an appointment to a two-year White House position, the publication of a couple books, and the spread of her ideas worldwide. She embarked on a set of jaunts to places as far as Russia—yes, they showed interest in government transparency—and Hong Kong.
I was reminded a decade later of the craziness inherent in the patent system. A patent lawyer called me and asked for testimony related to a case he was handling. It all stemmed from my 2001 Peer-to-Peer book, which was oddly appropriate because the peer-to-peer movement, with my book at the center, had partly inspired Peer to Patent.
Here’s the story. A year or so after Peer-to-Peer was released, a patent troll tried to patent technology that had been documented in that book. At that time, other companies challenged the patent application, but the office said that the book had come out just a couple weeks too late to apply. The book’s release date was March 2001, and the troll had managed to submit the patent application just within the one-year deadline that allowed them to claim they had really invented the idea. This is how the patent system works.
Apparently, the patent was still relevant fifteen years later. A new patent lawyer was brought on the case, and he hammered on the issue with more sleuthing than the earlier lawyers. His dogged research turned up a blog posting I had written (once again, my blogging proved valuable both to me and to others) where I mentioned that we brought early copies of Peer-to-Peer to our February 2001 conference in San Francisco. This odd little detail, stated in passing, proved to clinch his case. Because the conference was open to the general public, offering 140 copies there constituted “publication”, and the couple extra weeks it subtracted from the official publication date put the book within the scope of “prior art” that could overturn the patent.
I sent him a copy of the book along with a signed affidavit affirming that the book had gone on sale in February 2001. I didn’t hear back about the court’s final decision, but the lawyer was very happy.
This little incident, besides illuminating some odd corners of patent law, shows the value of keeping old articles online indefinitely. Here, we may have righted a wrong and championed innovators just by preserving a fifteen-year-old web page. What has the computer field lost with all the pages that the O’Reilly web staff wiped out? (If you feel that I’m harping too often on the lamentable destruction of legacy, consider that anyone writing a 200-page memoir must consider the preservation of history important.)
In intensity, longevity, and perhaps social significance, the biggest tech issues I have dealt with surround telecom policy. Most people can think of no topic so boring. A few ears will prick up when I tie it to terms that have garnered a lot of discussion: “network neutrality” and “high bandwidth”. But really, I was a foot soldier in the battle to create the world we live in.
People everywhere pull out their phones before going somewhere or buying something, hardly even considering the web of fiber and cell towers that makes this interaction possible. In the 1990s, few could envision today’s communications world. And nobody could achieve these connections without ubiquitous Internet coverage.
Yet this marriage between mobile and the Internet—the subject of many tedious screeds in business and technology—is no utopia. Connection costs are high, competition is low, and availability is grossly uneven. The COVID-19 crisis has finally forced governments to admit that they’ve let their constituents down, creating digital divides geographically, racially, and economically.
The problem is that, as networking started to show promise in the 1990s, public policy-makers left all the decisions about deploying these networks up to private companies, which based them on cold business calculations. The lock on Internet service is relaxed by special subsidized accounts for low-income residents and other universal service measures, but a digital divide remains entrenched. (One FCC chair dismissed any responsibility for this problem with a joking complaint about suffering from a “Mercedes divide”.) Now these companies are angling to lock in the digital divide with 5G phone towers, which are technologically constrained to serve dense, affluent neighborhoods and leave poorer, more rural areas in the dust.
That’s the story of this section, which I hope you will no longer see as boring. The public is waking up to the social implications of telephone technology. What’s interesting is how long it took computer technologists to discover it.
As ludicrous as it now seems, computer technologists up until the 1990s thought little about networks. Computers then were largely stand-alone systems. Connecting the computers was an afterthought. Administrators did tussle with networking, but they busied themselves with the details: first stringing cable for local area networks, and then configuring the ever-expanding list of protocols for connecting at the software level. They were kept busy fussing over the software, because every protocol was inadequate and could be fixed only by introducing a new protocol on top of it, simultaneously introducing new needs for which the new protocol was inadequate.
The administrators contracted with long-distance telecom companies, but didn’t investigate much what those companies were doing—except for a few like Barton Bruce, an eccentric wizard who wandered in and out of the computer room at O’Reilly’s Sherman Street office. At that early stage of the company, we were sharing the office with a computer consulting firm. Bruce, whose relationship to the computer consulting firm I never discovered, held mysterious seances with the Internet in our large, glassed-in server room. Whenever he wandered from that sanctum to get a drink of water or perform other needed activities, he’d stop and chat about the monopolistic practices of Tier 1 providers or the causes of bottlenecks at the MAE-East exchange point. The cheerful upward tilt of his bushy mustache became a warning beacon. People would discreetly find a way around him and the ruminations he loved to let loose on anyone lacking an easy exit.
I awoke to the fundamental role of telecom in the mid-1990s when Congress began to discuss a major reform bill. The telecom industry had chugged along quite nicely since the 1982 consent decree broke up AT&T and subjected it to competition. MCI and Sprint jumped in to create new long-distance lines. At some point in time I cannot establish, each launched a line of luxury personal devices called mobile phones. Few companies tried to compete over the “last mile” of local phone service, because that would have required spending billions of dollars and getting thousands of local permits to string more wire in the neighborhoods they wanted to serve.
So there were reasons for legislators to rethink the telecom space, even aside from the elephant in the room that was rapidly evolving into a whale: Internet service. I won’t detail the many tiresome ways that telecom companies opposed the Internet and put stumbling blocks in its way, up until they earned enough money (mostly from traffic created by the Internet) to switch strategies and swallow the Internet. I’ve written exhaustively about that history over the years. But in the 1990s, it was critical to extend Internet service to everybody at a reasonable cost. A diverse set of issues drove the discussion that ultimately congealed into the 1996 Telecom Act.
As I delved into free software and Internet technologies, I found in telecom a whole new and bewildering area for study. The more I understood the issues driving telecom competition and its effects on freedom of the Internet, the more parallels I saw with free software.
Exploring the telecommunications infrastructure that lay underneath everything people were doing (literally underneath—in fact, laying cables in the ground was one of the hot issues) led me to see everything anew. I developed a fascination with those little finance charges with meaningless names that appeared on monthly bills. The most obvious sign of my dangerous descent into wonkdom was a prurient interest that I started taking in those long discursions of Barton Bruce that everyone had been avoiding. He gave me some background into telecom and Internet service as it was in the mid-1990s. But that background was soon to utterly change.
The best gathering place for people like me, on the cusp between telecom and computer networking (or as practitioners would say, the Bellheads versus the Netheads), was Computer Professionals for Social Responsibility. An incredibly dynamic and perceptive technologist named Coralee Whitcomb ran these efforts. Whitcomb had gotten into computers after entering a Boston-area business school, Bentley College, for her undergraduate degree. Her advisor told her that she had a choice between two areas: accounting and computer science. The latter sounded more interesting, so she leapt into what turned out to be a maelstrom.
In addition to staying at Bentley—which grew from a dinky business school to a much-sought destination for learning—to teach computer science, Whitcomb tirelessly launched one social project after another. As expressed by Steve Miller, another activist with whom I worked closely in CPSR, “Coralee goes and does the things the rest of us sit around and talk about doing.” She opened a computer training center for the underprivileged, Virtually Wired, in a downtown Boston storefront. She ran a cable news program about Internet issues, where I once spoke about the Communications Decency Act (a toxic insertion forced by right-wing legislators into the Telecom Act). She organized conferences.
And Whitcomb directed CPSR’s ground-breaking telecom work. She shuttled between Boston and Washington, getting to know everybody important on telecom issues as well as the technical and legal complexities we’d have to contend with. I think CPSR had some positive effects on the final bill, certainly in the parts about universal service, and perhaps also regarding competition.
The Telecom Act recognized the value of having more competition to increase innovation and lower prices. For instance, to help new companies enter markets dominated by the old monopoly telephone companies, the law enabled the FCC to determine 14 “interconnection points” where the incumbent telephone companies had to allow exchanges with their competitors. It turned out that only one or two of these interconnection points were useful, and the incumbent companies continued to find ways to slow down interconnection.
Of the many schemes used by the incumbents to avoid competition, I’ll mention a relatively simple one: hoarding phone numbers. If you wanted to switch companies, you had to give up the phone number you had given out over the years to friends, family, and business associates. Thus, “number portability” became a rallying cry for new companies. Eventually, the government made it possible for a person to keep their phone number during a switch.
How could I let languish such a fertile and pressing concern as telecom policy? The very core of the Internet—the goal of allowing access without hindrance—was at stake. Dozens of articles I wrote over the next twenty years delved into the myriad issues around telecom, winding their way among the issues of competition, cost, universal service, and Internet freedom.
In those days we thought we could achieve all good things in the Internet space by ensuring competition. But we didn’t reckon with the sheer scale required by telecom. The large companies just got bigger and bigger, then merged, and merged again, to get enormously bigger.
Some people in my activist circle tried to adapt to the new situation by calling on the government to provide wires and Internet service as a public good. I think this can be useful in particular situations (I’m sympathetic to municipal networks), but I tend to like the old-fashioned and frequently undermined free market for technological development. I’ve seen too many government policy decisions that end up favoring powerful companies, skewing technological development, or just trying to gain control over everything.
Other activists, frustrated by the ongoing takeover of Internet service by large corporations, call for heavy-handed control to keep them honest. The term “network neutrality” running through these proposals didn’t exist back in the 1990s and early 2000s when we had a shot at really creating a healthy telecom industry. Although I could support some limited oversight, I think that technology is hard enough to get right without a company having to twist its decisions (such as traffic shaping, which is crucial to providing good performance) to meet some tangled regulation.
The days of the mom-and-pop Internet provider, friends to everyone they served, were mostly doomed. A few may still survive in isolated areas that the big providers are too aloof to serve. Many local providers put in heroic efforts to keep Internet service going, just as the stalwart George Bailey, played by Jimmy Stewart, provided banking services to the people of his town in the movie “It’s a Wonderful Life”.
One of the people who always inspired me was Brett Glass, who offered wireless Internet to the population of Laramie, Wyoming. He was never shy about leaping on a roof to install an antenna. Glass expressed contrary but carefully argued positions about a number of things, including network neutrality. He also strode boisterously into matters of free software. Glass was a big user of BSD, a clone of Unix with great historical weight that lost its momentum to its competitor Linux. His reddish beard was a warning and portent to any conference where it appeared. Many people resented his aggressive presence and criticized him for speaking his mind, but I listened carefully to his reasoning and always learned something from him.
I wonder how Coralee Whitcomb would have treated recent developments in telecom and the Internet. She did not last long enough to see current events, coming to a tragic end because she was a year late getting a necessary breast exam. After a bunch of us from CPSR came to her farewell at Bentley College, she died of cancer in 2011. Scandalously, she appears nowhere in Wikipedia, evidence of the diminished regard given to women there. She deserves a page of her own. In fact, I tried to start one—an effort backed enthusiastically by our CPSR friends and colleagues—but it proved an impossible task, because the relevant source materials that would document her many contributions to life in computing today had been recorded either in ephemeral paper form or on long-defunct computers, and weren’t retrievable.
From February 1997 through April 1999, I wrote 300 words weekly for the American Reporter. I served as the Internet correspondent for this scrappy little online newspaper, and flexed my muscles as both journalist and advocate in all the policy areas that interested me. The pace was bracing, but I was always turning new corners and finding treasures beyond them. Internet journalism was an adventure game.
The American Reporter was an early experiment in online news, more disorganized than organized by its founder, a veteran of journalism named Joe Shea. The 300-word maximum was an arbitrary discipline imposed by Shea, perhaps because he measured all his writers’ contributions by word count and promised us equity in the operation if the American Reporter ever turned a profit. Shea offered stories for sale to other media, a business plan that might have made sense in the 1990s. But I don’t think any of his contributors ever expected to make a penny. We wrote either out of our personal regard for Shea, an oddball in his own manner, or because it provided us with a forum for our unredacted views—certainly the main draw for me.
What did I find to write about each week? This was never difficult. The world was changing in every way, pressed onward by activities on the Internet, and the mainstream media was oblivious to it. I needed to tell the story, because few other outlets were doing the job. And I had feelers out on every topic: Internet speeds, pricing, threats to free speech and privacy, online organizing—you name it. My connections crossed borders and oceans; I could just as easily report on events in France, Germany, or Peru as in the U.S. I could also read a few languages well enough to check primary sources. My main task was to decide which of many controversies to highlight each week.
I never posed as a mere reporter—every piece I wrote had a definite point of view. But I insisted on precision. One security expert told me that he granted me an interview because he was angry at the misinformation that the mainstream press was promulgating about a particular issue, and saw that I was one of the few authors who took the time to get it right.
On top of holding me to tight deadlines, Shea also needed someone to administer the web site, which was hand-coded in an obsolete version of Perl. Knowing that language, I agreed to take this on, only to discover that the person putting the site together was both the worst designer and worst programmer the world had ever seen. The design was so rigid that new browsers would break the layout, and I spent a good deal of time accounting for the oddities of different browsers. The designer had made a weird use of frames to hold graphics and text.
After hours of dogged examination, I figured out that the bizarre code processing Shea’s articles (they had to be converted from plain text to bad HTML) hid an arcane state machine. Any variation in input, such as a missing space or line break, would cause the entire run for the day to fail. I fixed all the problems but didn’t dare try to make any improvements because of the thoroughgoing fragility of both web design and state machine. I did help Shea incorporate advertisements, counters, and new menu items. He gratefully dedicated a menu item to my writing—the only contributor to get their own place in the drop-down menu.
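To give a feel for what such a brittle converter looks like, here is a minimal sketch in Python (purely my illustration—the actual site was hand-coded in Perl, and all names here are hypothetical). It models a converter that tracks a single state and, like the one described above, fails on the slightest deviation in input, such as a missing blank line:

```python
def text_to_html(lines):
    """Convert plain-text article lines to crude HTML.

    A toy state machine with three states: expecting the headline,
    expecting a blank line after it, and reading body paragraphs.
    Any unexpected input aborts the whole run, mimicking the
    fragility described in the text.
    """
    out = []
    state = "start"
    paragraph = []

    for line in lines:
        stripped = line.strip()
        if state == "start":
            if not stripped:
                raise ValueError("article must begin with a headline")
            out.append("<h1>" + stripped + "</h1>")
            state = "after_headline"
        elif state == "after_headline":
            # A missing blank line here kills the entire run,
            # just as a missing space or line break did back then.
            if stripped:
                raise ValueError("headline must be followed by a blank line")
            state = "body"
        else:  # state == "body"
            if stripped:
                paragraph.append(stripped)
            elif paragraph:
                out.append("<p>" + " ".join(paragraph) + "</p>")
                paragraph = []
    if paragraph:
        out.append("<p>" + " ".join(paragraph) + "</p>")
    return "\n".join(out)
```

Well-formed input sails through, but one misplaced line raises an exception and nothing at all gets published that day—which is why I fixed problems without daring to restructure anything.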
Despite the web site’s drain on my time, American Reporter was fun and fulfilling. My publications there led to other opportunities in both Internet activism and journalism. I stopped only because I felt the need for my commentary on Internet policy had receded.
When I began writing for the American Reporter, mainstream coverage of the Internet was characterized by TIME Magazine’s notorious 1995 cover displaying the word CYBERPORN and a child whose face, bathed in the light of a computer screen, breathed a mix of fascination and horror. News “reportage” like this fed right into the Communications Decency Act of 1996, a naked bid at censorship which the ACLU and many other organizations had to combat fiercely. As an antidote to conventional approaches, I saw it as crucial to offer my thoughtful research and analysis about online speech, telecom regulation, universal access, copyright law, and other serious issues affecting the Internet.
But a few years later, the media started to do their job. The New York Times (particularly in the journalism of John Markoff), the Wall Street Journal, and other major publications discovered the Internet and were lavishing on it highly professional research I couldn’t match.
Although I would continue to administer the crotchety American Reporter web site until Shea went into the hospital for his final decline, and would write occasional articles I thought his readers would enjoy, I switched to a less demanding but more visible position through my monthly Platform Independent column in Molly Holzschlag’s Web Review. That lasted until the dot-com bust put the kibosh on Web Review at the end of 2001. For Holzschlag I wrote some of the articles with the most lasting power, including some studies of peer-to-peer and a parody of Dickens’s A Christmas Carol.
“The Ghosts of Internet Time” went up on Web Review just before Christmas in 1999. Here I drew on themes from Dickens’s famous story to provide a view of Internet history and issue a highly idealistic call for action to preserve its best aspects, all in 1100 words. More than 20 years later, I still see pertinent insights in the words spoken by the Ghosts of Internet Past, Present, and Future.
For some reason, “The Ghosts of Internet Time” captured the love and imagination of people around the world. A few years after its original publication—when Web Review had already folded, I think, and I was hosting the story on my personal praxagora.com web site—someone wrote to say they wanted to translate the story into another language. This opened the door to a flood of translations, all by volunteers. About 20 are still extant as I write this. Some of the volunteers were acting out of pure altruism, it seems, while others wanted the content on their web site to promote their commercial activities. I welcomed them all, and trusted that the translations into languages I didn’t know were faithful to the original.
“The Ghosts of Internet Time” was representative of a stream of articles where I pondered the future effects of technological change. One of my early blog postings, from 2000, is titled “Dialog with an Internet Toaster”, showing that I was already thinking about the Internet of Things and the trends that O’Reilly would cover in its Solid conferences fifteen years later. I have used stories and skits many times as concrete analogies for my ideas about computing and the Internet. Another little parody I put up in 2000 suggested the theme of patients sharing data to help each other find a cure, anticipating a call for patient control over their medical data. Later short stories predicted the breakdown of journalism (“Validators”, 2007), the triumph of cloud computing (“Hardware Guy”, 2010), and universal social tracking and rating (“Demoting Halder”, 2012).
In the mid-1990s, Internet researchers were groping toward real-time, interactive communications—what we now take for granted in services such as webinars and Skype, not to mention the virtual conference tools such as Zoom that became necessities during the COVID-19 pandemic. Just as search engines went from a nice-to-have service to a part of core Internet infrastructure in the late 1990s, virtual conferencing made the same transition over just a couple weeks in March 2020, as countries everywhere imposed physical distancing on their residents.
Videoconferencing stems from the courageous and visionary innovators—yes, those strong words apply—who implemented interactive phone calls and video sessions on the Internet in the 1990s. I don’t believe this part of Internet history is told widely enough.
The obvious bottleneck to Internet phone service is low bandwidth. Also implicated is the brilliant core insight of the original Internet as created in the 1960s: packet-switching. The whole idea of the Internet rested on its radical process of breaking communications into small chunks and sending them over potentially many different paths to be reassembled at the end. This worked exquisitely for communications with no dependencies on time. When people tried to use the Internet for real-time communications, they suffered from jitter and lost packets.
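A standard engineering answer to jitter is the playout buffer: hold each packet briefly and play it at a fixed offset from its send time, trading a little extra latency for smooth audio. Here is a toy Python model of the idea (my illustration of the general technique, not any particular protocol’s implementation; times are in arbitrary units):

```python
def playout(arrivals, playout_delay):
    """Toy model of a jitter (playout) buffer.

    arrivals: list of (seq, arrival_time) pairs for packets sent at
              times 0, 1, 2, ... in sequence order.
    playout_delay: fixed delay added to each packet's send time
              before playback, absorbing variation in network delay.

    Returns (played, dropped): packets played on schedule, and
    packets that arrived too late for their playout slot.
    """
    played, dropped = [], []
    for seq, arrival in arrivals:
        deadline = seq + playout_delay  # packet seq was sent at time seq
        if arrival <= deadline:
            played.append(seq)
        else:
            dropped.append(seq)
    return played, dropped
```

With a short playout delay, a packet delayed by congestion misses its slot and is lost to the listener; lengthen the buffer and it arrives in time, at the cost of a more sluggish conversation. That trade-off is exactly what the clever engineering of the era had to tune.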
Overcoming these problems required clever engineering. The emergence of the Internet as an interactive medium drew together the efforts of many inspired geniuses. I learned about the subtle interactions among these technical explorations from one of the people who shaped popular computing, Bob Frankston. He helped to invent VisiCalc, which in the infant years of the PC era showed the public what computers could do for average folks. He also introduced Microsoft to home networking, then got into telecom policy like the rest of us in the 1990s. Frankston helped me understand that low bandwidth could be overcome through improved compression and other aspects of sophisticated communication protocols. He was never a cheerleader for higher bandwidth as a solution to technical or social problems. He wanted network engineers to make smart use of the bandwidth they had.
Because Frankston lived in the Boston area, I had the privilege of meeting with him several times, sometimes for Boston-centered conferences and other activities, and sometimes at the O’Reilly summits known as FOO camps. Frankston came to rely on me as a sounding board for his ideas about telecom reform. With some trepidation, since his views on the topic are multi-layered, I would summarize his approach as follows: he wanted to move Internet access from a business proposition to a core part of society, like streets and sidewalks (a metaphor he regularly used). I assume, but can’t guarantee, that this meant a call for declaring Internet access a government-backed utility, as many other Internet activists have done.
Frankston regularly sent me articles for review, sensing apparently that I could help him reach a wider public. His mind tends to unique associations, a trait I could appreciate because I think I have a highly unconventional organization to my own cortex. I usually read his drafts elliptically, rotating them in my mind and pinpointing one or two key points, then asking him to hang everything off those points.
Providing telephone calls over the Internet became a kind of messianic striving in the 1990s. One of its greatest champions was an entrepreneur named Jeff Pulver, who consulted with businesses hoping to exploit those voice services. He created a series of conferences called Voice on the Net. As bandwidth improved, along with protocols, Pulver renamed his organization Video on the Net. Everyone who wanted these efforts to succeed would gather for human networking and business deal-making at these conferences. That legacy directly informs the services we enjoy in the twenty-first century. Another enduring feature of these conferences is the Video on the Net briefcases, one of which I was still using 25 years later to lug my laptop on trips.
One group of researchers spearheading the interactive Internet felt that their innovations were so radical that they deserved the name “Internet 2”. I wrote about their research in blog postings, and once they asked me to deliver a talk over their video connections. I took this on with great enthusiasm and went down to a studio in Boston where they transmitted my talk. (The exact topic escapes me.) I exhibited great energy during the talk, accompanying my words with grand movements of my upper body, which I thought would convey energy. Viewing the video later, I realized that all I had accomplished with these movements was to introduce distracting and unseemly jitter. I needed to learn a lot to use the Internet medium effectively.
The vision of Internet phone and video calls was politically toxic in the 1990s and early 2000s. Telephone companies, newly spun off from the AT&T breakup, derived huge revenues from charging 50 cents or a dollar for long-distance calls, and truly punitive rates for international calls. They regarded the prospect of people making free calls as an existential threat, and fought it by persuading regulators to ban or place insurmountable barriers in the way of the new, small Internet carriers.
Just to give an idea how this battle was fought, one of the big issues revolved around emergency services. It is certainly a great benefit for emergency responders to know the location of a call when someone calls an emergency number such as 911. Landlines provide the information on the spot. But computers connected to the Internet make it harder. So what happens if someone uses the Internet to call 911? Telecom companies spun enormous controversies over this possibility, trying to rule out Internet phone calls because they didn’t offer the same guarantees as land lines. This seems particularly ironic years later as the same telecom companies degrade and eliminate landlines in areas where their revenues are dropping.
And of course, the telecom companies were making money hand over fist from the Internet all this time. When telephone lines were slow, many households invested in a second line to support their Internet logins. In general, explosive increases in Internet use during the 1990s and 2000s led to explosive increases in the use of telecom lines, which in turn drove lusty telecom profits. Internet activists tried to point this out in our parries to the assaults that telecom companies lobbed at Internet use. Eventually, the Internet’s offerings proved so enticing that the telecom companies had to embrace it—and ultimately try to take it over. They had also learned by then that they could radically reduce their costs by digitizing their own lines. It used to be that computers hooked up modems to send digital information over phone lines meant to carry voice traffic. Eventually the technology flipped completely: people’s telephone conversations were digitized to go over lines using the Internet’s basic concept of digital packets.
Voice and video on the net were not inevitable developments of the telephone business. They are the legacy granted us by determined experts and advocates like Frankston and Pulver. Everyone who hops on a video session for a class or a work meeting should honor these pioneers.
I joined CPSR as a novice in Internet policy, jumping into vastly complex areas such as telecom policy with the aim to learn. But I gained most of my confidence as an interpreter and explainer of Internet policy after I volunteered to help moderate a CPSR mailing list called cyber-rights.
We invited the public to join us in advocating for a set of four rights, three of which were fairly unarguable: the right to assemble in online communities, the right to speak freely, and the right to privacy online. But the fourth, the right to access, turned out to be controversial.
One might think that the right to access was kind of essential, because one could not enjoy the other rights on the Internet without access to it. Indeed, that right informed the central issue of telecom reform that CPSR took on during the mid-1990s. With the COVID-19 pandemic, institutions in 2020 have come to complete accord that every student, every unemployed person applying for aid, everyone needing any kind of connection to the greater society, needs Internet access. But in the 1990s, a significant minority of CPSR members and supporters had a libertarian bent, which is fully in line with free speech and privacy, but frowned on the subsidies and regulation implied by our call for universal access.
As if to draw a line, our web page introducing the group in 1995 started “The most important civil liberties issue facing us today is getting citizens of all races, classes, and creeds connected to the Internet. Our fight for free speech and privacy rights will remain a hollow victory if cyberspace remains mostly a bunch of well-educated white folks.” I suspect from the clarion-call insistence of the rhetoric that I wrote that paragraph.
Even though the list moderators resolutely agreed on the principle of universal access, we found ourselves eventually in dispute over how to achieve it—and that was fatal to the list.
As relevant events passed us by faster and faster—position papers, proposed regulations in many countries, technical achievements, occasional media coverage, and our own activities—I thought it helpful to write short summaries for the list. I was effectively acting as a news correspondent, and this eventually led to my becoming one for real. Joe Shea was on this list. He had quit his day job to try journalism on the Internet years before any mainstream publication made the move. He wanted a regular Internet feature, and recognized me as the person to do it. I have described my adventures there in an earlier part of this chapter.
We activists were all jazzed up by the outpouring of personal narrative and activist engagement on the Internet. I captured some of this breathless feeling, the anticipation of crossing a continental divide into a new land of freedom, in “The Ghosts of Internet Time”. I flaunted the Internet’s liberatory potential even more shamelessly in a 2001 short story, “The Meaning of Independence Day”. Enfolding a lecture on free speech and censorship along with a high-jinks adolescent adventure, the story ends with an ambiguous exchange between the heroine and the school principal I set up as her epic antagonist. The principal cites the Internet as the first mass communications platform that will challenge entrenched powers, and I leave the reader wondering both what the principal really desires and whether the heroine can win the battle to which he invites her.
If you’re reading these sunny reckonings amidst the wreckage of democracy caused by bots and malicious analytics run amok, and conclude that we were naive in the 1990s, let me take strong opposition to that view. My colleagues in CPSR did not believe that liberation was guaranteed. We poured our free time into policy campaigns precisely because we knew how easy it would be for our rights to slip from our hands. The poison on the Internet lies not in the contributions we celebrated from the masses, but in the vast machinery of data crunching that slots people’s attention into convenient boxes for marketing.
So let’s talk. Do I still nurture a trust in online media to bring people together and channel their strivings to save the world? Yes, I do. In fact, had I not maintained that faith, I could not have devoted the bulk of my waking moments over the past 35 years to explaining how digital technologies work, encouraging professionals and lay people alike to make the most of these technologies.
Back to the mundane grime of life as an activist, and the demise of the CPSR mailing list. Having multiple moderators was a terrible mistake. We started out joyous and positive-minded, but divergences appeared after a few months.
One moderator was a telecom professional, with a background, I believe, in law. His expertise was formidable and very useful in dissecting federal policy and planning our campaigns. But he was cautious in his policy proposals, knowing how the current industry was structured and how difficult change would be.
Another moderator was an artist with a great zeal for our cause. His approach to our universal access principle was visionary. He looked forward to ripping away the artificial barriers of current telecom behavior and policy.
Both approaches have value and integrity. Both were worth hearing. But as the two moderators’ views started to clash, debate became acrimonious. The professional poured scorn on the artist’s naïveté while the artist accused the professional of being a sell-out. Neither list member could be barred, because they had the status of moderator. The medium of email, notoriously conducive to exacerbating disagreements, did not permit a healing sit-down negotiation that might find common ground. I suppose I could have tried to take the dispute to a higher level of CPSR, but I didn’t know what that level might be—we were not a strongly hierarchical organization. I tried to cool the two combatants down, but to no avail. Members of the list drifted away, and finally, in exhaustion, we killed it.
The world of the Internet was evolving rapidly in 1994, and as a novice activist I was putting out feelers to find topics with which I could grapple, express myself, come to terms with the tectonic changes taking place, and perhaps leave a mark on events. The first issue to come up on this beat seems odd even today.
One of the colleagues with whom I’d struck up a casual friendship was a lawyer. I probably met her through Computer Professionals for Social Responsibility, but I cannot guarantee that, because those of us concerned with Internet policy were a small and tight-knit community and could run into each other anywhere. This lawyer—whose name I unfortunately forget as well—suggested I explore current controversies over domain names.
Of course, I knew a lot about domain names. I had reserved praxagora.com for my personal web site, so I understood domain names from the point of view of a proud owner. I also understood the technology, because if you enter praxagora.com into your browser you are tying into a vast and rigorously maintained system called DNS (which some say refers to Domain Name Service, others to Domain Name System) whose operation must be understood by system administrators.
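The “rigorously maintained system” behind that browser lookup is hierarchical: a resolver starts at the root zone and follows referrals down one label at a time until it reaches the zone holding the name. Here is a minimal conceptual sketch in Python; the function name and the list it returns are my own illustration, not any real resolver’s API:

```python
def lookup_chain(domain):
    """Return the zones a resolver consults, from the root zone down,
    to resolve a domain name. A conceptual sketch, not a real resolver."""
    labels = domain.rstrip(".").split(".")
    chain = ["."]  # every resolution conceptually begins at the root zone
    # Descend one label at a time: "." -> "com." -> "praxagora.com."
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

print(lookup_chain("praxagora.com"))
# → ['.', 'com.', 'praxagora.com.']
```

A real resolver queries the name servers for each zone in this chain in turn, caching the referrals and the final address record along the way.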
In 1994, there was no comprehensive, consistent, or coherent policy regarding the allocation and management of domain names. The closest thing to a controlling body was a single person, the redoubtable Jon Postel. A company with unclear and idiosyncratic provenance called Network Solutions controlled all the important top-level domains. A lot of Internet activists wanted some policy assuring that the public interest would be represented in domain names.
I remember telling the lawyer, “Sounds like pretty small potatoes to me.” “Not at all!” she replied. If I knew that the next five years of my life would be filled with meetings, blog postings, position papers, and other political activism around this topic, I might have walked the other way right there.
I don’t expect every reader to care about domain names—although I know that some do. I met someone at a reception of a Harvard University forum about law and technology who pummeled me verbally with condemnations of how domain names were administered. But for most of us, the more interesting issue is the broader philosophical question raised by domain names.
In retrospect, the fight about domain names seems inevitable, because it’s a battle over centralization of control.
The Internet is famously decentralized. I think the story that it was designed in decentralized fashion to survive a nuclear attack is at least partly true, but in any case it resolutely lacks a controlling authority. During its early years, each technical advance would be adopted or rejected by each hub running Internet software.
Various countries ranging from the People’s Republic of China to Saudi Arabia and Russia have imposed controls, but they do so either by punishing actors who are out of favor after the fact or by inserting firewalls at large Internet service providers. The network does not in itself support central control, although once in a while a huge company such as Google could essentially say, “Do this or else”.
Two key exceptions exist, two aspects of the Internet that are fundamentally centralized: the distribution of IP addresses and the distribution of domain names.
Although technical problems exist with IP addresses—shortages in many regions of the world—these did not raise much in the way of policy issues. It is the highly visible domain names that triggered concern. This was a time before search engines were highly effective at returning relevant results, and people relied heavily on domain names just to find sites.
When someone has to make a decision that affects the whole globe, such as who gets to use the name praxagora.com, it raises the pressing question of what institution is ultimately in charge. Even more difficult is how to represent the interests of a mostly unknowing and indifferent public against the interests of actors with narrow agendas. Distributed and representative governance—that’s the dilemma of the Internet, and it first opened its fearsome jaws around domain names.
Confusion in domain names could muddy a company’s brand—and by 1994, companies understood that they needed a clear and unblemished presence in the domain name system even more than they needed a commanding physical presence out in the real world.
Most eyes were on the top-level .com domain, because it didn’t have restrictions (.mil was for the U.S. military, .gov for U.S. government agencies, and so on) and because commercial use of the Internet was on a growth pattern that would outstrip the term “exponential”. Internet-savvy speculators would cheaply buy up domain names corresponding to major brands and wait for companies to beg for them. Tens of thousands of dollars were traded, and companies were angry. There was talk of asserting trademark rights and taking other actions that would place large companies in the driver’s seat over domain names.
My records show that my first article on the domain name controversy appeared in October 1997 in my weekly American Reporter column. My first position paper for CPSR came out in March 1998. Tensions between free speech advocates and corporations were coming to a head, and a major summit was organized to find a solution that all would accept.
Amazingly, the summit succeeded. Free speech activists acknowledged the problems faced by corporations, and the lawyers representing those corporations recognized that the domain name system should serve the public interest. I didn’t attend the summit, but I created a well-received set of principles with fellow CPSR member Harry Hochheiser. I gave our paper the name “Domain Name Resolutions”, which plays off of the technical aspects of domain names. (Turning a domain name into a server’s Internet address is called “resolution”.) The various sides of the debate engaged honestly, transparently, and positively. A solution fair to all was in our sights.
Years of questioning never revealed to me what forces intervened to wreck the DNS agreement or why. The sequence of events was as bizarre as it was relentless. Someone threw together, with no guiding precedent, a bureaucracy called the Internet Corporation for Assigned Names and Numbers, to which Jon Postel abruptly turned over control of IP addresses and domain names. He died at the tragically young age of 55 shortly thereafter, and lawyers took over the vacuum left by the engineers.
The consequences of ICANN’s hasty formation, omitting consultation with those who understood Internet names and numbers, showed up instantly. The obscure creators of ICANN stocked its board with people deliberately chosen for their lack of knowledge about the technologies they were supposed to administer. I had the historic privilege to attend the board’s first meeting, which took place publicly in Cambridge, Massachusetts. Activists who had spent years investigating DNS and advocating for sane policies challenged the ingenue board members, who were obviously out of their depth and couldn’t handle the most basic issues, such as how to run meetings that the public could engage in. The organization got only more distant from its constituents as time went on.
I should admit that ICANN actually found some support in the established Internet community. Some respected members of the Internet Engineering Task Force, which creates the standards over which Internet traffic runs, came out in ICANN’s favor. The noted commentator and investor Esther Dyson became its first chair. But I believe these supporters were in the minority among activists. In a few years, telecom policy would similarly divide Internet activists. Many would call for stringent restrictions on telecom companies’ discretion over traffic shaping, but came up against opposition from important leaders like Internet developer Dave Farber.
It’s noteworthy that ICANN futzed around endlessly with domain name and trademark policy, but never deigned to look into two real problems that could reasonably fall under its purview: the security of the DNS root servers, which are constantly under attack, and the starvation of Internet addresses, which called for a smooth and expeditious transition to a standard called IPv6.
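The address starvation just mentioned comes down to simple arithmetic: IPv4 addresses are 32 bits long, which the Internet’s explosive growth quickly made inadequate, while IPv6 widens them to 128 bits. A short sketch using Python’s standard ipaddress module shows the gap:

```python
import ipaddress

# IPv4 addresses are 32 bits: about 4.3 billion in total,
# fewer than the world's people, let alone its devices.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses

# IPv6 widens the address to 128 bits, a space so vast that
# exhaustion is no longer a practical concern.
ipv6_total = ipaddress.ip_network("::/0").num_addresses

print(ipv4_total)   # → 4294967296, i.e. 2**32
print(ipv6_total)   # → 2**128, roughly 3.4e38
```

The transition was slow in part because the two address formats are incompatible, so every router, server, and application had to learn to speak both.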
Even though ICANN was clueless about its domain of operations, nobody was more expert than ICANN at creating expensive bureaucracy. I do not remember who decided to break the simple job of registering our domain names into two parts. But ICANN became responsible for regulating two types of organizations, at least one of which was redundant. The first were the “registries”, which were responsible for maintaining and sharing the mappings between domain names and addresses. The second were “registrars”, which were responsible for charging an exorbitant fee to anybody who requested a domain name. I don’t know of any other responsibility assigned to the “registrars”, except those they picked up over time as part of the superstructure plastered on top of DNS by ICANN.
Now and for some time after, I was writing articles about DNS and soon ICANN on an almost weekly basis. Because large corporations asserted their right to control DNS on the basis of trademark law, I learned a good deal about that corner of legislation.
ICANN pressed an even bigger issue on policy-makers from all corners: how to govern a virtual space? Nominally subordinate to an obscure sub-agency of the U.S. government, the National Telecommunications and Information Administration, ICANN effectively reported to no one. Everyone agreed that its historical relationship to the NTIA was arbitrary and should not continue, but the dilemma was how to grant some accountability to the public.
There are many facets to this conundrum, which can’t be resolved in a manner as simple as John Perry Barlow suggested in his utopian Declaration of the Independence of Cyberspace. Currently ICANN is in the astonishing position of being one of the few organizations in the world that officially has no accountability to anyone (except a fragmented internal collection of stakeholder representatives, structured as Byzantinely as everything else ICANN does). I reported heavily on the hot debates taking place around Internet governance.
Hardly anybody now remembers how close the DNS stakeholders in the mid 1990s came to creating a supple and well-run DNS system. Victory was snatched from us by obscure forces who have created the boondoggle that funds junkets for ICANN board members and creates uncertainty for all.
The hegemony is eminently visible in the unconscionable prices charged for domain names, although questionable policies extend deep into the system. Technically, a domain name consists of a row in a database with information about the owner, and an entry in a table in the memory of a server that resolves the domain name. That all costs a tiny fraction of a cent, but domain names go for $15 or more. Monopoly charging supports the kind of gargantuan, complex, arcane bureaucracy beloved by large corporations and their overcompensated attorneys. Companies that want to take away a domain have several options for doing so, while individuals and non-profits are left defenseless. In line with classic strategies for maintaining control and creating conflicts that only the bureaucracy can manage, ICANN enforces artificial scarcity by limiting the creation of new top-level domain names such as .info.
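To see how little machinery a domain name really requires, here is a toy sketch of a registry as a simple lookup table. The entry, the owner field, and the 203.0.113.7 address (a reserved documentation address) are all hypothetical examples, not real records:

```python
# A toy registry: conceptually, a domain name is just a record
# mapping the name to ownership data and an address.
registry = {
    "praxagora.com": {"owner": "example owner", "address": "203.0.113.7"},
}

def resolve(name):
    """Return the address on record for a name, or None if unregistered."""
    record = registry.get(name)
    return record["address"] if record else None

print(resolve("praxagora.com"))   # → 203.0.113.7
print(resolve("unclaimed.com"))   # → None
```

Everything beyond an entry like this (billing, accreditation, dispute procedures) belongs to the administrative superstructure, not the technology.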
Along with the other activists who had gathered around the domain name issue in the mid-1990s, I continued to follow ICANN’s tortuous organizational proliferation and decision-making for about a decade. Some of the activists, more tenacious than I, hounded the organization much longer. Someone even set up an organization called ICANNWatch, whose last posting dates to 2010.
Toward the end of my long dance with domain names, I received a juicy perk: an all-expenses paid trip to Paris. The locus of this visit was one of the international conferences called regularly by the Organization for Economic Co-operation and Development (OECD) to bring together all the member nations from around the world and assess their progress on various tasks. The OECD, of course, is one of those international collections of policy wonks who have an uncertain role but can spend enormous amounts of money on it. Many goals of the OECD are certainly laudable; it embraced the Millennium Development Goals, which include such crucial initiatives as health, education, and environmental sustainability.
A group of Internet activists persuaded OECD leadership in the late 1990s to host an additional conference day devoted to Internet policy issues. Two of the organizers were Meryem Marzouki, a computer expert and political activist in Paris, and Marc Rotenberg, head at the time of the Electronic Privacy Information Center (EPIC). They knew me from various policy battles, and I had met Marzouki at a Computers, Freedom, and Privacy conference, so they thought of me when they needed a speaker to cover DNS. They gave me eight minutes on the schedule, a cameo role for which they offered to pay flights, hotel, and meals for three days. That gives you a sense of the cash flow at OECD.
The Paris event was a great way to connect with people from as far away as South Africa with whom I had worked online. I wrote my talk, about the importance of access by small organizations to domain names, in an English style that I hoped would be easy to translate into French. I actually overran my eight-minute allocation by three minutes, but the talk was well received. And it was just my second day in Paris, after which I was free to enjoy the city. That’s where I made my only mistake.
Having never been to a meeting of a major international policy organization (except a few minutes in the United Nations General Assembly as an elementary school student), I felt curious about the OECD. As an invited guest, here was my great opportunity to sit in on their sessions and see democracy in action. I set aside my last full day in Paris for this education in policy-making.
Naturally, the whole event was a bore. Bleary-eyed representatives from various countries intoned their progress toward various pre-set goals. I was watching it all on video in a windowless conference room. Had the organizers talked to us Internet activists, they could have done it all by email and saved their funders tens of thousands of dollars. Thus I discovered the frustration which I’m sure many people feel toward bodies such as the European Union. It’s hard to get hold of a sense of citizenship in the face of such bureaucracy; it’s easier to see a bunch of mediocre talents living high off the hog at the public’s expense.
I finished my invitation to the OECD in their cafeteria, scarfing a lunch that couldn’t boast of any camaraderie with the cuisine for which Paris is celebrated. I then headed a few blocks west for a visit to the glorious Marmottan Museum, my sole tourist indulgence before flying home.
This chapter has shown several of the critical policy issues where computer experts spoke up and made an impact on public life. Some stories featured CPSR, some involved other institutions. I’ll finish with one small intervention we made in my local chapter of CPSR, early in my activism—an intervention with portent for a massive wave of change to come.
CPSR was willing to take its message anywhere. Our Boston chapter was one of the strongest, and in 1992 we conceived of lobbying our representatives at the Massachusetts State House to go onto the Internet.
Although we’re one of the most progressive states in the country, the liberal values of Massachusetts legislators do not inspire any particular transparency and sense of public participation within their own deliberations. The institution is extremely hierarchical. For a while, an annual ritual brought the Governor, the Speaker of the House, and the Senate President together to secrete themselves in a room for a week or so and emerge with a budget at the end. They often missed their legal deadline for a budget, but the rest of the legislature and everyone else in the state just had to wait with bated breath for the termination of their conclave. When we talked to representatives about putting proposed bills on the Internet so that the public could watch the law being made, one representative said that sponsors liked to release bills on paper so they could keep all copies in their office and see which representatives came by to ask for a copy.
Four of us went to the State House one day and managed to hook up a terminal through their phone. It turned out that even their phone jacks were some strange outdated design, so that large infrastructure changes would be required to get their computers on the Internet. But we did establish a network connection. Sitting at the terminal, I took several representatives on a tour of the Internet. I showed a few basic functions such as mail, and ended with a flourish by demonstrating the most advanced tool of the time: gopher. You can look up this software if you haven’t heard of it. It was an early implementation of hypertext, a pre-Web way of connecting information from various sites through a hierarchy of menus, all of course in text. The representatives were impressed.
We also met the IT staff of the State House, who were also enthusiastic. Eventually, this vision became a reality. The Massachusetts State House web site contains detailed information on bills, hearings, and votes.
Some 25 years later, a vast movement would arise in countries around the world for online government. Advocates within and outside government would call not just for putting governments and their staff on the Internet, but for using the Internet to draw the public deeper into policy-making than ever before. O’Reilly would latch on to that movement.