Peer-to-Peer File Sharing Systems Caught Between EMI and Echelon

by Andy Oram
April 27, 2001

We’d all like more accountability on the Internet: less unsolicited bulk email, fewer denial-of-service attacks, and even some relief from stupid newsgroup postings. But most of us regard these drawbacks as an acceptable trade-off for a degree of anonymity and privacy.

Still, some problems seem to cry out for accountability. For many people, they justify abrogating the time-honored right to anonymity.

EMI and other members of the Recording Industry Association of America (the organization leading the lawsuits against Napster) would probably place the unauthorized exchange of copyrighted files at the top of their list of problems. The designers of Echelon (a top-secret international program to track all traffic on the world telecommunications network) would cite use of the Internet to plan criminal activities.

Peer-to-peer forces us to rethink the nature of these problems and the proposed remedies. Furthermore, peer-to-peer systems reveal weaknesses in current legal approaches to Internet accountability. In this article, I’ll focus on the legal status of the best-known peer-to-peer application: file sharing.

Characteristics of peer-to-peer file sharing

Freenet is the poster child for anonymous file sharing, but a number of companies have emerged over the past year that offer practical applications using related models. A couple that I’ve talked to include OpenCola and XDegrees. They differ from Freenet, of course, as well as from each other, and neither would want to be thrown casually into the same bucket as the much-maligned Freenet. The designer of Freenet, Ian Clarke, has started a commercial venture himself called Uprizer, employing Freenet’s technical principles.

I mention these companies simply because their existence demonstrates that the following characteristics associated with Freenet are becoming mainstream:

In short, there are solid business and technical reasons for doing the things that some observers consider “subversive” about Freenet. We may be entering an era where large numbers of people request content from a global system rather than from a particular host. Any tie between the content and the person responsible for putting it on the system is established outside the file-retrieval protocol.

That last sentence is a bit technical; I mean only that you can’t tell who’s responsible for a file just by looking at its name. Currently, a URL contains the domain name or IP address of the responsible site. Under a peer-to-peer file-sharing system, authors indicate ownership by some other method, such as adding digital signatures to their works. Such a development would render obsolete many laws that were recently passed and long fought over.
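The separation is easy to illustrate with a toy sketch (all names here are hypothetical, and the signing function is a stand-in for real public-key signatures such as RSA or DSA): the retrieval key is derived from the file’s bytes alone, while authorship is asserted by a signature carried entirely outside the naming scheme.

```python
import hashlib
import hmac

def content_key(data: bytes) -> str:
    # In a content-addressed store, the file's "name" is a digest of
    # its bytes -- the key says nothing about who inserted the file.
    return hashlib.sha256(data).hexdigest()

def sign(data: bytes, private_key: bytes) -> str:
    # Stand-in for a real public-key signature: the author attaches
    # this separately to claim ownership; the network never needs it
    # to locate or serve the file.
    return hmac.new(private_key, data, hashlib.sha256).hexdigest()

essay = b"My essay on network accountability"
key = content_key(essay)          # what the network uses to find the file
sig = sign(essay, b"author-secret")  # what a reader uses to verify authorship

# The same bytes always yield the same key, whoever inserts them:
assert content_key(essay) == key
```

The point of the sketch is the asymmetry: anyone holding the file can serve it under its content key, but only the holder of the private key could have produced the signature.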

Is peer-to-peer file sharing, then, a cutting-edge technique with substantial practical value, or a way to evade corporate and government control? Both. As a little thought experiment, I have suggested a design for a system that uses peer-to-peer file sharing to sneak past an Echelon-like surveillance network.

Who is accountable?

It is now time to review the current state of legal liability for the content of networked computer systems.

A wide range of governments hold system owners responsible for the content they host on their systems. Some (usually those, like China, Vietnam, or Saudi Arabia, that are trying to shore up long-established censorship regimes) go further and make owners responsible for material that passes through their systems on the way from source to viewer. Both responsibilities clearly lead to trouble for any kind of content-caching system. Many laws have explicitly exempted caching in recognition that it is a significant Web strategy, but these laws were still not designed to take peer-to-peer file sharing into account.

Where is the problem between peer-to-peer and the law? Conventional caching systems like Akamai strike deals with companies of national scope that want content widely distributed for fast retrieval. It’s highly unlikely that the TV studios and ad agencies that use Akamai will be accused of distributing illegal or defamatory content, but any problems that come up can be resolved through the same mechanisms as in conventional media like newspapers.

Peer-to-peer systems bring in a more diverse bunch of end-users. Caching is done less consciously; files that match certain criteria tend to just end up together on systems that are near the people requesting them. Usually, the users have pre-existing trust arrangements and are not likely to be offended by content that appears on their systems. Yet the fact remains that users don’t know what is physically present on their disks.
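A toy model makes the point concrete (this is a generic sketch, not any particular protocol; the node names are invented): when a reply travels back through a chain of peers, each peer keeps a copy, so files accumulate on machines whose owners never requested them.

```python
def fetch(path, caches, key, origin_data):
    """Toy model of request-driven caching.

    'path' lists the nodes between the requester and the data holder.
    The reply is cached on every node it passes through, so a file
    can end up on a disk whose owner never asked for it.
    """
    # Serve from the nearest cached copy if one exists along the path.
    for node in path:
        if key in caches[node]:
            origin_data = caches[node][key]
            break
    # Every node on the path caches the reply as it travels back.
    for node in path:
        caches[node][key] = origin_data
    return origin_data

caches = {"alice": {}, "relay": {}, "bob": {}}
fetch(["alice", "relay", "bob"], caches, "song.mp3", b"...bits...")
# After one request, the relay holds a copy it never chose to store:
assert "song.mp3" in caches["relay"]
```

The legal tension described above falls squarely on "relay": its owner is storing the file, yet took no deliberate action and may have no idea the file is there.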

Now consider present-day legal requirements that have little appreciation for the fleeting, Puckish nature of peer-to-peer data.

All these laws and precedents were designed for a simpler age, when everybody was expected to know what was on his or her computer. As the listing shows, that expectation is challenged even by long-established practices such as newsgroup exchange. Modern peer-to-peer caching removes this certainty entirely; the caching directly contradicts legal requirements.

Perhaps lawmakers will not find this conflict troubling. Few of us would be surprised if major entertainment firms and mainstream politicians, along with a number of meddlers of all types, would prefer shackling people to old views of computer systems and sacrificing the potential benefits of more flexible technologies.

Perhaps peer-to-peer systems can survive under current laws like the DMCA and the Bloche amendments. Site administrators are not responsible until the complaining party locates the system or systems that contain the content in question and officially notifies the owners. But the result could also be a succession of arbitrary and punitive actions against innocent PC users or their employers. Indeed, the EFF and several news sites are already predicting an oncoming crackdown.

Conclusion: Are content control and spying worth the suppression of useful technology?

In this article I have tried to demonstrate the consequences of some innovative Internet developments for anonymity and tracking. Freenet is only a harbinger of a general trend toward distributing content in such a way that it has no identifiable source, no fixed location, and no predictable reader. It has already been widely predicted that such systems will make it harder to detect and prosecute illegal content distribution; they may well also make government surveillance harder.

Current laws and emerging legal precedents clash with the new technology. Laws and precedents in the fields of intellectual property try to stifle the distribution of content by requiring every system user to be responsible for what is on his or her system. Surveillance assumes that law enforcement can pinpoint and track the location of suspects. The status of Freenet and other distributed file systems is ambiguous under these regimes.

Nor is their status likely to improve as the extent of their challenge becomes better known. While I don’t presume to think that policy-makers and legislators internationally will read this article, they will eventually find out what kinds of protection distributed file systems offer to the people they are trying to spy on. The restrictions already in place will probably be strengthened and clarified to exclude the use of the new systems.

Thus, those interested in the continual improvement and democratic spread of useful technology should prepare for a fight. The watchwords are: better surveillance or better communication?

Wire-tapping has many critics. In a major paper, for instance, the American Civil Liberties Union writes that, “The American Civil Liberties Union has historically opposed all forms of electronic surveillance by the government,” and, “Serious crimes of violence, including terrorist crimes, are almost never the targets of electronic surveillance. Electronic surveillance does, however, lead to violations of the privacy rights of vast numbers of innocent Americans.”

While wiretapping is definitely useful to law enforcement, many other ways are available to track criminals in the real world. You can follow the trail of weapons, of drugs, of money (insofar as it’s not digitized and hidden by cooperating banks), and of other physical evidence.

Government attempts to hold on to their surveillance capabilities are also profoundly antiquated and more than a bit pathetic, because they assume a clear cleavage in the world between the good guys and the bad guys. Governments always think they can hold on to their advanced technologies and keep them out of their adversaries’ hands. They’ve tried to do that when developing nuclear, chemical, and biological weapons, and they’re trying again with electronic snooping. But any technology they develop will eventually be used against them. In fact, digital technologies can be stolen relatively easily—the government possessing the technology doesn’t even have to err by sending busybody airplanes too close to the adversary’s territory. Thus, while there are temporary winners in the infowar race, everybody comes out a loser at the end.

Andy Oram is an editor at O’Reilly Media. This article represents his views only. It was originally published in the online magazine Web Review.