Peer-to-peer offers an opportunity to rebuild the foundations of the Internet

By Andy Oram

The following talk was delivered at an international symposium called Info-Tech2001, sponsored by the Kansai Institute of Information Systems in Osaka, Japan on November 15, 2001.

Abstract

Many companies developing computing technology have discovered that decentralized systems can efficiently perform searches, share content among collaborators, deliver streaming data, and solve large computations. While these peer-to-peer systems are often thought to create new problems for organizations and software developers, this talk shows that the systems actually just reveal existing problems that have plagued the Internet for a long time. Business and social impacts are also discussed.


Contents

  1. The term peer-to-peer
    1. Often criticized but still a useful guide to thinking in new ways
    2. My background and how I started to research the subject
    3. Topics of this talk
  2. Example showing the value of peer-to-peer
    1. Problem area to be examined
    2. An old, poor solution
    3. First stage of a better solution: a centralized solution
    4. Next stage, involving some decentralization
    5. Next stage, involving a more peer-to-peer type of solution
    6. Most sophisticated solution currently available: fully peer-to-peer
  3. Overview of illustrative and interesting peer-to-peer products
    1. File sharing
    2. Distributed databases
    3. Distributed or grid computing
    4. Collaboration
    5. Solving pieces of the infrastructure
  4. Technical problems to be solved in peer-to-peer
    1. How to find peers
    2. Delivering services
    3. Security
    4. Metadata
    5. Firewall issues
    6. Bandwidth issues
  5. Organizational problems to be solved in peer-to-peer
    1. Losing control over what is on the systems within the organization
    2. Losing control over communications among employees
  6. Possible business impacts of peer-to-peer
    1. Improving collaboration at the lowest levels between individuals in different corporations
    2. Transparency in regard to customers
    3. Intensifying competition
  7. Some social values
    1. Individuals and community
    2. Where does the value lie in information?

The term peer-to-peer

Often criticized but still a useful guide to thinking in new ways

I will speak today about the term peer-to-peer: a term that I did not welcome when it emerged; a term I did not ask to be applied to the research I was doing; a term criticized by many as too broad, too vague, and too glib. But even so, I am excited about peer-to-peer. I believe it has radically expanded the possibilities for the Internet and for telecommunications.

My background and how I started to research the subject

How can I speak about peer-to-peer systems when I myself am not a software developer and have not created any such systems? As a book editor in the computer field, I’ve managed to stay ahead of a lot of readers, who have found value in my articles about the development of the peer-to-peer movement, about companies creating peer-to-peer systems, and about related social issues. I have also edited the anthology Peer-to-Peer for my company, O’Reilly & Associates. A Japanese translation of this book will be released in early 2002. Part of the appeal of my articles and of the book is that I bring to my research a burning interest in how technologies alter the fundamental ways people live and work. This sociological interest brought me fortuitously and unexpectedly into the center of public discussions about peer-to-peer systems fairly early: the spring of the year 2000. In this speech, I will continue to combine the technical, the commercial, and the social issues in peer-to-peer.

Topics of this talk

First, I will quickly discuss the differences between the systems used by most of the computer industry at present and the new peer-to-peer systems. Some slides will help illustrate how the systems are structured. I will also offer a typical taxonomy or breakdown of peer-to-peer systems. Then I will list some of the technical problems that remain to be solved in peer-to-peer systems, followed by some of the organizational problems that businesses will face. I will try to predict—very speculatively—how peer-to-peer systems might change the climate in which businesses operate. Finally, I will describe the social values embodied in peer-to-peer.

Example showing the value of peer-to-peer

Peer-to-peer is not a module you add to your computer or a programming language you can study. It is deeply buried in systems design—in short, it is part of a network architecture. To help you understand what peer-to-peer means and how it can be useful, I will show various architectures that have varying degrees of peerlike decentralization.

Organizations choose peer-to-peer not because it’s fundamentally elegant or theoretically superior, but because it solves a problem. So I have chosen one of the most common problems faced in nearly every modern organization, and will show you a series of solutions that increasingly depend on peer-to-peer techniques.

Problem area to be examined

Suppose that several of your staff are collaborating in the preparation of a report. They all need to view successive drafts and contribute changes.

An old, poor solution

Most companies deal with this task by passing the document around in email. (View Slide 1.) Day after day, each person receives a copy of the document in his or her mailbox. Each person is constantly trying to find out what the current version is and who is working on it—whose court the ball is in, to use a tennis phrase.

First stage of a better solution: a centralized solution

Modern Internet technology provides a better solution to the problem. Suppose the company has an internal Web site—an intranet—and that the Web master puts each version of the document on the site as the document is updated. (View Slide 2.) Now everybody knows how to get the most up-to-date version. Nobody will be working from an outdated version, and everybody saves a lot of disk space because they are not each storing a copy of the document.

There is a drawback, though: every time somebody wants to make a change, he or she must ask the Web master to put the new version on the Web site. This tears the poor Web master away from important work needed to keep the Web site running and quickly becomes a major annoyance. Organizations do not thrive by annoying their critical support staff. And the Web master, who is not even assigned to the project, may end up being a drag upon it because he or she will repeatedly delay putting up the document until a convenient moment arrives. This solution also does not scale. Imagine several dozen projects going on simultaneously, each project manager asking the Web master to put up a new version once or even several times a day.

Next stage, involving some decentralization

We can greatly improve the solution I just showed with some fairly simple changes. The improvement is to keep documents on the central Web site, but let each individual put up a document without having to ask the Web master. Few conventional Web sites offer this kind of feature, strangely enough, but there are many projects that try to expand the Web this way. One is a commercial venture called Zaplet, which offers a service called appmail. (View Slide 3.)

To share a document using Zaplet’s appmail, you write an email message and include a document in a manner very similar to regular email—but the document does not get sent through email. Instead, the document is placed on a central server and each recipient of the email message finds a pointer in it to this server. Responses to the email also go on this server, and you can upload new versions of the document as you edit it; thus, all versions of the document and all follow-up comments are available in one place. And when you update the document or add a comment, it goes right on the site and no email needs to be sent (unless participants specifically request email to notify them of updates).

We are now beginning to see a hint of a peer-to-peer solution. Anyone with access to email and to a Zaplet site is now a publisher; control has been decentralized. This solution rescues the beleaguered Web master, but there are still drawbacks. Your team must still have access to a server, which is a major piece of hardware and software requiring much care and expertise. You must also trust that the computer hosting the server will keep running and preserve your document. Finally, while the file is stored in an easily accessible place, you still have to use email to notify everyone when you’ve changed the document—and email is a slow medium.

Another experimental system for group editing is called WikiWikiWeb. The current system lacks security, and that’s an understatement. It lets anyone do anything he or she wants to a document, even delete it. But this is not an inherent weakness; some access controls could be layered on top of the system. A number of other experiments also provide what researchers call the Two-Way Web.

Next stage, involving a more peer-to-peer type of solution

The previous solution felt less centralized than the one before it, but still it revolved around a central server. We just automated the uploading of a file to make the process easier for everyone. But we can go beyond the Web to something that fits our purposes better. We can remove some overhead, and perhaps diminish the need for powerful, central servers using high-bandwidth connections, by keeping each file on the system where its author is working on it. This solution truly qualifies for the term “peer-to-peer.” (View Slide 4.)

NextPage is one of the most successful of these peer-to-peer companies. Its platform, NXT 3, ties together content throughout an enterprise, and outside the enterprise if desired, to create a single structure called a Content Network that users can navigate and search. It’s all decentralized, although it depends on a set of servers running NXT 3 software.

A system called Folders, developed by OpenCola, goes even further. It looks at what files you request, what files other people request, and who is similar to you in their preferences. After deciding, based on these preferences, that you’re likely to be interested in a certain file, Folders delivers it automatically to your system. Over time, without explicit intervention from you, a folder on your system grows to contain files similar to the ones you’ve requested in the past. NextPage’s newest product, Matrix, will eventually provide similar services.

A system with this kind of sophistication exemplifies the vision Douglas Engelbart had forty years ago of a computer that could augment the human intellect. The technology behind this powerful, almost magical, process is called collaborative filtering. It’s collaborative, because your preferences are automatically combined with the preferences of other people on the system. The filtering part lets you find material you’re likely to be interested in, because people similar to you have shown an interest in it.
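A rough sketch of collaborative filtering in Python may make the idea concrete; the users, the files, and the similarity measure (simple set overlap) are all invented for illustration, and real systems use far more refined scoring:

```python
# Each user maps to the set of files he or she has requested.
# All names and files here are invented for the example.
requests = {
    "alice": {"report.doc", "budget.xls", "plan.txt"},
    "bob":   {"report.doc", "budget.xls", "memo.txt"},
    "carol": {"photo.jpg", "song.mp3"},
}

def similarity(a, b):
    """Jaccard overlap between two users' request histories."""
    inter = len(requests[a] & requests[b])
    union = len(requests[a] | requests[b])
    return inter / union if union else 0.0

def recommend(user):
    """Suggest files requested by similar users but not yet by `user`."""
    scores = {}
    for other in requests:
        if other == user:
            continue
        sim = similarity(user, other)
        for f in requests[other] - requests[user]:
            scores[f] = scores.get(f, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice")[0])  # prints "memo.txt": bob is most like alice
```

Because bob shares two of alice’s three files, his remaining file ranks highest; carol’s files trail with no similarity weight behind them.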

Systems that transfer files reproduce one of the drawbacks of the old system where people sent attachments in electronic mail. Files can get out of date, although automated updates help to lessen the problem. It’s hard to keep people in sync.

Most sophisticated solution currently available: fully peer-to-peer

The goal of synchronizing a team of people lies behind Groove. People using Groove join a shared space that is private and secure. Any change made to a document in this space is instantly viewable by all; simultaneous changes can even be made by different people. (View Slide 5.) Instant messaging, voice over IP, a collaborative whiteboard, and other applications can also take place in the shared space. No central server is required; Groove can be one of the purest peer-to-peer systems.

Groove’s shared space is an illusion created by rapid updating. Behind the scenes, every system that participates in Groove stores locally all the data that exists in the shared space. This redundancy is like that of the original email system with which I began this discussion: every document is on every computer. The documents are simply managed better, so they can be updated instantly on every system and still be stored compactly. Groove illustrates impressive achievements in real-time synchronization, in security, and in managing presence (that is, finding people wherever they are).

Some parts of Groove are being incorporated into Microsoft’s .NET. Groove and Microsoft have recently announced a partnership that will make Groove’s contributions much more widespread.

Overview of illustrative and interesting peer-to-peer products

The term peer-to-peer is inordinately fertile. It can be stretched to cover so many types of systems that it can get you in trouble. While each commentator has his or her own categorization, I will offer a list of categories based on what I see as competitive markets. The companies I combine into a single category may use somewhat different technologies, but they are trying to solve a similar problem. Here I will merely cover a handful of the many companies doing interesting work; by listing a company I do not recommend investment or imply that it will succeed.

File sharing

File sharing is the application known best by the general public, thanks to Napster and then to the even more ground-breaking systems Gnutella and Freenet. Whereas Gnutella and Freenet stress anonymity as a goal, recent corporate offerings stress efficient data transfer. I already mentioned NextPage and OpenCola. An interesting variation on this theme is companies such as AllCast that offer content delivery networks, distributing streaming media like radio and video. (View Slide 6.) When a lot of people simultaneously log in to a broadcast, AllCast’s central server opens connections only to the first few people. These people send the data on to the next people who connect, who send the data on to still other people.

Peer-to-peer file sharing and streaming have been found to make downloading quicker and relieve bandwidth pressure on servers. Speed can be enhanced by downloading different pieces of a file from many locations at once, or by choosing a location that is close to the user geographically or in the network topology.
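The relay arrangement that AllCast-style broadcasting uses can be sketched as a small tree-building routine; the fan-out limit and peer names are invented, and a real system would also weigh network distance when picking a parent:

```python
# A hedged sketch of the relay pattern: the broadcast server feeds only
# the first few listeners, and each listener can in turn feed a few more.
FANOUT = 2  # how many downstream peers each node will feed (invented limit)

def build_relay_tree(peers, server="server"):
    """Assign each arriving peer a parent with spare upstream capacity."""
    parents = {}                 # peer -> node it receives the stream from
    capacity = {server: FANOUT}  # remaining open slots per node
    for peer in peers:
        # pick the first node that still has a free slot
        parent = next(n for n, free in capacity.items() if free > 0)
        parents[peer] = parent
        capacity[parent] -= 1
        capacity[peer] = FANOUT  # the new peer can relay to others too
    return parents

tree = build_relay_tree(["p1", "p2", "p3", "p4", "p5"])
print(tree["p1"], tree["p3"], tree["p5"])  # prints "server p1 p2"
```

Only the first two peers ever touch the server; the other three receive the stream from fellow listeners, which is exactly how the load on the central broadcaster stays flat as the audience grows.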

One interesting consequence of peer-to-peer systems is that the content becomes divorced from its location; it floats freely through the network and nobody can be sure whether they even have it on their system. Deleting it becomes difficult in some systems. This is why many large copyright holders fear file-sharing systems. Other people who want to censor content for some reason will also find these systems problematic.

Distributed databases

Distributed database vendors have goals similar to file-sharing systems, but instead of passing around files, these solutions pass around data in the form of database queries and their results. (View Slide 7.) Examples include the companies Thinkstream and Jibe. A Jibe representative tells me that the distributed database provides a key advantage by letting a company get up-to-the-minute inventory information from its vendors.

Distributed or grid computing

All peer-to-peer systems distribute CPU activity, simply because the systems’ accomplishments are based on participation by end-user sites. But distributed computing in particular focuses on this distributed CPU activity and makes it the system’s raison d’être (reason for being). A central server breaks a large job into multiple pieces, farms the pieces out to end-user computers for processing, and collects results. (View Slide 8.) CPU time becomes a commodity offered by each system to its neighbors. The best known examples of these companies are United Devices and Entropia.
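The split, farm out, and collect cycle can be sketched in a few lines of Python, with local threads standing in for remote peers; the job itself (summing a large range of numbers) is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def work_unit(bounds):
    """The piece of the large job that one peer computes."""
    lo, hi = bounds
    return sum(range(lo, hi))

def run_job(n, pieces=4):
    # the "central server" breaks the job into pieces...
    step = n // pieces
    chunks = [(i * step, (i + 1) * step) for i in range(pieces)]
    chunks[-1] = (chunks[-1][0], n)  # the last chunk absorbs any remainder
    # ...farms the pieces out, then collects and combines the results
    with ThreadPoolExecutor(max_workers=pieces) as pool:
        return sum(pool.map(work_unit, chunks))

print(run_job(1_000_000) == sum(range(1_000_000)))  # prints True
```

In a real deployment, each chunk would travel over the network to an idle desktop machine rather than to a local thread, but the bookkeeping on the central server looks much the same.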

One sign that the peer-to-peer field is still scurrying around in confusion, trying to find a way to market itself, is that multiple terms circulate for a single concept. Distributed computing is sometimes called grid technology, reflecting the idea that the aggregation of computers could form a grid like the electricity grid from which communities buy power. You don’t call up a particular gas-burning plant to ask for some electricity; you just turn on your light and the electricity is there thanks to an aggregation of many power plants. The same can be done with CPUs. They form a unified resource that individuals can tap into as necessary. But by doing so they cease to be peer-to-peer systems in the most liberating sense. The more interesting peer-to-peer systems that will really transform our way of working are those that honor the individual contribution of each peer; the particular resource you provide makes a difference, and so does your unique position as its provider.

Since grid computing requires each user to run a program, security issues arise. A representative from the company Entropia reported that, in addition to encrypting content, companies prefer to restrict grid computing to their own LANs. This way they have to worry less about data corruption or about losing intellectual property rights on their data.

Collaboration

I already mentioned Groove as a collaboration system; another is Consilient. (View Slide 9.) Consilient provides structure for the normal work flow within an organization; it automatically passes information to the people who need it and helps them handle it correctly. With the systems provided by these companies, small groups can work efficiently and exchange ideas quickly. The term “frictionless” is popularly used to describe the communication promoted by such systems. They break down barriers between companies and organizations, allowing each individual to contribute to the collective good. These systems don’t care what department you’re in or what company you work for; once someone invites you into the system you are part of the team. Overcoming boundaries is important for modern commerce, but may be seen as a threat by traditional managers accustomed to handling the border traffic between companies and between departments within a company.

Solving pieces of the infrastructure

Peer-to-peer requires organizations to rethink the underpinnings of their networks and to do things in new ways. Several companies therefore try to make a living by solving some of the underlying problems in peer-to-peer; by making other developers’ work easier. Sun Microsystems is the most ambitious company so far in the peer-to-peer space, proposing JXTA as a solution to many problems. Microsoft’s .NET is flexible enough to be used as both a client/server platform and a peer-to-peer platform, maybe within a single user session. Groove, which I mentioned previously, provides a platform and encourages third parties to develop tools and applications that take advantage of its peer services (such as security, synchronization, and presence). Many other companies are promising to save programmers time and trouble in creating peer-to-peer solutions. Naturally, establishing a firm place in the market is a difficult task for everyone except Microsoft, and possibly Sun.

Technical problems to be solved in peer-to-peer

Even though a number of peer-to-peer systems are deployed and operational right now, they’d be easier to use, and possibly easier to use in combination, if some technical problems were solved.

Valuable innovations usually put new strains on organizations, just as taking up a new sport puts strains on muscles you didn’t know you had. Peer-to-peer is certainly innovative enough to raise problems in the organizations that try to adopt it. But these problems do not condemn peer-to-peer as a bad idea. The problems all existed before peer-to-peer systems became popular. In the past, solutions could simply be put off, because people found ways to work around the problems.

Adopting peer-to-peer solutions means putting your network resources to fuller use. That’s why problems that were minor irritants before turn into glaring incapabilities now. Similarly, research projects that simmered along in the academic community for years are now becoming urgent deliverables.

We have built the Internet too fast. We made some bad choices and took some wrong turns along the way. But as the Japanese automobile manufacturers forced the American automobile manufacturers to do in the 1980s, one can always try again and do a better job. On the Internet, peer-to-peer gives us a second chance. Now we can carefully re-examine the foundations of networking and rebuild the parts that were slapped together too hastily over the past couple decades.

How to find peers

The first problem is how to find the peer that has the resources you want. This problem is called resource discovery, and depends heavily on another topic, naming or identification. The relationship between the problems is like this: most end-users have frequently changing addresses on the Internet, and no identity or name other than an email address that is useful for nothing except email. If a peer-to-peer system can’t name them, it can’t find them. When millions of Japanese mobile phone users log into the Internet using their high-speed third-generation (3G) connections, you’ll want a robust naming system.

So far, the solutions found for peer-to-peer involve using a specialized, central database. Instant messaging systems work this way. You have to connect to the server each time you go online so it knows your Internet address, and if the server is down there’s no communication. Some systems allow communication to continue along existing sessions after the server goes down, but new sessions cannot start. Some of the most sophisticated solutions use a hierarchical and replicated server to decrease the chance of failure; such solutions range from the classic Domain Name System we all use to new products like XDegrees. Practical solutions also know that I am Andy Oram whether I’m attaching to the network from work, from home, from a mobile phone in an airport, or any place else.
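The central-registry approach can be sketched like this; the class, names, and addresses below are invented, and a production registry would add authentication, replication, and expiry of stale entries:

```python
# A minimal sketch of a naming registry: each peer reports its current
# network address under a stable name, and other peers look the name up.
class Registry:
    def __init__(self):
        self.addresses = {}

    def register(self, name, address):
        """Called by a peer each time it comes online (address may change)."""
        self.addresses[name] = address

    def locate(self, name):
        """Called by a peer that wants to reach `name` right now."""
        return self.addresses.get(name)

registry = Registry()
registry.register("andy", "203.0.113.7:4000")   # logging in from work
registry.register("andy", "198.51.100.2:4000")  # later, from a mobile phone
print(registry.locate("andy"))  # prints "198.51.100.2:4000"
```

The stable name always resolves to the most recent address, which is the property the instant messaging systems described above provide; it is also why everything stops when this single registry goes down.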

Delivering services

Any two systems that want to communicate need a protocol: they need to know what format data will arrive in and how to start and end a conversation. A lot of systems use the protocol developed for the World Wide Web, or at least the port allocated by each system to the World Wide Web, because firewalls block nearly everything else.

The Web is actually pretty flexible. Because Web access is universal, and because so much information is currently delivered over the Web, the chief method of service delivery for the next couple years will be Web services. The XML-RPC and SOAP standards make peer-to-peer possible, and they also facilitate automated applications over a more traditional client/server session.
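Python’s standard library happens to include XML-RPC support, so we can peek at the wire format such services exchange; the method name and arguments below are invented:

```python
import xmlrpc.client

# What a calling peer sends: a method call serialized as XML,
# typically carried over HTTP on the port the Web already uses.
request = xmlrpc.client.dumps((3, 4), methodname="sample.add")
print("<methodCall>" in request)  # prints True

# What the receiving peer recovers from that XML.
params, method = xmlrpc.client.loads(request)
print(method, params)  # prints "sample.add (3, 4)"
```

Because both sides agree on this simple XML envelope, it makes no difference whether the responder is a big central server or another desktop peer, which is precisely why Web services suit both the client/server and the peer-to-peer worlds.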

But the computer field will probably find new protocols that are more efficient and flexible than the Web for peer-to-peer. I have already mentioned Sun’s JXTA, and other standards are also being scrutinized.

Security

It could not take long in this speech before I had to face the problem of securing and authorizing access to peer-to-peer systems. I put other things first—identification, resource discovery, delivering services—because there is nothing to secure if those problems are not solved, but security is just as fundamental and should be solved just as early if you want it to work right.

There are two problems commonly found in peer-to-peer systems. First, computer systems can become overloaded with requests or with data. This is called a denial-of-service attack, and is a well-known problem in conventional as well as peer-to-peer systems. The second problem in peer-to-peer is that you don’t know who’s sending you data and whether it’s good data or corrupt. These problems of authentication and integrity are also rampant in conventional systems, and the solution is the same in all cases: encrypted data wrapped with digital signatures. The centrality of encryption to everyday Internet commerce and communications has repeatedly caused governments to pull back from threats to limit or take control of encryption.
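The verify-before-trust idea behind signed data can be sketched with the standard library; note that real peer-to-peer systems use public-key digital signatures, while this simplified stand-in uses an HMAC with a shared secret, and the secret and message are invented:

```python
import hashlib
import hmac

SECRET = b"key exchanged out of band"  # invented for the example

def seal(data: bytes) -> bytes:
    """Attach an authentication tag to outgoing data."""
    return hmac.new(SECRET, data, hashlib.sha256).digest() + data

def open_sealed(blob: bytes) -> bytes:
    """Verify the tag; reject the data if it was tampered with."""
    tag, data = blob[:32], blob[32:]
    expected = hmac.new(SECRET, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data failed authentication")
    return data

blob = seal(b"latest draft of the report")
print(open_sealed(blob))  # verifies, then returns the original bytes
try:
    open_sealed(blob[:-1] + b"X")  # a peer altered one byte in transit
except ValueError:
    print("tampered copy rejected")
```

The recipient never acts on the data until the tag checks out, which is the integrity guarantee described above; swapping the HMAC for a public-key signature adds the authentication half, since only the holder of the private key could have produced the tag.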

In addition, anyone running a peer-to-peer application is essentially running a server that interacts with other systems. If such a server has any software flaws, these could be exploited by malicious peers to breach security on the host system. So far the risk is merely theoretical; no security flaws in peer-to-peer applications have been reported. But if these applications become popular, such flaws will appear someday. In short, peer-to-peer is no less secure than centralized systems; it simply has more ambitious goals.

Authentication is worth spending some time on in this talk, because peer-to-peer actually offers a new approach that may untangle an old problem. The traditional Internet has been trying to solve the authentication problem for decades, by which I mean: Internet developers want to create a worldwide security system that lets you negotiate a contract for millions of dollars, digitally sign a contract, and feel that you’re on just as solid ground legally as if you’d sat in a lawyer’s penthouse and signed papers with the president of the partnering corporation. Well, bottom line is: we don’t have that system. And the path pursued by the current security community isn’t getting us there.

Authentication does exist on the Web. You see it when you start to download a plug-in or an update to your software and you see a dialog box saying, “This software has been digitally signed by Microsoft.” Fine, we may trust Microsoft to update its own software, but how do we know the dialog box is telling the truth? Who says that the signature belongs to Microsoft? We trust the dialog box because our browser checked with a certificate authority. And the biggest certificate authority by far—the Microsoft of the security world, and the editor’s choice from Network Computing magazine—is a company called VeriSign. All our browsers trust VeriSign, and if VeriSign tells us Microsoft signed something, we can trust we’re downloading software provided by Microsoft.

Well, in January 2001 VeriSign shamefacedly revealed that some anonymous correspondent had tricked it. VeriSign gave this random individual a certificate proving he or she was Microsoft. To quote the world’s leading computer security expert, Bruce Schneier, “This is a big deal.” (Bruce Schneier has been warning us for years about all sorts of weaknesses in this kind of authentication system, which is called public key infrastructure or PKI. I’ll call it the grand centralized solution—“grand centralized” because using it is like purchasing diamonds in Grand Central Station, the enormous, bustling train depot in the middle of New York City.)

The certificate is good through January of 2002. That means that during this time, we cannot trust any dialog box from the pre-eminent company in certificates, VeriSign, telling us that we can trust the pre-eminent company in software, Microsoft. Oh, we can click on a Details box and check a 32-digit hexadecimal number or the date on the certificate, but few computer users are trained to do that.

What does this whole sad affair tell us about online authentication? What happened to my million-dollar contract with the president of the company? When can I shut up my brick-and-mortar shop and live within the virtual economy?

Let no one complain to me that peer-to-peer systems are weak on security. The truth is that online security, authentication, and general trust on the Internet are big, unsolved problems. And I think the solution used by a number of peer-to-peer systems is superior to the grand centralized solution. These peer-to-peer systems depend on a Web of Trust, which is not to be confused with the World Wide Web.

A Web of Trust system means that you establish a relationship with someone in real life, or through some channel you trust. For instance, you and I may have a conversation after this talk in which each of us comes to accept that the other is honest and telling the truth about himself or herself. We can then exchange business cards or diskettes or scraps of paper that have secret key codes on them, and use these keys for future encrypted communication. If you trust me, I can introduce you to a colleague from O’Reilly & Associates or someone else I trust, and then you can trust him or her too. That’s how a Web of Trust is formed, and I have heard that it’s very similar to how people meet each other in Japan. It is traditional for one person to introduce two people he knows, and after that the two people start to build their own trusting relationship.
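The introduction mechanism can be sketched as a small trust graph; the names and hop limit are invented, and real Web of Trust implementations such as PGP attach cryptographic keys and signatures to these links rather than bare names:

```python
# Trust starts with direct relationships and extends through introducers.
direct_trust = {
    "andy": {"colleague"},        # met in person, exchanged keys
    "colleague": {"translator"},  # the colleague vouches for others
    "translator": set(),
}

def trusts(a, b, max_hops=2):
    """Does `a` trust `b`, directly or via at most `max_hops` introductions?"""
    frontier, seen = {a}, set()
    for _ in range(max_hops):
        frontier = set().union(
            *(direct_trust.get(n, set()) for n in frontier)) - seen
        if b in frontier:
            return True
        seen |= frontier
    return False

print(trusts("andy", "colleague"))   # prints True: direct relationship
print(trusts("andy", "translator"))  # prints True: one introduction away
print(trusts("translator", "andy"))  # prints False: not automatic in reverse
```

Note the deliberate asymmetry and the hop limit: trust flows only along explicit introductions, and confidence reasonably weakens as the chain grows longer.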

Peer-to-peer systems still have to track and manage long-term reputations, an interesting problem on which many researchers are working. Reputation remains difficult to establish on the Internet, where you usually have to form judgements after listening to reports from dozens of people whom you don’t know. To make the problem worse, individuals can do bad things on the Internet and then disappear, leaving behind the problems they’ve caused and popping up somewhere else. How will peer-to-peer systems deal with their Alberto Fujimoris? I will be anxious to see further research and experimentation on the topic of security.

Metadata

It is time to move on, away from security and to the next problem in peer-to-peer. This problem is providing metadata: information that describes the content you’re interested in.

Suppose you are looking for news articles on the Web. Usually you search for a particular subject—that’s one type of metadata. But if you know of a particularly good journalist, you might search by author—another type of metadata. You could also search by date, in order to find recent and relevant articles, or by length—because you don’t want something too superficial or too detailed—or even by ratings assigned by other readers. All these types of search items are metadata.

Metadata is the one area in peer-to-peer where everyone I’ve talked to, in every company, agrees on how to solve the problem: they all encode their data using XML. Isn’t that wonderful? With all the chaotic experimentation in peer-to-peer, there’s something standard that you can count on everyone using!

Actually, standardizing on XML doesn’t get us very far. It means we can all use the same programs to take apart and look at data; that’s a real achievement. But we still don’t have a particular XML tag that marks a document’s author. And even if we agree on a tag—we could just call it “author,” for instance—we haven’t agreed on how to list multiple authors in the document. And even after we solve that problem, we haven’t created an overarching structure that places the author tag in its proper place within other document metadata. Or solved a bunch of other problems with authorship that I won’t bother to list here.

In short, each discipline that wants to share data has to standardize on what to say; XML is just a format for saying it. Schemas are one solution for specifying data inside the document; they play an important role in Microsoft’s Hailstorm (now called .NET My Services), for instance. The Resource Description Framework, or RDF, is a solution for specifying metadata. Various RDF schemas exist for defining different types of metadata; a team has even solved the author problem through something called the Dublin Core Metadata Initiative. But it will take a long time for these to be recognized as standards, and even longer to find convenient ways to insert metadata. You can’t leave such a boring and unrewarding activity up to the user.
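To make this concrete, here is what author metadata looks like once a schema (the Dublin Core dc:creator element, in this case) settles the tag question; the sample document itself is invented:

```python
import xml.etree.ElementTree as ET

# An invented document carrying Dublin Core metadata alongside its body.
doc = """<article xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:creator>Andy Oram</dc:creator>
  <dc:creator>A Second Author</dc:creator>
  <dc:date>2001-11-15</dc:date>
  <body>The talk itself goes here.</body>
</article>"""

root = ET.fromstring(doc)
ns = {"dc": "http://purl.org/dc/elements/1.1/"}
# Because the tag is standardized, any XML tool can extract the authors.
authors = [e.text for e in root.findall("dc:creator", ns)]
print(authors)                       # prints ['Andy Oram', 'A Second Author']
print(root.find("dc:date", ns).text)  # prints "2001-11-15"
```

Notice that even this tidy example quietly settles the multiple-author question by repeating the element, which is exactly the kind of convention each discipline has to agree on before searches across documents become reliable.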

If we can standardize on metadata, we can perform searches, calculations, negotiations, and all sorts of other nifty activities over a range of media from the Web to instant messaging.

Firewall issues

A few more issues deserve brief mentions. One is firewalls. They tend to block anything unanticipated by network administrators, so firewalls will have to be upgraded to accommodate peer-to-peer systems. Otherwise, those peer-to-peer systems will have to be crippled to fit through the firewall.

Bandwidth issues

Lots of people, reading about the strain Napster placed on universities and other networks, expect bandwidth to be a peer-to-peer problem. It’s actually not a major issue, in my opinion, because most peer-to-peer systems are not bandwidth-intensive. If people like transferring large files—and if they are not legally prevented from doing so—you have to provide more bandwidth regardless of whether you use a peer-to-peer system or another type of transfer. Still, I’m happy to hear that Japanese access providers—KDDI, Yahoo Japan, NEC, and Sony—have lowered prices for ADSL, because that will encourage more experimentation with new services. If you can anticipate high bandwidth needs, some peer-to-peer systems actually alleviate the demand for bandwidth. I’ve mentioned AllCast as an example.

Organizational problems to be solved in peer-to-peer

That’s my list of interesting technical problems. But if you think a peer-to-peer system may have something to offer your company, you must not limit your briefings and preparations to technical staff—you have much bigger issues ahead of you. I will take just a moment to talk about the problems managers and organizations may face when peer-to-peer systems arrive.

Losing control over what is on the systems within the organization

First, you may start to ask what is going on within your organization’s computers. Take the case of Georgia State University in the U.S. Its administration was not happy when one of its system administrators installed a well-known grid computing system. Not only was the administrator disciplined by the university; he was arrested and prosecuted.

While absurd and draconian, this response reflected a legitimate concern: how can an organization provide security and manage its resources if it doesn’t know what’s running on its computers? Organizations must clarify their policies about what files and programs can be loaded. But most vendors of commercial peer-to-peer systems are very conscious of their customers’ worries and offer solutions that fit well with corporate policies and security systems.

In reality, most people have never known what was running on their computers. Early DOS systems loaded programs on a terminate-and-stay-resident (TSR) basis. On Windows, you might see all kinds of little icons on your taskbar representing things that you don’t need and that just take up CPU time. The Internet allows even worse abuses. For instance, hardly any users of the popular RealNetworks streaming media software know that it is monitoring their downloads and maintaining a database on them. Such hidden monitoring programs are rife on the Internet, including sometimes on peer-to-peer systems.

As I mentioned before, many file-sharing systems cause files to travel freely between computers. Nobody knows what computers are hosting each file. This has legal ramifications. Many current laws regarding copyright, hate speech, or other offensive content make two unstated and perhaps unrecognized assumptions: that the location of data can be determined, and that a system’s owner is responsible for the content loaded on his or her system. With peer-to-peer, neither assumption is viable.

Losing control over communications among employees

In a secure collaboration system like Groove, nobody can view data but the invited participants. At the same time, the value of the system lies in blink-of-the-eye communications that go on in real time, far too fast for employees to check their statements with supervisors. Can large organizations accustomed to top-down management tolerate giving their employees so much responsibility, especially when the communications involve people in other companies? Collaboration systems must not only overcome trepidation in the IT department about peer-to-peer systems in general. They also force a company to address cultural issues, such as its willingness to share information with partners and suppliers. This culture change can slow down the adoption of these technologies.

Possible business impacts of peer-to-peer

Perhaps I’ve scared you. That is unavoidable. We have to anticipate problems as soon as possible so we can prepare for them. My predictions are just speculations, of course. They may or may not come to pass. And that caveat applies even more to the business impacts I’m going to talk about next.

Improving collaboration at the lowest levels between individuals in different corporations

I already mentioned that individuals in different departments of a company, and in different companies as well, will learn to work closely together. The lowering of organizational barriers could lead to devolution within large companies. Business units may seek more formal independence or even move their work from other business units of the company to outside vendors and partners.

Transparency in regard to customers

You’ve probably heard of data mining, which means combining enormous heaps of data and extracting some general statistics. Marketing experts love the idea that a company can track a customer’s behavior through data mining. What if peer-to-peer reversed the formula? Suppose customers could easily track companies’ behavior? Reputation would be measurable and transparent, somewhat as it is now on the eBay auction service but hopefully in a more robust manner.

Intensifying competition

Transparency intensifies competition. Imagine that a customer could compare all prices for a commodity in a few seconds through a single query. That could happen in the distributed database systems I described earlier. They might lead to a radical intensification of business competition. All companies could conceivably be forced to compete fiercely on prices, warranties, and any other measures provided in the system. Of course, companies differ on quality of service and other things unrelated to these measures, so we have to develop systems that preserve such wet or soft information. Since I am speculating shamelessly, I’ll suggest a few possible outcomes to this movement.

  1. All services might split off from sales. Everything physical would become a commodity, and if you wanted service you would purchase it separately.

  2. Companies might develop “web of trust” relationships with customers. People would find vendors by listening to the recommendations of friends and colleagues.

  3. Companies might represent qualities besides price in measurable ways, and promulgate reputation systems.
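The single-query price comparison imagined above can be sketched as a fan-out query to every peer. Everything here is hypothetical—the vendor names, quotes, and lookup interface are stand-ins for whatever a real distributed database system would provide:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical peers: each exposes a lookup function that returns its
# price for a commodity, or None if it doesn't sell the item.
PEERS = {
    "vendor-a": lambda item: {"widget": 9.50}.get(item),
    "vendor-b": lambda item: {"widget": 8.75}.get(item),
    "vendor-c": lambda item: {"widget": 9.10}.get(item),
}

def compare_prices(item):
    # Fan the query out to every peer at once, then gather the quotes.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(lookup, item)
                   for name, lookup in PEERS.items()}
    quotes = {name: f.result() for name, f in futures.items()
              if f.result() is not None}
    # Return vendors sorted cheapest-first.
    return sorted(quotes.items(), key=lambda kv: kv[1])

print(compare_prices("widget")[0])  # cheapest vendor first
```

Notice what the sketch leaves out: quality of service, warranties, and the “wet” information mentioned above have no column here, which is exactly why price-only transparency would intensify competition so sharply.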

Some social values

I want to finish on a positive tone. Peer-to-peer systems are so varied that any social commentary becomes risky. But I find that they tend to have a certain personality—an approach to social relations that I find salutary.

Individuals and community

Peer-to-peer brings new importance to end-users. It makes them actors as well as spectators. The value of the individual is enhanced, because many systems emphasize each person’s unique contributions. In OpenCola Folders, for instance, the files you like become significant to others, and the fact that you like them also becomes significant.

However, the value of individual differences does not lead to rampant individualism. Instead, peer-to-peer raises the value of community. There is no such thing as a single peer. The value of a peer grows together with its ties to all the other peers on the system; thus, the individual gains value insofar as he or she contributes to a community and reaps the community’s benefits.

Where does the value lie in information?

Because the value of the individual lies in his or her relation to the community, peer-to-peer changes the value of information itself.

Large publishers in books, music, movies, and software are currently saying that the best way to increase the amount of information in the world is to place more control on it. That’s an illogical claim on its face. Yet these companies are pushing stringent content controls in the form of technical systems, backed up by laws that punish attempts to circumvent these systems.

The security of a control regime works like this: I, the content provider, know everything and decide everything. I can say who will read my work, and whether they read it once, twice or many times. I can determine when they accessed my site down to the hour, the minute, the second. An actual e-book license that was recently publicized allowed ten hours to read the book before more payment was required.

In peer-to-peer, the security issues normally raised are quite different. How much computing time or disk space should I offer to other people? Shall I let that person into a chat? Do I trust that computer system to offer the right data?

Peer-to-peer undermines content controls philosophically, and often in its architecture. The notion of peer-to-peer assumes that the content provider is not the only one who knows the best way to use the content; it assumes that other participants may have creative ways to use content too. In peer-to-peer, content comes from many locations and takes on a life independent of the people who originally introduced it into the system. Therefore, such systems remind us that collaboration and elaboration are where much new value emerges. To express one’s individual feelings or make a creative contribution within a community of appreciative peers—that seems to me the best possible computing environment. And that works best under peer-to-peer.


Author’s home page