The vision driving this article is a fervent belief among a far-flung set of researchers, software vendors, and system administrators: when people bring parts of their identities online, they can use the Internet more effectively. Commerce sites can recognize them, participants in forums can accept what they have to say, and people who share their interests can cluster more tightly with them.
The popular Facebook social network provides a simple example of the advantages of having an online identity. If you post a photo on Facebook, you can tag each face in it with the identity of your Facebook friend. The friends are notified that someone has posted a photo, other people can identify the subjects of the photo, and you can do a search for other photos of your friends. All this is within Facebook, but numerous systems such as LID are trying to make such capabilities universal. Microsoft (having learned from its mistakes pushing Passport) hopes to institutionalize a more commercial form of identity through its new CardSpace framework.
Because I care intensely about online identity myself, I was excited to attend the Identity Mashup conference at Harvard Law School's Berkman Center, one in a series of identity conferences held there. Coming out of a technology space into this legal space was a bit of a culture shock for me, because when lawyers consider things--to speak very broadly--they look at how things can hurt people. While I might make an initial categorization of identity systems along social lines, such as:
...or along technical lines, such as:
...in contrast, lawyers categorize them as:
Fortunately, the far-thinking Berkman Center can encompass all these different categorizations at once. The conference turned out to be a wonderful mash-up of legal, technical, social, and business aspects of online identity. The value of organizing such a conference became most apparent on the third day, when the formal sessions that attracted some two hundred attendees came to an end, but over fifty people from all sorts of disciplines came to a kind of unconference with no preset agenda.
A report on such a conference fits well into my style, which is a mash-up in its own right. I'll take quotes from different times and places and cite them as if one person were answering another. That reflects, to me, the reality of a conference as a pool where everyone's stream empties and where ideas float and drift imperceptibly. In articles such as this one, I also mix my opinions freely with what I hear on the conference floor. That saves me the draining effort that many journalists put into a pretense of maintaining objectivity.
The outline of this article is:
In everyday speech, I talk of my identity as my soul, my essence, the totality of everything I believe most important about myself. Of course, this holistic identity has nothing to do with online identity.
Or does it?
Do we really feel satisfied with the compartmentalized identity offered to us by well-meaning privacy advocates? An approach to identity where, for instance:
I join a work-related forum sharing nothing but my academic credentials and career achievements.
I join a political forum sharing some of my beliefs and practical skills, but nothing about my spiritual life or personal needs.
I visit a commercial site sharing absolutely nothing at all--here I am just a name with a credit card and a shipping address.
How many of us want to deny so stringently so many parts of ourselves, so much of the time? Isn't this privacy ideal just a parody of the online "consumers without agendas" that mass marketers would like us to be? Esther Dyson, a prominent consultant and journalist, asked, "Do we want privacy or do we want attention?"
We go online not only because we have something to buy, but because we have something to say. This doesn't mean we sneer at privacy; sometimes we desperately need its protection in order to be able to say something. However, the holistic and compartmentalized aspects of identity have to exist together. Journalist David Berlind gave the notion of identity the broadest possible meaning when he said, "Everything you put on the Internet is an expression of your identity."
The value of the holistic approach came to me forcefully during the most astonishing session of the conference, a break-out session titled "Human Hybrids: Creating a Global Identity". Derrick Ashong, a political activist and facilitator for the arts, organized this session to show that it is not enough to bring Internet access to underdeveloped areas; one must simultaneously give people a chance to express themselves. Otherwise, they are prey to the images thrust on them from outside, and cannot healthily combine the parts of their identities formed online with the more traditional identities that come from their communities. Ashong calls this combination a hybrid identity, and the tools we give people could make its emergence either positive or negative.
Ashong saw his personal history-he grew up in a home without running water, eventually making it into a Ph.D. program at Harvard and onto the boards of political organizations-as an example of the resulting risk. First, a lot of people in developing areas who are lucky enough to get training end up leaving their areas instead of furthering their neighbors' development. Even if they don't leave physically, they risk a greater and greater disconnect from their culture and peers.
Technology can also be a tremendous boost to development. According to Ashong, one reason for that is that Internet access can show people a bigger world that puts immediate, impoverished environments into a more optimistic context. The point of a digital identity is "so we can hear them too, so they can contribute to the global good."
To demonstrate the positive use of technology and the Internet, Ashong patched in an audio and video feed of a teacher named Marvin Hall from Kingston, Jamaica. Hall started a Lego robotics project among school children and eventually brought a team to San Jose, California, where they won an award. It made them feel good to accomplish something in the midst of a neighborhood that Hall described as dominated by hopelessness and violence. His next project is to start a robotics learning lab for children aged 8 through 14--ages just before and during the greatest risk of being recruited by gangs.
The whole session, which was attended by only about 20 people, revolved around questions of how to help people who are outside the world of material privilege and instant information. There was considerable mistrust of the One Laptop Per Child program (which had been celebrated with great excitement at a recent conference in Brazil, as reported in my weblog on Brazil's Free Software Forum). Participants felt that it was a gigantic intervention into isolated areas with no guarantee of support and guidance to deal with the social changes it might cause. I pointed to the decades-long efforts of Dave Hughes as evidence that networking can be brought to remote areas with sensitivity to culture and local needs.
To crown the discussion, Ashong said, "Beyond a digital identity, we are creating a global and human identity."
Some privacy purists would like each exchange of goods to be isolated, with the seller knowing nothing about you except your ability to pay. In fact, digital cash was a hot research area in the 1990s. Such systems would give you digital tokens you could provide to a vendor without leaving behind any trail of the purchase; the seller would know only what bank to go to for payment.
Privacy expert Stefan Brands, who did much of the research on digital cash, pointed out that its potential collapsed with the advent of PayPal, which provides a quick solution for online purchasing but is careless about privacy and other common problems, such as phishing.
Some people like to share information with companies in order to speed up transactions and get advice. Such information--called preferences by the identity community, who go along with vendors in portraying the consumer as being in control--could range from your favorite colors to your geographic location or even what medical conditions you suffer from. All these "preferences" can help vendors display for you the most appropriate goods or services.
Think, for a minute, how much businesses would love to see your calendar. A restaurant, for instance, would love to know that you've scheduled an appointment in the same office building for 1:00 in the afternoon.
I'm sure Google has thought about it for more than a minute. Their privacy policy treats your calendar like all their other services, such as Gmail. However, they currently serve no ads on the calendar. You are free to share your calendar with others; it will be interesting to see whether anyone is doing targeted searches on people's public calendar information.
You're not about to enter your calendar information for the benefit of businesses, but you might enter it for your own benefit and then be willing to share it. This is perhaps why ten million people have created models of their bodies or homes with My Virtual Model, which offered some demos at the conference.
Where your body is concerned, My Virtual Model acts like the most recent version of the Sims game. You can stretch and shove an image to look like you, try on clothing virtually, and then make a purchase. Debbie Pazlar (Best Buy), Paul Trevithick (Parity Communications), and Gregory Saumier-Finch (My Virtual Model) collaborated at the conference on a futuristic demo that showed how a user could store preferences, including a model kitchen, and then remodel the kitchen using 3D graphics and appliances from Best Buy.
Louise Guay, president and founder of My Virtual Model, said some businesses were afraid of it at first--so afraid that they would get angry when she proposed that they sign up. These businesses recognized that My Virtual Model removed friction from the process of comparing sites and products, and thus would make it harder for them to own their customers.
For the same reasons, according to business analyst John Sviokla, My Virtual Model can promote more purchases. He went so far as to call it "an enormous wealth creation mechanism." And journalist Doc Searls, who has promoted identity technology for years, believes it could replace the advertising industry with an "intention economy" that puts online users in control of their own attention and lets them find things that interest them.
I want to come down from the airy heights of holistic identity to consider the benefits of acting a bit like turtles in the pond, who come out only on a need-to-know basis.
Why does the Veterans Administration have to know our social security numbers (and then let them slip into the hands of criminals from a stolen laptop)? Why can't the VA just know whether we've served in the armed forces?
As Christine Varney, former commissioner at the Federal Trade Commission, pointed out, we have acclimated over the past couple centuries to having different identities "in the city and on the farm". Digital technology alters the equation but still provides options.
Technology can theoretically provide strong online identity while distributing risk. You and your correspondents would choose which authority you trust to authenticate you. You could further store sensitive data in other repositories, and encrypt it so it is never known to anyone but you and the person you reveal it to. Even the authority storing the repository wouldn't have to know the data. To meet a criterion known as minimal disclosure, data exchanges could be fashioned in such a way as to answer the question "Are you over 21?" with "yes" or "no" in an authenticated way without providing your exact age.
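To make minimal disclosure a little more concrete, here is a minimal sketch in Python. The shared key, the assert_over_21/verify pair, and the claim format are all invented for the illustration rather than drawn from any real identity protocol; the point is only that the authority that knows my birthdate releases an authenticated boolean, never the date itself.

```python
import hmac, hashlib, json
from datetime import date

# Hypothetical minimal-disclosure sketch: the authority that knows my
# birthdate answers a yes/no question and authenticates only that answer,
# so the relying party never sees the date itself.  The shared key stands
# in for whatever real signature scheme an identity provider would use.
AUTHORITY_KEY = b"key-shared-with-the-relying-party"   # assumed for the sketch

def assert_over_21(birthdate: date, today: date) -> dict:
    age = (today - birthdate).days // 365               # a rough age is enough here
    claim = json.dumps({"over_21": age >= 21})           # only the boolean leaves the authority
    tag = hmac.new(AUTHORITY_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify(assertion: dict) -> bool:
    expected = hmac.new(AUTHORITY_KEY, assertion["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["tag"])

assertion = assert_over_21(date(1980, 5, 1), date(2006, 6, 20))
print(verify(assertion), json.loads(assertion["claim"]))   # True {'over_21': True}
```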
John S. Bliss of IBM touted a system at the conference that could solve the current air flight impasse between the United States and the European Union. The United States would like to see all the information airlines have on passengers in order to compare the passengers to watch lists (which are none too accurate). The European Union insists that the request violates many privacy tenets, including minimal disclosure and preventing unauthorized reuse. IBM's solution would hide the information on both sides by mathematically reducing it to meaningless but identifiable numbers (a process called hashing). Conference attendees questioned the application of the solution, but the idea demonstrated one form of compartmentalization.
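The matching idea behind such a system can be sketched in a few lines of Python. This is only an illustration of hashing both lists and comparing the digests; a production system would normalize names carefully and use keyed hashes to resist guessing, and the names below are of course invented.

```python
import hashlib

# Rough sketch of the matching idea: each side reduces its names to hashes,
# and only the hashes are compared, so neither side hands over its raw list.
def digest(name: str) -> str:
    return hashlib.sha256(name.strip().lower().encode()).hexdigest()

watch_list = {digest(n) for n in ["Jane Roe", "John Doe"]}        # held by one side
passengers = {digest(n) for n in ["Alice Example", "John Doe"]}   # held by the other

matches = watch_list & passengers
print(len(matches), "possible match(es)")   # 1 possible match(es)
```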
The bugaboo that unites the identity and privacy communities is the threat of a single identifier. In many countries, a national ID or social security number plays that role, and it's open to abuse in countless ways. These include forgery, identity theft, intrusive government tracking, and the kind of catastrophic information leaks we've seen in recent cases of stolen data.
Nobody who has studied the situation likes a single, centralized ID--not even Jeremy Warren, the deputy Chief Technology Officer at the U.S. Department of Justice. Yet we may be unable to avoid such IDs. Governments seem convinced they solve law enforcement problems, and don't do nuance when the experts try to explain the difficulties of the systems.
Compartmentalized identities don't necessarily make it safe to store data. Data mining can still put together a composite portrait of you from many sources, and governments can force different repositories to relinquish information.
Warren thought the Department of Justice would have no problem with distributed identity information--and even with having reasonable barriers to gathering such information, so it could be done only when necessary--but expressed some anxiety about the possibility that the information could be stored outside the U.S., so that international cooperation would be required to trace routine crimes.
The kinds of governments most people at the conference worried about were well-known repressive regimes (journalist Rebecca MacKinnon warned us to look out for China), but few countries can be considered exempt. Marc Rotenberg, director of the noted Electronic Privacy Information Center advocacy organization, said that the U.S. government can force a resident to give up financial data in some circumstances. Caspar Bowden, Chief Privacy Adviser of Microsoft Europe, pointed out that even Britain's official privacy regulator, the Information Commissioner, had described the UK as "sleepwalking into a Surveillance Society".
Berkman Center professor William McGeveran listed three risks of putting identity information online:
At least in the Western world, we tend to get very possessive about our identities, and talk incessantly at conferences like these about how we want control over our own identities. That's a useful starting point for discussing the preservation of privacy, but it overlooks something key along the way. Your identity--in terms of how you represent yourself in communities, in commerce, or in public forums--is not your own. It's an index into some institution, one that you share with the people to whom you're giving your identity.
Because this idea is so abstract, consider it with a non-virtual example. Suppose I come to your small town and say, "I'm the mayor's brother; will you put me up for the night?" You're not likely to accept my claim at face value; I have not established my identity.
The situation is completely different if the mayor, or your friends, or some other authority in the town, says, "He's the mayor's brother; will you put him up for the night?" Your response will now depend on how much you like the mayor, or whether he approved your recent application for a construction permit--things that the identity researchers like to call attributes of the mayor. My own identity is settled, however, because someone you invest with authority has verified it.
This principle turns up in the most everyday online activities. The identification andyo means nothing, but andyo@oreilly.com is useful because the mail server at oreilly.com knows how to send mail to me using the andyo label. And this authority extends far beyond a single mail server at oreilly.com, even though it's the only server that needs to know me; successful mail delivery requires a whole network of servers connected by Internet protocols.
The fundamentals of computing respect the principle of authority over identities. Suppose I go to an online store, and I'm their 73,267th customer. Their database may automatically assign a unique identifier of 73,267 to me. And that identifier may never be seen outside the code accessing the database, but it is my identity so far as the store is concerned. Or as Brands said at the conference, "the notion of a record [in a database at some agency or company] is what we call identity".
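Brands' point can be seen in a few lines of hypothetical store code. The table and column names here are made up for the example, but the pattern is universal: the database assigns me a number, and that number is my identity as far as the store is concerned.

```python
import sqlite3

# As far as the store's software is concerned, "I" am nothing more than
# the row this INSERT creates.  Table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
cur = conn.execute("INSERT INTO customers (email) VALUES (?)", ("andyo@oreilly.com",))
print("To this store, my identity is customer number", cur.lastrowid)
```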
We can layer all sorts of powerful features and describe all manner of personal attributes in an identity, and develop ever more sophisticated protocols for exchanging the data securely, but all identities come down ultimately to the authorities we entrust with them. This means that identity management is not really the management of individual identities, but the management of institutions we trust.
As you tussle with the policy issues around online identity, keep one idea in mind: your identity is an entry in the database of the authority that authenticates you. Feel better? Whether you do or not, at least you will be guided down the right paths in making policy.
As I will explain, the identity development community has come together around some strategies to move power from the authorities to the individual applicants for identity; this is called user-centric identity.
Once we recognize that managing identity means managing authority, we can understand the source of so many policy debates. Some of the themes in this section build on a pair of earlier articles of mine: From P2P to Web Services: Addressing and Coordination and From P2P to Web Services: Trust.
In the financial world, there are controversies over the accuracy and fairness of credit ratings that credit companies check. Online, we raise similar complaints when online sites collect personal information as a prerequisite for signing us up. Some researchers suggest that large numbers of users submit false information, as an underhanded protest.
These controversies raise a cluster of questions about authority: what do authorities have the right to demand of the people to whom they give identities? Just because we have all granted the right to maintain identity to a particular authority, should that authority be 100% in control? Should the authority make all the rules at its sole discretion?
In the 1990s, several large computer companies--including Microsoft, IBM, and Sun--sought the Holy Grail of single sign-on. This technology allows a user to enter a password just once and then surf seamlessly from one site to another without having to go through the annoyance of re-entering the password. Single sign-on required a federated security model (letting one site validate a user and provide information about that user to another site), and therefore heightened needs for trust.
Single sign-on is more than a convenience to encourage users to visit more sites. It could provide an important boost to online security, because it uses forms of digital signatures that are more secure than simple password verification. It could also enforce good security practices, eliminating the two common problems that plague password systems: users choosing passwords that are easy to break, and users reusing the same password for multiple sites.
Single sign-on was technically feasible, but ran into real-world problems around trust and liability. If I let your site validate my users, how can I make sure you maintain at least as good security as I do--that you keep your Internet hosts well patched, and validate user-submitted information for fraud, and screen your employees for criminal backgrounds, and so forth?
The next step, therefore, after the heady stage of the creation of federated standards, was an enormous investment of corporate time in setting up standards for security and verification. Then, of course, institutions had to administer those standards. Then came more protocols (on top of the federated security protocols just released) to transfer information about trust and liability.
In short, the computer industry dealt with the issue of authority by trying to formalize authority in standards, institutions, and protocols. The system has made very limited headway.
Our credit ratings are a function of the companies that maintain the ratings; were the companies to go out of business and lose the expertise needed to maintain their databases, we'd lose our credit ratings. The same goes for online identities; they persist only so long as the institutions that offer them.
I don't really believe Equifax will go away (without some other responsible authority taking over its databases), so a more pertinent worry is that the government will take ownership of data that companies have promised--or at least, users have assumed--would be confidential. This fear reflects the reality that our online identities are owned by the authorities that grant them. We also fear that companies will mine our data and use it for purposes we haven't authorized.
I said earlier that somebody always has a choke point over my identity because a record of me has to be stored somewhere. Technical measures can minimize the control exercised by this choke point, but only by introducing yet another choke point.
The identity, privacy, and reputation communities have built enormously complex technical systems to support their goals, but the roots of the systems can be fairly simply described. They go back to the invention of public-key cryptography and digital signatures.
Public-key cryptography is a historic 1970s-era mathematical breakthrough that lets me encrypt data with one key and publicize a second key for others to decrypt the data. Because no one else knows the first key, my use of the key is like a signature, identifying me as the data's owner. Just as you can't trust someone who phones you and says, "I'm calling from your bank," you can't trust a signed email from me unless you have independent confirmation that the signature belongs to the editor from O'Reilly.
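As a rough sketch of the sign-and-verify step, here is a few lines of Python using the third-party "cryptography" package, which is just one convenient choice rather than anything mandated by the systems discussed here. Note what the code does and does not prove: the mathematics confirms that the signature matches the public key, but nothing tells you the key really belongs to me.

```python
# Sketch of the sign-and-verify step (pip install cryptography).  The math
# guarantees that the signature matches the public key; nothing here tells
# you that the public key really belongs to me -- that assurance has to
# come from somewhere else.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # known only to me
public_key = private_key.public_key()        # published for everyone

message = b"Please reset the password on my account."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises if message or signature was altered
    print("Signature checks out -- but is this really Andy's key?")
except InvalidSignature:
    print("Signature does not match")
```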
There are several ways, varying in security and convenience, to prove who I am. I could publish a public key on a well-known web site such as oreilly.com, which one hopes would not get compromised. I could share my key with friends and with friends of friends, in the hope that eventually one would be friends with you (the web of trust). The security community prefers a system where you and I go through some major organization we both know. This organization, like a bank that gives me checks and where you can take a check to cash, is a trusted third party.
Federated security systems can be wonderfully flexible and fine-grained. I may be able to submit a message to a bank with several parts, one signed by a trusted third party to prove I'm really me (that's an identity broker), and another signed by another trusted third party to prove I've paid up my mortgage. All these trusted third parties are choke points that raise the same privacy, trust, and persistence questions as the authorities I discussed earlier.
People have dignity, and sometimes even good sense. While they are sometimes wantonly careless about their privacy, they seem to have a feeling for the risks of releasing control over their identities online. They've voted with their mouse buttons and rejected services such as Microsoft's notorious Passport, which seem to want a finger in every transaction.
We are thus at a propitious moment where the major vendors have come together with the researchers around a seemingly viable plan that would preserve users' control over data. They can't violate the basic physical facts about authorities and their dominant role in identities that I laid out in the previous section. There are two roads to user-centric identity: technical measures that hide data from anyone who doesn't need it, and contracts that bind the parties to agreed-on uses of the data.
Technical experts tend to like technical solutions, because they consider them harder to overpower. Both are necessary. Contracts are necessary so the two sides can agree on what the technical measures accomplish, and as a fallback when technical measures fail.
The idea of hiding data may seem complex, but imagine going to a conference where you are entitled to certain goods--a copy of the proceedings, say, or a couple drinks at the reception. When you check in, somebody gives you one red ticket that entitles you to a copy of the proceedings, and two blue tickets that entitle you to free drinks. The person giving you the proceedings or drinks doesn't have to know your name or credentials. Nobody ever has to find out whether you picked up the proceedings or had any drinks.
The analogy is a loose one (and there are still ways in the ticket system for data about you to leak out), but online software systems are much more strict and mathematically rigorous in protecting privacy.
At the other extreme from the technical view I'm presenting here, identity is also viewable as a function of the community you're in. It's other people who accept or reject you, and who determine your reputation. Law professor Kim Taipale said, "Identity is a duality". Because other people demand to know things about you so they can decide how much to trust you, he called identity "a mechanism for managing risk," and continued, "We could claim that society owns your reputation more than you do." Professor Beth Noveck has spun out identity's source in community in legal and historical detail.
Perhaps we should start the exploration of identity technology by looking at how things stand for most Internet denizens now. Ironically, most of us are profoundly deluded.
When we go online to a forum on some topic that interests us, nobody knows us from Adam. We feel anonymous, and we possibly share personal information on that basis.
In fact, identifying us is pretty easy. It's just that nobody bothers to try, unless a record company decides to make an example of us for uploading MP3 files or the Chinese government decides to call us in for questioning about some posts containing the word "democracy". Consider that:
Our identity situation is the worst of both worlds: people with bad intentions can find our data, but we are isolated from the people with whom we'd like to form communities. This once again raises the tension between holistic identity and compartmentalized identity.
Because anyone with a warrant--or just an easy-going relationship with an ISP, as the NSA apparently has--can trace you through your IP address, true privacy depends on hiding even that tiny bit of identifying information. Protection from tracing your location, along with protection from traffic analysis (which can identify the parties to conversations by such measures as checking when packets are sent and received on various routers), is performed by onion routers, which are collections of cooperating machines that bounce messages around until the nefarious traffic analyzer gets dizzy. The term "onion router" comes from the practice of encrypting each message and wrapping it in another encrypted message, layer upon layer, in order to get it through an arbitrary set of systems.
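Here is a toy sketch of that layered wrapping, again leaning on the Python "cryptography" package purely for illustration. Real onion routers such as Tor use different primitives and wrap routing instructions into each layer as well; the point of the sketch is only that each relay peels off one layer and no single relay sees the whole picture.

```python
# Toy illustration of the layered "onion" wrapping.  Three imaginary relays
# each hold one key; the sender applies the layers in reverse order, and
# each relay strips exactly one layer before passing the rest along.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # one key per imaginary relay

message = b"meet me at the usual forum"
for key in reversed(relay_keys):          # wrap for the last relay first
    message = Fernet(key).encrypt(message)

for key in relay_keys:                    # each relay peels its own layer
    message = Fernet(key).decrypt(message)

print(message)   # b'meet me at the usual forum'
```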
Privacy researcher Roger Dingledine came to the conference to introduce onion routing and promote the most prominent project among its current generation, Tor. He anticipated audience reaction by asking, and then answering, the question of whether onion routers facilitate crime. The answer is that criminals already know how to hide their tracks through prodigious efforts. Tor is needed by people with a legitimate need for privacy, whether Navy personnel (the U.S. Navy is one of the project's sponsors) or companies trying to keep competitors from finding out which customers their sales force is contacting.
An example of how someone determined to stay in hiding can succeed for a long time appears, by coincidence, in the most recent Atlantic Monthly (July/August 2006). A cheerleader for al-Zarqawi's Iraqi insurgency posted terror training videos and other propaganda anonymously for years, despite coordinated efforts on several continents to track him down. I'm not sure that what he did would be illegal in the United States, but it certainly was in the United Kingdom, where he was finally located.
I've already explained that identity systems favor trusted third parties. There are plenty of other examples of trusted third-party systems in actual use. For instance, many sites tie together different user directories and application servers through Kerberos, a version of which has now been adopted by Microsoft. And the certificates used to sign secure web sites depend on trusted third parties called certificate authorities. Unfortunately, most web users are aware of these certificates only because the system breaks down so often. Either the browser fails to keep up with changes in certificate authorities, or the server lets its certificate become invalid in some way.
Identity systems bring down a ton of logistical and liability problems on themselves when they adopt the third-party solution. Yet the competition for identity systems is intense. To help the various vendors and open-source solutions work together, the Berkman Center has sponsored a project called Higgins.
At the conference, the Higgins designers unveiled a purchasing system with the Interra Project, which directs a percentage of each purchase to a non-profit cause. I was impressed with this demo because they're really putting their money where their mouths are. Anything that distributes funds, no matter how small, had better be secure.
Many types of middleware place (usually unanticipated) constraints on the systems they promise to tie together. The identity space is constantly being reconsidered and will get banged on a lot more by innovators before they feel the problems are solved, so middleware in this space must emphatically avoid such constraints.
Higgins, according to technical lead Paul Trevithick, was carefully designed to leave things open for innovation. It does this in the usual way adopted by standards: by providing fill-in-the-blank protocols and leaving it up to application providers to specify what they want. "If the bank calls some field a Surname and the vendor calls it a Last Name," Trevithick told me, "it's up to them to work it out--as much as some of them would like us to do it for them."
Users who come in contact with Higgins will do so through its interfaces for creating accounts and authorizing the sharing of information, which the developers created in the hope that all sites could offer a common experience. Everyone agrees that identity systems will take off only if they're fun and easy to use.
It's also widely accepted that the single sign-on systems mentioned earlier, with their complex Web Services protocols and design-by-committee deployment scenarios, will be niche applications for quite a while. However, developer Casper Biering of the Danish identity firm Netamia told me the Danish government has just adopted SAML, one of the major federated protocols, for the exchange of identity information among government agencies. This is an example of a niche that could grow.
I spoke by phone with Andre Durand, CEO of Ping Identity, which is one of the most important firms offering single sign-on systems and other federated identity applications. He says that the market so far has been largely focused on business-to-business communications, but that the broader market opportunity for identity will take off in the next couple years as end-users become more aware of its existence through efforts such as CardSpace and Higgins. He cites two recent achievements as reasons for optimism.
First--as many people at the conference have said--the vendors and large firms interested in identity have come to agree that in order to get their systems adopted, the end-user must be factored into the equation. Up to now, Durand says, the conversation has been largely fixated on only two of three crucial parties: the service provider (such as an online store or bank) and the identity provider. But now the third--and probably most important--party to this three-way dance is being introduced: the end-user.
Second, the standards have matured and simultaneously become a lot less complex. Microsoft will use the WS-* specifications, many of which have been moved for ratification by the OASIS consortium, and other vendors will use SAML, which includes contributions from the Liberty Alliance. Vendors will help bridge the discrepancies in protocols by providing products that speak both specifications or bridge their functionality. Durand also says that CardSpace and Higgins will provide a common open-source foundation from which these and other, yet-to-be-invented identity systems can interoperate.
If most individuals and companies are not to be bothered with federated, third-party systems walled in behind complex protocols, how will identity systems spread? Who will validate identity?
Kim Cameron, one of the leading identities in the identity field and an architect at Microsoft, thinks the field can flourish without third-party validation. "Currently, 99.9% of all identity information online is self-asserted," he points out. In other words, we are already forming communities and exchanging information that matters to us with people whom we know only from what they tell us in email, web pages, or other forums. Why can't we continue this way, just making things a little easier through standards?
Perhaps a grassroots movement will make sxip, LID, or one of the other low-overhead contenders for identity into the next cool plaything, but few people know such systems exist--and they satisfy only a small portion of the field's needs.
Durand insists that at least for now, SAML and WS-Federation are here to stay, especially SAML tokens. "There's an opening for loosely-coupled social networking sites (the blogosphere, gaming sites, and so forth) to leverage the lighter-weight systems. But the bulk of our most important interactions are still between individuals and businesses, and businesses need the robustness of the federated systems. Many firms such as Ping Identity are putting in a lot of work to make these more mature identity systems easier to acquire, integrate, and use: they're open sourcing pieces of the infrastructure, building LAMP stack versions of SAML, and putting extremely light-weight interfaces such as REST in front of them. I believe projects that span both the enterprise use cases and the end-user (customer-facing) use cases have the best chance for long-term success. CardSpace and Higgins meet these criteria."
Identity and reputation exist in tandem; there's not much point to one without the other. Reputation seems to pay off. Robin Harper, VP of Linden Labs, the providers of the popular Second Life virtual world, says that trust reduces risk and therefore impels people to new behaviors. Reputation researcher Kevin McCabe says that people behave better when they know they're being rated, even if most people don't bother to check the reputations.
Reputation is a monster of a problem that makes identity exchange seem trivial by comparison. Collecting reputation information is tedious, and trusting it is perilous.
Reputation on eBay seems to do the rudimentary job of winnowing out incompetent vendors, but we have to remember that it has the backing of the much more time-tested credit card system. I have a lot more trouble seeing the point of reputation systems in forums where their function is less concrete, such as LinkedIn and Orkut.
If communities try to work together to build individuals' reputation, they immediately run into thorns:
I can't put this article aside without airing some of the most pessimistic fears voiced at the Mash-up by some of its best-informed participants, such as Stefan Brands. Brands admitted to near despair sometimes, because we could easily move into a society where RFIDs are embedded in our bodies and every move is tracked. "I'm afraid that, despite all our best efforts, our technical solutions may drive us into totalitarianism." There were many responses that tried to assuage this fear, but no one could banish it.
Perhaps our best hope was cited by Berkman Center fellow Mary Rundle, who said that we must maintain multiple sources of power that can constrain each other, so that "power cannot be used to amass more power."