BOTTOM LINE

Frontier Justice

Internet brands of justice--like dredging up dirt on online "offenders" and arbitrarily canceling messages--need to be rethought.

by Joel Snyder

When you have no basis for an argument, abuse the plaintiff. --Cicero

The Internet glories in its anarchy. It bristles at any attempt by authority to regulate it. Netizens call this "self-regulation," touting the Net as the first truly open forum the world has ever seen. Often cited is John Gilmore's quote, "The Net interprets censorship as damage and routes around it." Unfortunately, this lawless freedom is as illusory and deceptive as platform shoes. The much-vaunted self-regulation of the Internet can quickly turn into vigilante justice.

Take the strange case of Marty Rimm and his cyberporn study. You probably remember the Time magazine cover story that purported to tell the world about the glut of pornography easily available online. Rimm's study focused on local adult bulletin-board systems, rather than the Internet itself, with enough Internet references thrown in to paint a deceptive picture of the amount of porn online and its whereabouts. Because Rimm's study was stamped with the imprimatur of Carnegie Mellon University, it carried a great deal of weight. Many readers, Time's editors included, were misled by Rimm's study and concluded that the Internet was overrun with porn. Rimm's conclusions became pivotal in Washington debates on the need for Internet censorship.

It soon came to light that Rimm's study had some serious flaws and that many of its most shocking conclusions weren't supported by the data Rimm cited. Academics familiar with the dry business of research began to release their own reports, revealing the inadequacies and misconceptions in Rimm's study. It was, in a word, discredited. If the topic of cyberporn was a bit unusual for an academic research project, the post-partum peer review was standard operating procedure. (Academics constantly criticize each other's work. This is what keeps them honest, open, and thorough.)

But this critical academic analysis of the study apparently wasn't enough: Some members of the Internet community felt that Rimm's study would be used to damage the Internet and saw a need to retaliate. It was not sufficient that Rimm's academic credibility had been reduced to the level of a late-night infomercial. They were out for revenge.

And so it started: Rimm himself became the object of a research project aimed at collecting as much dirty laundry on him as possible. With each new discovery came postings to Web pages, Usenet newsgroups, and mailing lists.

Rimm was the man of the hour, publicly and personally humiliated for... what? Punishment? Revenge? I'm not sure. But Rimm has been investigated as thoroughly as a new Supreme Court justice, and with predictably unflattering results.

Guilt And Punishment
Rimm's case is a good example of vigilante justice because his intentions fall somewhere between the ignorant and the sensational. I'd have a much more difficult time criticizing the Internet's hostile reaction to Canter and Siegel, the infamous Green Card Lawyers whose massive Usenet ad campaign made them synonymous with the word spam.

But don't misinterpret my point. I'm not defending Rimm or his study. I am worried about the arbitrary and capricious nature of the self-regulation of the Internet.

Consider someone who doesn't quite make the cover of Time, yet who nevertheless makes a public statement of some sort via a newsgroup or mailing list. With a few clicks of a mouse, any Internet vigilante can twist a misinterpreted action or quote taken out of context into a personal propaganda campaign.

Make your statement of objection eloquently enough and you can turn anyone into a sinister figure, a threat to the existence of free speech, a menace who should be attacked ad hominem and hounded from the electronic airwaves. The Internet acts as judge, jury, and executioner. There are no rules of evidence and no carefully controlled courtroom proceedings. The accused has no right to a fair trial.

Equal Time Argument
The self-regulation of the Internet is proposed as a defense against traditional government regulation. An argument used against Internet censorship is that the Net offers unique opportunities for different sides of an issue to be examined fairly.

For example, when Stanford University considered banning pornographic newsgroups, some of the opposition to the ban came from campus women's organizations. They felt the most effective way to fight pornography was to shout just as loudly and often about attitudes towards women and the effects of pornography on them.

Similarly, people have defended the right of "hate groups" to publish Web pages carrying their message for the same reasons: Only when their points are given full exposure in the light of day can intelligent readers clearly evaluate them. These free speech arguments make sense to me in the context of such large issues where there are many parties involved.

But these arguments hold no sway with the vigilantes who have persecuted intellectual opponents with zealous furor. A single person doesn't have the resources to make a fair defense against a dedicated personal attack from many sources. And such persecutions have the advantage of pandering to many people who tend to believe the worst. After all, if people seriously thought that Taco Bell could appropriate the Liberty Bell for its corporate icon on April Fools' Day, it would be no stretch of the imagination to believe any number of false and malicious claims.

If, for example, some vigilante were to take this column as a threat to something he holds dear, he could select quotes (out of context), make up a few "facts" about my background and prior crimes, and have a personal vendetta spread via a few dozen Usenet newsgroups and a similar number of mailing lists between lunch and supper. And it could take me hundreds of hours to correct the wrong--if I ever could--because things like this have a way of popping up weeks and months later.

Ruled by Fear
In the lexicon of free-speech debate, "self-censorship" is a common term. It refers to an environment, usually created by badly crafted laws, in which people are unsure about what is and what is not allowed. To avoid potential punishment, they take the most conservative approach. An apt analogy is that of a bookseller who refuses to carry any book, say Catcher in the Rye, that might possibly upset the local constabulary. The presence of technologically violent vigilantes on the Internet inspires an atmosphere of self-censorship and prior restraint. The Internet is not quite the haven of free speech it is made out to be.

Fortunately, the worst instances of this type of behavior are still uncommon. There are lesser vigilantes, though, popping up all over. Consider the problem of unsolicited advertising on Usenet newsgroups.

Some enterprising programmers have built filters that automatically detect and classify mass postings of the same message and issue cancel messages. Cancelbots (short for "cancel robots") act calmly, dispassionately, and automatically, chasing down and deleting the messages deemed offensive merely on the basis of their number.
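
To make the mechanism concrete, here is a minimal sketch of that kind of filter: fingerprint each article body, count the copies, and flag any group of substantially identical postings that crosses a threshold. The fingerprinting scheme, the threshold, and the article fields are illustrative assumptions of mine, not the rules of any actual cancelbot.

    import hashlib
    from collections import defaultdict

    # Illustrative threshold: how many near-identical copies an article
    # may have before the whole batch is flagged for cancellation.
    COPY_THRESHOLD = 20

    def body_fingerprint(body):
        # Collapse whitespace and case so trivially reworded copies of the
        # same advertisement still collide on the same hash.
        normalized = " ".join(body.lower().split())
        return hashlib.md5(normalized.encode("utf-8")).hexdigest()

    def find_mass_postings(articles):
        # `articles` is a list of dicts with "message_id" and "body" keys,
        # a simplification of real Usenet articles.
        groups = defaultdict(list)
        for article in articles:
            groups[body_fingerprint(article["body"])].append(article["message_id"])
        # Keep only the groups large enough to count as a mass posting.
        return {fp: ids for fp, ids in groups.items() if len(ids) >= COPY_THRESHOLD}

    # A real cancelbot would go on to issue a "cancel" control message for
    # each flagged message ID; that step is deliberately omitted here.

Note that nothing in such a filter looks at what a message says--only at how many copies of it exist, which is exactly the basis on which the cancelbots deem a posting offensive.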

As someone who would rather not read advertisements on how to make money fast in newsgroups devoted to computer operating systems, I applaud such activities. But as an observer of the Internet, I'm concerned at the actions of these self-appointed Net cops who are able to set the criteria and bounds of acceptable behavior on the Internet.

The operators of the cancelbots defend themselves by building elaborate schemes that would let a news administrator "ignore" their cancels, thus keeping the articles intact. But this is a false defense: It requires explicit, detailed, technical, and time-consuming action on the part of Internet users and network administrators to keep up with the latest standards. The amount of work needed to avoid censorship by default is enormous.

No Apologies

Operators of cancelbots take pride in the absolutely mechanical nature of the cancels. This, they claim, is the only way out of the "slippery slope" arguments against any kind of regulation. They lay out the rules--elaborate formulae for deciding whether something falls outside the bounds of reasonable behavior--and they cancel, without apology, messages that may be on topic, interesting, and no threat to the public good.
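
For the curious, one formula of this sort that circulated among cancel operators, the Breidbart Index, scores a batch of copies of the same article by summing the square roots of the number of newsgroups each copy was crossposted to; a batch whose score exceeds a fixed cutoff is treated as cancelable spam. The sketch below shows only the arithmetic, with a threshold of 20 as a commonly cited figure; the rest is my own illustration rather than any operator's actual code.

    from math import sqrt

    def breidbart_index(crosspost_counts):
        # Score for a set of copies of the same article: the sum, over each
        # copy posted, of the square root of the number of newsgroups that
        # copy was crossposted to.
        return sum(sqrt(n) for n in crosspost_counts)

    # Example: ten copies, each crossposted to nine newsgroups, score
    # 10 * sqrt(9) = 30, which exceeds the commonly cited threshold of 20.
    if breidbart_index([9] * 10) > 20:
        print("outside the bounds by this rule; candidate for cancellation")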

By and large, cancelbots continue to exist because most people don't know they are there, and those who do would rather have one innocent message deleted than wade through 100 off-topic advertisements.

When the Internet's regulation is as dispassionate and mechanical as the cancelbots, we may find it convenient to accept a small group's ability to censor newsgroups. However, when enacting justice crosses the line into revenge, harassment, and persecution, we may be horrified. There are no easy answers to the questions posed by Internet self-regulation. In fact, there may be no good answers at all.

If this topic interests you, get a copy of Steven Miller's Civilizing Cyberspace (Addison-Wesley, 1996, ISBN 0-201-84760-4). Miller's treatment of some of the hottest Internet issues is well-written and thought provoking. *IW*


Internet World magazine Vol. 7 No. 7, (c) 1996 Mecklermedia Corporation. All rights reserved.

http://www.iworld.com/