MattK
5/12/15
LSTU E-120
Assignment 5
Does the Hammer Ring True?
Assessing the Effectiveness of John Scalzi’s Mallet of Loving Correction
Introduction
In 1996, as a part of his famous “Declaration of the Independence of Cyberspace,” John
Perry Barlow wrote, “We are creating a world where anyone, anywhere may express his or her
beliefs, no matter how singular, without fear of being coerced into silence or conformity.”1
Although this vision of a radically free future remains a significant hope for the Internet, the state
of online expression and its freedom from coercion continue to be significant sources of doubt
for Internet users. According to a recent survey by the Pew Research Center, fully “73% of adult
internet users have seen someone be harassed in some way online and 40% have personally
experienced it…”2 Such harassment includes offensive name-calling, purposeful embarrassment, physical threats, stalking, and sexual harassment.3 These staggering statistics indicate
that there is still significant progress to be made towards Barlow’s world of free expression.4
Fostering an online space free of harassment has proven to be a complex and difficult
issue. Codified laws against harassment are often “unfairly and ineffectively” enforced,
according to the Electronic Frontier Foundation.5 To make matters worse, “laws that don’t
carefully delineate between harassment and protected speech can end up snaring protected
speech while failing to limit the behavior of the harassers.”6 This said, critics of government efforts find little better argument on the private side: “Companies’ primary focus is on revenue and legal safety. Many would be happy to sacrifice free expression if it became too expensive.”7 Finally, the fact that the private companies operating the major online forums for discussion and expression (Facebook, Twitter, Google, etc.) are under no sweeping legal obligation to protect the speech of their users signals trouble to many free speech advocates.

1. John Perry Barlow, “A Declaration of the Independence of Cyberspace,” Electronic Frontier Foundation, February 8, 1996, https://projects.eff.org/~barlow/Declaration-Final.html.
2. Maeve Duggan, “Online Harassment,” Pew Research Center, October 22, 2014, http://www.pewinternet.org/2014/10/22/online-harassment/.
3. Ibid.
4. The present author acknowledges that Barlow’s declaration was primarily directed towards governments and corporations. Much could be written on the checks on free expression carried out by these two sources, but the current paper focuses on interpersonal harassment.
5. Nadia Kayyali and Danny O’Brien, “Facing the Challenge of Online Harassment,” Electronic Frontier Foundation, January 8, 2015, https://www.eff.org/deeplinks/2015/01/facing-challenge-online-harassment.
Two years after Barlow’s Declaration, John Scalzi began his blog, “Whatever.”8 I
contend that this site may serve as an example of an online community that creates a space for
discussion and debate but does not tolerate harassment. Because the blog is highly trafficked, one would expect its comment threads to be particularly susceptible to online vitriol—a challenge faced by many such blogs, especially those whose writers hold strong opinions. Whatever, this paper
contends, may serve as a possible step forward. In what follows, I will investigate and assess the
effectiveness of Scalzi’s “Mallet of Loving Correction,” his characterization of his moderating
authority and power on the site. I will begin by describing his website and comment policy. I will
utilize Lawrence Lessig’s four forces of regulation to make a few observations about his
moderation process: chiefly, that the primary regulating powers on Whatever are code-based and
rule-based apparatuses. Next, I will take a recent, contentious post and thread on Whatever
and assess the kinds of disagreement, interactions, and evidence of moderation present. I will
create a metric to categorize these comments for a quantitative perspective on the blog’s
effectiveness. Finally, I will conclude by commenting upon Scalzi’s methods, their results, and
the lessons that could be more broadly applied from this case.
6. Ibid.
7. Ibid.
8. John Scalzi, “Whatever,” http://whatever.scalzi.com/.
Whatever: Blog Overview and Comment Policies
John Scalzi’s blog, Whatever, is “one of the most visited personal blogs on the Web” and
is written by an avid science fiction writer and reader.9 Its short essays and pitches for other
authors’ printed work are usually published daily and span a variety of topics that go beyond
fantasy to writing, the Internet, pop culture, and technology. Scalzi’s blog emerged as a platform
where he could express his ideas in between book projects, but became an online space for users
to comment and debate. Well aware that online discourse can be toxic, abusive, and useless,
Scalzi has no qualms about removing such content from his comment threads if it stands in the
way of healthy discourse. Scalzi is proud of the “general high quality of the comment threads on
the site”10 and credits his comment policy (the Mallet of Loving Correction) with success.
The mechanics of the Mallet can be found on the blog’s “Site Disclaimer and Comment
Policy” page. Scalzi announces prominently that “you have no right to free speech on this site.
This is my personal site, and I am not the United States government. I reserve the right to edit all
comments, and to moderate all comment threads, as I see fit.”11 From the outset, Scalzi
unblushingly notifies visitors that the site is his own property. In a post linked to the comment
policy, Scalzi writes:
I am a private citizen of the United States and this blog is understood to be my private
property. I am no more obliged to let any person exercise their First Amendment rights
on it than I am obliged to allow people to assemble on my lawn against my will. When
people come onto my site, they are obliged to play by my rules (and keep off my lawn!).12

9. John Scalzi, “Station Identification: Whatever,” Whatever, January 1, 2012, http://whatever.scalzi.com/2012/01/01/station-identification-whatever/.
10. John Scalzi, “Site Disclaimer and Comment Policy,” Whatever, http://whatever.scalzi.com/about/site-disclaimer-and-comment-policy/.
11. Ibid.
Though his website contains many publicly accessible fora, Scalzi is legally under no obligation
to protect speech. For the vast majority of day-to-day interactions on his website, he is the
highest authority. While most Americans typically think of online debate as an extension of their right to free speech, in “the US, companies generally have the legal right to choose to host, or not to host, online speech at their discretion.”13 As a privately owned site, Whatever indeed
hosts online speech, but with a number of conditions.
In the comment policy,14 Scalzi lays out the rules for participation. He warns that phobic
or unrelated posts, personal attacks or threats, or potential copyright infringements are
particularly likely to be Malleted. In order to post, a user must provide an email address (which
is not published) and a name (which can be an alias), or a valid Facebook, Twitter, or WordPress
identity. Posts are not manually moderated before appearing online, but may automatically be
screened if they contain three or more links, certain “black-listed word[s] or phrase[s],”15 or were
posted on a thread more than thirty days old. All other moderation is in real-time. Scalzi
participates in forums and may instruct users by posting in the thread that they are veering off-topic or that a specific conversation should end. He indicates that if a user does not comply, he
may delete his/her posts, put future posts into a moderation queue (where they would be
reviewed before being posted), or even ban the user. Otherwise, Scalzi claims, he does not delete
posts. As I will describe in more detail later, he seems to conduct these processes visibly, but he
does not claim that all moderation is conducted publicly or that every edit is visible. Finally, Scalzi opens and closes comments based upon when he is able to moderate. For example, he closes comment threads while he sleeps.

12. John Scalzi, “Another Entry in the Annals of ‘People Who Haven’t the Slightest Idea What They’re Saying,’” Whatever, July 25, 2008, http://whatever.scalzi.com/2008/07/25/another-entry-in-the-annals-of-people-who-havent-the-slightest-idea-what-theyre-saying/.
13. Nadia Kayyali and Danny O’Brien, “Facing the Challenge of Online Harassment.”
14. John Scalzi, “Site Disclaimer and Comment Policy.”
15. Ibid. Scalzi does not indicate further what these words or phrases may be.
Compared to many other online fora, Scalzi’s moderation is highly involved and posts are
subject to rigorous regulation. While posts appear without direct approval, most are likely
directly reviewed. The moderator’s intervention is personal and conversational on the one hand,
but unapologetic and conclusive on the other. Utilizing Lessig’s model16 can provide a
framework through which to understand the forces at work on Whatever. The website has no
fees, and thus the only economic constraint is accessing the Internet in the first place. Norms
certainly play a role as well; the generally civil threads could be argued to provide a comfortable
and welcoming place for discussion. These observations made, the most articulated and
developed of the forces on Whatever are the code and the rules (or, as Lessig would call them,
laws).
Code, Lessig tells us, is the “software and hardware that … constitute a set of constraints
on how you can behave,”17 which forces users down a number of paths that confine activity and,
for Scalzi, enhance accountability. That users must provide an email address or link their
comments to an already-existing social media profile means their posts are tied to concrete online identities. Users may be anonymous, but as Scalzi can prevent a certain email
address or profile from posting again, they can be held accountable for their actions. This
structure also means Scalzi has direct access to at least one personal identifier. Certain automatic
filters also prevent users from posting “black-listed” words or phrases, multiple links, or on
threads more than a month old. Posts that are filtered for these reasons are not publicly visible.
The last explicit code-based regulation is Scalzi’s opening and closing of threads. If a thread is closed, all attempted posts enter a moderation queue and are published when Scalzi re-opens the thread. These code-based regulations create user accountability, automatically screen out undesirable content, and allow all content to be moderated. Some of these processes, however, are not publicly visible.

16. Lawrence Lessig, “Code 2.0 Chapter 2: What Things Regulate,” Social Text, accessed April 20, 2015, https://www.socialtext.net/codev2/what_things_regulate.
17. Ibid.
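To make these code-based constraints concrete, the sketch below models a hypothetical pre-moderation filter of the general kind the comment policy describes. It is a minimal illustration under stated assumptions: the function name, thresholds, and blacklist contents are my own placeholders, since the blog’s actual filtering configuration and blacklist are not public.

```python
from datetime import datetime, timedelta

# Illustrative values only; Scalzi does not publish his actual blacklist.
BLACKLISTED_TERMS = {"example-slur", "example-spam-phrase"}
MAX_THREAD_AGE = timedelta(days=30)

def screen_comment(body, author_email, thread_opened, thread_is_open, now=None):
    """Return 'publish', 'queue', or 'reject' for a submitted comment.

    Mirrors the constraints described in the comment policy: an identity is
    required, closed or stale threads send comments to a moderation queue, and
    comments with blacklisted terms or three or more links are held for review.
    """
    now = now or datetime.now()

    # Code-based accountability: some identity (email or linked account) is required.
    if not author_email:
        return "reject"

    # Threads Scalzi has closed (e.g., while he sleeps) queue everything.
    if not thread_is_open:
        return "queue"

    # Threads older than thirty days are automatically screened.
    if now - thread_opened > MAX_THREAD_AGE:
        return "queue"

    # Three or more links, or any blacklisted word or phrase, triggers screening.
    lowered = body.lower()
    if lowered.count("http") >= 3 or any(term in lowered for term in BLACKLISTED_TERMS):
        return "queue"

    # Everything else appears immediately and is moderated after the fact.
    return "publish"
```

In practice, a blogging platform would enforce these constraints through its built-in discussion settings and plugins rather than custom code; the point of the sketch is only that each constraint is applied by architecture before any human judgment enters.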
Regulation utilizing rules is the second major category of Scalzi’s comment policy.
While Lessig’s analysis confines this category to established law, I contend that it can safely be extended to the policies or rules a website’s proprietor enforces. In this case, the rules of Scalzi’s comment policy are very much threats of “ex post facto sanction[s].”18 In the policy
itself, he writes:
Your comment is more likely to be edited, moderated or deleted if it contains phobic
content (based on race, sex, sexual orientation, nationality, religion, etc), is a personal
attack or threat toward another commenter, is entirely unrelated to the entry topic,
features more than a “fair use” amount of someone else’s copyrighted work, has such
poor grammar and spelling that it annoys me, is an obvious piece of trollage, or if I find it
or you obnoxious and decide I’ve had enough.19
These processes differ from code, as they aren’t enforced by infrastructure. Scalzi, the authority,
indicates what counts as inappropriate behavior and the consequences of behaving
inappropriately (Scalzi also, of course, being the arbiter of transgression and punishment). While
Scalzi flags a number of specific areas that are subject to moderation (phobic content, attacks,
unrelated or copyrighted material), he also opens up his scrutiny to exclusively subjective areas
(trollage or whether the user is “obnoxious”). If a post enters any of the mentioned areas, he will
warn the writer in the thread itself. Subsequent posts that disregard Scalzi’s warning may be deleted or edited. Such actions also appear in the thread (e.g., “[Deleted because I’ve already said this conversation should be tabled]”20). These rule-based regulations set boundaries on acceptable discourse and describe the consequences for bad behavior.

18. Ibid.
19. John Scalzi, “Site Disclaimer and Comment Policy.”
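The contrast with the code-based constraints can be made explicit with a second sketch: here the sanctions are applied after the fact by a human moderator rather than by architecture. The comment policy describes warning, editing or deleting, queuing, and banning as options at Scalzi’s discretion; the ordered ladder and names below are my own illustrative arrangement, not his stated procedure.

```python
from enum import Enum, auto

class Sanction(Enum):
    WARN_IN_THREAD = auto()    # a visible note asking the commenter to stop
    EDIT_OR_DELETE = auto()    # the offending comment is edited or removed
    MODERATION_QUEUE = auto()  # future comments are held for review before posting
    BAN = auto()               # the identity can no longer post

# Presented as an escalation ladder for illustration: each repeated
# offense moves the commenter one rung up.
ESCALATION = [Sanction.WARN_IN_THREAD, Sanction.EDIT_OR_DELETE,
              Sanction.MODERATION_QUEUE, Sanction.BAN]

def next_sanction(prior_offenses: int) -> Sanction:
    """Pick the sanction for a commenter who has already offended prior_offenses times."""
    return ESCALATION[min(prior_offenses, len(ESCALATION) - 1)]
```

The design point is that none of these steps is automated: unlike the filter above, every rung depends on the proprietor’s judgment about what counts as a transgression.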
Primarily through code- and rule-based methods of regulation, Scalzi moderates the
threads of his posts in order for conversation to be fruitful and civil, even when interlocutors
disagree. Although neither of these two primary methods can be said to be democratically
decided by the community, they nonetheless are the most influential forces at play in the
Whatever online community. The architecture creates a certain amount of accountability and
visibility of users such that Scalzi can quickly take care of bad actors, or close down forums
when he will be unable to consistently moderate. The rules establish a code of conduct that
Scalzi manually arbitrates. By and large, there does not seem to be visible and broad dissent
against this policy. As will be seen in the assessment below, Scalzi’s enforcement of these rules
begins as a conversation (indicating that users should stop what they’re doing) before he begins
editing or deleting comments. The greatest extent of resistance I was able to observe was the
sarcasm of a user whose comments were checked by Scalzi (see the assessment). While these
interactions are typically conducted publicly and I was unable to turn up any evidence that there
are frequent private dealings, it is still worth noting that there is no structural guarantee that
Scalzi conducts all moderation publicly. It would be possible for users to submit comments and
have them never appear on the website without explanation. This fact observed, there is no legal obligation for the moderation process to be fully disclosed.
20. John Scalzi et al., “A Note About the Hugo Nominations This Year,” Whatever, April 4, 2015, accessed April 18, 2015, http://whatever.scalzi.com/2015/04/04/a-note-about-the-hugo-nominations-this-year/.
Empirical Assessment
In order to measure the effectiveness of Scalzi’s Mallet of Loving Correction, I took a
recent contentious post and thread from Whatever and assessed the comments. For the post, I
chose “A Note About the Hugo Nominations This Year,”21 which was published on April 4th and
generated 235 comments. The post consists of Scalzi’s reflections on the controversial 2015
Hugo awards (a group of annual science-fiction/fantasy literary awards). I chose this particular
post because it was recent, it generated a good deal of conversation due to the controversial nature of the topic, and its thread would be a prime online location for disagreement and potential harassment.
In order to assess the comments, I first divided all responses into two groups: negative
and other. I defined negative as: a) interpersonal (directed specifically at another person), and b)
expressing a disagreement, correction, or any kind of response negatively reacting to the other
person (including teasing or mean-spirited remarks). This division was intended to isolate all
comments that may include harassment. Then, within the negative comments, I created three
inclusive categories (categories that may overlap): 1) meaningful response (a response that
engages the arguments of the other person), 2) contains ad hominem elements (a response that
includes attacks on the person’s character or motives, separate from a response to the person’s
stated view), and 3) includes expletives. Together, these categories provide a means to tag comments that could be considered legitimate conversation and a means to flag comments that could be considered attacks or harassment.
Of the 235 comments, 102 or 43% were negative. Of the negative comments, 86% were
meaningful responses (88 comments), 11% included ad hominem elements (11 comments),
and 7% included expletives (7 comments).
21. Ibid.
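As a check on the arithmetic, the short sketch below reproduces these percentages from the raw counts. The coding scheme (the per-comment flags and their names) is my own illustrative reconstruction of the metric described above, not a formal instrument used in the assessment.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    negative: bool            # interpersonal and negatively reacting to another commenter
    meaningful: bool = False  # engages the other person's arguments
    ad_hominem: bool = False  # attacks character or motives rather than the stated view
    expletive: bool = False   # contains at least one expletive

def summarize(comments):
    """Return the negative share and the three overlapping subcategory shares (in %)."""
    negative = [c for c in comments if c.negative]
    n = len(negative)
    return {
        "negative": round(100 * n / len(comments)),
        "meaningful": round(100 * sum(c.meaningful for c in negative) / n),
        "ad_hominem": round(100 * sum(c.ad_hominem for c in negative) / n),
        "expletive": round(100 * sum(c.expletive for c in negative) / n),
    }

# With the counts reported above (235 comments, 102 negative, of which 88 were
# meaningful, 11 contained ad hominem elements, and 7 included expletives),
# the shares work out to roughly 43%, 86%, 11%, and 7%.
```

Because the three subcategories may overlap, their shares are reported against the 102 negative comments rather than against one another, which is why they need not sum to 100%.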
To accompany these statistics, it is worth noting that Scalzi warned participants three
times to cease speaking about three different topics after the argument began to circle. He
warned two users that their heated disagreement was getting too personal. He visibly deleted
three comments by one user, all alleged continuations of closed topics. This user was able to
continue participating later in the conversation. No other users protested visibly in his defense
(although the point could be made that the moderated user’s perspective was unpopular in the
thread). Scalzi closed the thread four times: three times to sleep, and a fourth time, on April 7th, because of anticipated travel. The thread has remained closed since.
The Mallet: A Model Moderator?
In comparison with the previously mentioned Pew Research Center survey, John Scalzi’s
moderation appears to be a step in the right direction. In the examined comment thread, there
were no efforts to purposely embarrass, no physical threats, and no sexual harassment, which were major categories of Pew’s analysis.22 Close to one in ten negative comments contained an expletive or an element of an ad hominem argument, which also means that almost 90% of disagreements occurred without an expletive or an attack on a person’s character or motives. These statistical results seem to owe much to Scalzi’s moderation tactics. Personal
attacks and other kinds of harassment simply do not appear because of the always-present
moderator.
Blog writer Anil Dash’s entry titled “If Your Website’s Full of Assholes it’s Your Fault”
lays out five guidelines for running a website free from harassment:
• You should have real humans dedicated to monitoring and responding to your community…
• You should have community policies about what is and isn't acceptable behavior…
• Your site should have accountable identities…
• You should have the technology to easily identify and stop bad behaviors…
• You should make a budget that supports having a good community, or you should find another line of work…23

22. Maeve Duggan, “Online Harassment.”
Scalzi’s policies effectively demonstrate each of these guidelines. He makes a point of always
being present and closing comments when he may not be available. He clearly describes
behavior that counts as unacceptable. He prevents unaccountable posting by requiring an email
address or social media account. His moderation and filters make it easy to identify and stop bad behavior, and he can effectively monitor the community on his budget.
Although Scalzi is under no legal obligation to make his moderating processes public, it
is still worth raising the point that no checks on his editing exist, and all participants on his
website are ultimately beholden to his opinions, definitions, and practices. Although he makes
public his methods, they are not up for debate. Whatever’s forums create a space for civil
discussion, but it must be admitted that they are not democratically moderated. Another potential
drawback, at least when considering applying these methods on a larger scale, is the amount of
attention required for implementation. Scalzi actively opens and closes comments depending
upon when he is able to pay attention to them. For larger online forums like Facebook or
YouTube, employing moderators to play the role Scalzi does on his site would likely not be
economically viable, especially if those larger providers desired to continue to offer free access.
Despite these criticisms, Scalzi’s blog is a remarkable online space that allows for
legitimate disagreement and conversation around sensitive issues that is safeguarded against the
toxicity of so many online forums. Other websites with public channels could learn much from his efforts to reduce online harassment and threats. The accountability created by code-based regulation and the moderation practices set in motion by rule-based regulation combine into an effective set of forces.

23. Anil Dash, “If Your Website’s Full of Assholes It’s Your Fault,” Anil Dash, July 20, 2011, http://dashes.com/anil/2011/07/if-your-websites-full-of-assholes-its-your-fault.html.
Works Cited
Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation. February 8, 1996. https://projects.eff.org/~barlow/Declaration-Final.html.
Dash, Anil. “If Your Website’s Full of Assholes It’s Your Fault.” Anil Dash. July 20, 2011. http://dashes.com/anil/2011/07/if-your-websites-full-of-assholes-its-your-fault.html.
Duggan, Maeve. “Online Harassment.” Pew Research Center. October 22, 2014. http://www.pewinternet.org/2014/10/22/online-harassment/.
Kayyali, Nadia, and Danny O’Brien. “Facing the Challenge of Online Harassment.” Electronic Frontier Foundation. January 8, 2015. https://www.eff.org/deeplinks/2015/01/facing-challenge-online-harassment.
Lessig, Lawrence. “Code 2.0 Chapter 2: What Things Regulate.” Social Text. Accessed April 20, 2015. https://www.socialtext.net/codev2/what_things_regulate.
Scalzi, John. “Whatever.” http://whatever.scalzi.com/.
----, et al. “A Note About the Hugo Nominations This Year.” Whatever. April 4, 2015. Accessed April 18, 2015. http://whatever.scalzi.com/2015/04/04/a-note-about-the-hugo-nominations-this-year/.
----. “Another Entry in the Annals of ‘People Who Haven’t the Slightest Idea What They’re Saying’.” Whatever. July 25, 2008. http://whatever.scalzi.com/2008/07/25/another-entry-in-the-annals-of-people-who-havent-the-slightest-idea-what-theyre-saying/.
----. “Site Disclaimer and Comment Policy.” Whatever. http://whatever.scalzi.com/about/site-disclaimer-and-comment-policy/.
----. “Station Identification: Whatever.” Whatever. January 1, 2012. http://whatever.scalzi.com/2012/01/01/station-identification-whatever/.