Achieving collective intelligence via
large-scale argumentation
Mark Klein1
Let us define “collective intelligence” as the synergistic channeling of the
efforts of many minds towards identifying and coming to consensus over
responses to some complex challenge, i.e. as large-scale deliberation-for-action2. How well does current technology enable this? We can divide existing
deliberation support technologies into three categories: sharing tools, wherein
individuals compete to provide content of value to the wider community;
funneling tools, wherein group opinions are consolidated into an aggregate
judgment; and argumentation tools, wherein groups identify the space of issues,
options, and tradeoffs for a given challenge3.
By far the most commonly used technologies, including wikis, blogs, idea
markets, and discussion forums, fall into the sharing category. While such tools
have been remarkably successful at enabling a global explosion of idea and
knowledge sharing, they face serious shortcomings. One is poor signal-to-noise
ratios. Such tools, especially forums, are notorious for producing repetitive and
mixed-quality content. Sharing systems do not inherently encourage or enforce
any standards concerning what constitutes valid argumentation, so postings are
often bias- rather than evidence- or logic-based. Sharing systems are also
challenged when applied to controversial topics: they are all too easily hijacked
by a narrow set of “hot” issues or loud voices, leading to such phenomena as
forum “flame wars” and wiki “edit wars”. Sharing tools are thus ill-suited to
uncovering consensus.
1 Mark Klein is a Principal Research Scientist at the Center for Collective Intelligence, Massachusetts Institute of Technology. http://cci.mit.edu/klein/
2 Walton, D. N. and E. C. W. Krabbe (1995). Commitment in dialogue: Basic concepts of interpersonal reasoning. Albany, NY: State University of New York Press.
3 Moor, A. de and M. Aakhus (2006). "Argumentation Support: From Technologies to Tools." Communications of the ACM 49(3): 93.
Funneling technologies, which include group decision support systems,
prediction markets, and e-voting, have proven effective at aggregating
individual opinions into a consensus, but provide little or no support for
identifying what the alternatives to choose among should be, or what their pros
and cons are.
Argumentation tools fill this gap, by helping groups define networks of
issues (questions to be answered), options (alternative answers for a question),
and arguments (statements that support or detract from some other statement)4.
Figure 1. An example argument structure.
Such tools help make deliberations, even complex ones, more systematic
and complete. The central role of argument entities implicitly encourages the
users to express the evidence and logic in favor of the options they prefer. The
results are captured in a compact form that makes it easy to understand what
has been discussed to date and, if desired, add to it without needless
4 Kirschner, P. A., S. J. B. Shum and C. S. Carr, Eds. (2005). "Visualizing Argumentation: Software tools for collaborative and educational sense-making." Information Visualization 4: 59-60.
duplication, enabling synergy across group members as well as cumulativeness
across time.
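For concreteness, the issue/option/argument network described above can be sketched as a simple tree of typed posts. The class and field names below are illustrative only, not the schema of any particular argumentation tool:

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of an IBIS-style argument network: issues pose questions,
# options answer them, and arguments support ("pro") or detract from ("con")
# any other post. Names are illustrative, not a real tool's schema.

@dataclass
class Post:
    title: str
    author: str
    body: str = ""
    children: List["Post"] = field(default_factory=list)

@dataclass
class Issue(Post):            # a question to be answered
    pass

@dataclass
class Option(Post):           # an alternative answer to its parent issue
    pass

@dataclass
class Argument(Post):
    stance: str = "pro"       # "pro" or "con" with respect to the parent post

def add_child(parent: Post, child: Post) -> Post:
    """Attach a post beneath its parent (issue -> option -> argument ...)."""
    parent.children.append(child)
    return child

# Example: a tiny map for a hypothetical policy discussion.
root = Issue(title="How should we reduce transport emissions?", author="alice")
ev = add_child(root, Option(title="Subsidize electric vehicles", author="bob"))
add_child(ev, Argument(title="Cuts tailpipe CO2", author="carol", stance="pro"))
add_child(ev, Argument(title="Grid may still be coal-powered", author="dan", stance="con"))
```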
Current argumentation systems do face some important shortcomings,
however. A central problem has been ensuring that people enter their thinking
as argument structures – a time- and skill-intensive activity – when the benefits
thereof often accrue mainly to other people at some time in the future. Most
argumentation systems have addressed this challenge by being applied in
physically co-located meetings where a single facilitator captures the free-form
deliberations of the team members in the form of a commonly-viewable
argumentation map5. Argumentation systems have also been used, to a lesser
extent, to enable non-facilitated deliberations, over the Internet, with physically
distributed participants6,7. With only one exception that we know of8, however,
the scale of use has been small, with on the order of 10 participants working together on any given task, far fewer than what is implied by the vision
of collective intelligence introduced in this paper.
Towards Large-Scale Argumentation
We hypothesize that effective collective intelligence that transcends these
limitations can be achieved by creating large-scale argumentation systems, i.e.
systems that integrate sharing and argumentation technologies to enable the
systematic identification of solution ideas and tradeoffs on a large scale, and
then use funneling to help participants come to consensus about which of these
solution ideas should be implemented for a given problem. Creating such large-
5 Shum, S. J. B., A. M. Selvin, M. Sierhuis, J. Conklin and C. B. Haley (2006). Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. In Rationale Management in Software Engineering, A. H. Dutoit, R. McCall, I. Mistrik and B. Paech, Eds. Springer-Verlag.
6 Jonassen, D. and H. Remidez Jr. (2005). "Mapping alternative discourse structures onto computer conferences." International Journal of Knowledge and Learning 1(1/2): 113-129.
7
T., V. Ratnakar and Y. Gil (2005). "User interfaces with semi-formal representations:
a study of designing argumentation structures." Proceedings of the 10th international
conference on Intelligent user interfaces: 130-136
8 This exception (the Open Meeting Project's mediation of the 1994 National Policy Review) was effectively a comment collection system rather than a deliberation system.
scale argumentation systems will require, we believe, coming up with novel
solutions to a range of key design challenges, including:
Who can edit what? This has been handled in a wide range of ways in different
collaborative systems. Wikis, for example, typically allow anyone to change
anything, where the last writer “wins”. In chat, email, and threaded discussion
systems, every post has a single author; people can comment on but not modify
submitted posts. Each scheme has different pros and cons. Which scheme is
best for large-scale argumentation?
How do we ensure a high-quality argument structure? In an open system, we
can expect that many participants will not be experts on how to structure
argument networks effectively. People may fail to properly “unbundle” their
contributions into their constituent issues, options, and arguments, may link
them to the wrong postings, or may fail to give them accurate titles. Different
people may also conceptually divide up a problem space differently, leading to
the possibility of multiple competing issue trees. The sheer volume of postings
may make this redundancy less than obvious, and no single facilitator can be
expected to ensure coherence since he/she would represent a bottleneck in a
large-scale system. Getting the structure right, however, is a critical concern. A
good structure helps make sure that the full space of issues, ideas, and tradeoffs
is explored, and substantially reduces the likelihood of duplication.
How do we mediate attention sharing? In a small-scale face-to-face setting, it is
relatively straightforward to guide the group en masse through a systematic
consideration of all the issues. Facilitators often play a key role in this. In a
large-scale system, however, users may follow their own agendas, important
issues may go neglected, or discussions may become balkanized, with subgroups each attending to distinct parts of the argument structure without
interacting with each other. People, in addition, typically generate ideas by
extending or re-combining ideas previously proposed by others. Our goal
should be to maximize such potential synergy by helping them encounter a
wide range of ‘fertile’ ideas. The requisite networked interaction is
straightforward to ensure in small physically co-located meetings: how can we
achieve it in large, distributed settings, where communication often devolves
into a broadcast topology?
How do we enable consensus? In small-scale argumentation systems, consensus
(i.e. about which of the proposed ideas should be adopted) emerges off-line via
the face-to-face interactions amongst the participants, but in a large-scale
system, this consensus-making needs to be mediated or at least be made
discernible by the system itself. Funneling systems can address this gap, but to
date have been applied mainly to identifying consensus (e.g. by voting) among a
relatively small number of pre-defined, mutually exclusive options. Large-scale
argumentation systems introduce new challenges because they define not just a
few options, but rather an entire (and generally vast) design space. An
argument tree with only 10 orthogonal issues and 10 (non-mutually-exclusive)
options per issue produces, for example, (2^10)^10 (over 10^30) possible
solution options. The utility functions for these vast spaces will generally be
diverse (different stakeholders will have different preferences) and nonlinear
(with multiple optima). A large-scale argumentation system must, in other words, support a collective nonlinear optimization process. This is not a
‘problem’ with argumentation systems, but rather a result of their ability to
represent the inherent complexity of systemic problems.
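The combinatorics can be checked directly. The sketch below simply assumes that each of the 10 non-mutually-exclusive options under each of the 10 issues can be independently included in or excluded from a candidate solution:

```python
# Check of the figure quoted above: 10 orthogonal issues, each with 10
# non-mutually-exclusive options. Each option can be independently included
# or excluded, so each issue contributes 2**10 combinations and the full
# design space is (2**10)**10 = 2**100 candidate solutions.

issues = 10
options_per_issue = 10

per_issue = 2 ** options_per_issue      # 1,024 combinations per issue
design_space = per_issue ** issues      # 2**100 overall

print(f"{design_space:.3e}")            # ~1.268e+30, i.e. over 10**30
assert design_space > 10 ** 30
```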
The Collaboratorium
We have implemented and are evaluating an evolving large-scale
argumentation system, called the Collaboratorium, which explores how we can
address the issues identified above. The Collaboratorium is a web-based system
designed for concurrent use by substantial numbers (tens to, eventually,
thousands) of users. The primary interface for a user is the “Discussion forum”
(Figure 2), which allows users to create, view, edit, comment on, and organize
posts (issues, ideas, pros, and cons) in the argument structure. The
Collaboratorium incorporates functions that have proven invaluable in large-scale sharing systems, including email, user home pages, watchlists, search
functions, browse histories, and so on.
Figure 2. The Collaboratorium discussion screen
The Collaboratorium design addresses the issues mentioned above as follows:
Who can edit what? The wiki “anyone can change anything” model is
powerful because it helps ensure that diverse perspectives are incorporated and
content errors are corrected. But it also has some weaknesses. Uninformed
authors can overwrite expert contributions, which can discourage content
experts from participating. Credit assignment for good articles is muddied
because of the open authorship, making it harder to identify who is expert in
which domains. And controversial topics, as we have noted, can lead to edit
wars as multiple authors compete to give their own view pre-eminence in a
post. The forum “one author, many commenters” model, by contrast,
encourages expert commentary, but the community has much less opportunity
to impact the quality of a post. The Collaboratorium explores a middle ground between these alternatives. In our current system, only the creator of a post, or his/her assigned proxies, can edit it. Other users can submit suggestions to
be considered for incorporation by the authors. Anyone can rate a suggestion,
providing guidance on which ones are most critical to incorporate. This
approach has several important advantages in the context of large-scale
argumentation. Since each post represents just one of many possible
perspectives, it is less critical to ensure fully open authorship. Each post need
only express a single perspective as clearly as possible, enriched by community
feedback. This approach should radically reduce the likelihood of fruitless ‘edit
wars’, since users with divergent perspectives are not forced to compete for
dominance in a single post.
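The editing rule just described can be summarized in a short sketch: only the creator or an assigned proxy may edit a post, anyone else may submit a suggestion, and anyone may rate suggestions to signal which are most worth incorporating. The class names and the simple mean rating used below are assumptions for illustration, not the Collaboratorium's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Suggestion:
    author: str
    text: str
    ratings: List[int] = field(default_factory=list)   # e.g. 1-5 stars (assumed scale)

    def score(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

@dataclass
class EditablePost:
    creator: str
    body: str
    proxies: set = field(default_factory=set)
    suggestions: List[Suggestion] = field(default_factory=list)

    def can_edit(self, user: str) -> bool:
        # Only the creator or his/her assigned proxies may edit the post itself.
        return user == self.creator or user in self.proxies

    def edit(self, user: str, new_body: str) -> None:
        if not self.can_edit(user):
            raise PermissionError(f"{user} may only suggest changes, not edit")
        self.body = new_body

    def suggest(self, user: str, text: str) -> Suggestion:
        # Any user may submit a suggestion for the author to consider.
        s = Suggestion(author=user, text=text)
        self.suggestions.append(s)
        return s

    def top_suggestions(self) -> List[Suggestion]:
        """Guidance for the author: highest-rated suggestions first."""
        return sorted(self.suggestions, key=lambda s: s.score(), reverse=True)
```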
Ensuring a high quality argument structure: The Collaboratorium is
designed to support a continuum of formalization, allowing people to enter
content in the form that they are comfortable with, be it simple prose (in the
form of comments) or fully-structured argument maps. It also provides search
tools that help users find the issue tree branches on given topics, and provides
information on the relative activity of these branches (more active branches are
displayed in a larger font), so they can find the most attended-to of the places where their post could belong. Editors, a special class of users selected based on
their argument mapping skills and ability to maintain a content-neutral point of
view, are empowered to restructure these entries if necessary. This is analogous to what often happens in Wikipedia and its offshoots: some people focus on generating new content, while others specialize in checking,
correcting, and re-organizing existing content. We are also exploring the idea of
relying upon a small cadre of domain experts to create an initial argument
structure carefully designed to present the critical issues and options in an
intuitively organized way. This “skeleton” can then be fleshed out and, if
necessary, modified by the full user community.
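A rough sketch of the branch-finding support described above: match a user's query against branch titles, rank the hits by recent activity, and scale the display font with relative activity. The activity metric and the font mapping are assumptions for illustration; the Collaboratorium's actual heuristics may differ:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Branch:
    title: str
    activity: int          # e.g. posts and edits in a recent window (assumed metric)

def find_branches(branches: List[Branch], query: str) -> List[Branch]:
    """Return branches whose titles mention the query terms, most active first."""
    terms = query.lower().split()
    hits = [b for b in branches if any(t in b.title.lower() for t in terms)]
    return sorted(hits, key=lambda b: b.activity, reverse=True)

def font_size(branch: Branch, branches: List[Branch],
              smallest: int = 10, largest: int = 24) -> int:
    """Scale font size linearly with activity relative to the busiest branch."""
    peak = max((b.activity for b in branches), default=1) or 1
    return smallest + round((largest - smallest) * branch.activity / peak)
```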
How do these design choices help? We hypothesize that a well-defined
initial issue structure, used in conjunction with search tools, should help ensure
that users usually put posts in the right part of the issue structure. Editors can
re-structure and re-locate posts that are misplaced. Experience with sharing
systems has shown that people are strongly motivated to make those kinds of
“meta-level” contributions if this offers them entry to a visible merit-selected
class of users with special privileges. We also hypothesize that the activity
scores maintained by the system will help the user community converge on a
consensus argument structure. There are often many different ways that people
could organize a given body of ideas, and in an open system several competing
structures may appear within the same argument tree. Users will presumably
want to locate their posts, however, in the argument branch that is most active,
because that maximizes their opportunities to be seen and endorsed. This
should produce a self-reinforcing push towards consolidation in the argument
trees used in a given discussion.
Mediating attention sharing: The Collaboratorium helps mediate
community attention by maintaining an activity score for all postings and making it visually salient. Our hypothesis is that making activity information
salient will create a self-reinforcing dynamic wherein “fertile” parts of the
argument tree (i.e. ones where people are generating lots of new content) are
more likely to get attention and thereby be “exploited” rapidly, much in the
same way that pheromone trails allow ants to rapidly exploit food sources.
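One minimal way to realize this pheromone-like dynamic is an activity score that is boosted whenever new content arrives under a branch and decays over time, so that stale branches fade while fertile ones stay salient. The boost and decay constants below are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

DECAY = 0.9   # fraction of the score retained per time step (assumed constant)
BOOST = 1.0   # score added for each new contribution (assumed constant)

@dataclass
class ActivityTracker:
    scores: Dict[str, float] = field(default_factory=dict)   # branch id -> score

    def record_contribution(self, branch_id: str) -> None:
        """A new post, edit, or comment arrived under this branch."""
        self.scores[branch_id] = self.scores.get(branch_id, 0.0) + BOOST

    def decay(self) -> None:
        """Called once per time step (e.g. nightly): older activity fades away."""
        for branch_id in self.scores:
            self.scores[branch_id] *= DECAY

    def most_salient(self, n: int = 5) -> List[Tuple[str, float]]:
        """Branches the interface should make most visually prominent."""
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```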
Enabling consensus: We can generate, as we have noted, many possible
solutions from an argument structure, by combining the ideas therein in
different ways. The challenge is to identify which combination of ideas best
satisfies which goals. The Collaboratorium supports this by providing distinct
“goal”, “idea” and “proposal” branches in the argument structure for every
topic. Posts in the proposal (or “scenario”) branch are distinguished by the fact that they represent a specified combination of ideas from the other branches in the argument structure. The system enforces the rule that all proposals are mutually exclusive (i.e. represent distinct combinations of ideas), so that the proposal with the highest quality score represents the one that the community has
selected as the “winner”.
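The sketch below illustrates this consensus step under stated assumptions: each proposal is a distinct combination of idea identifiers, duplicates are rejected to keep proposals mutually exclusive, and the highest-scored proposal is the community's current selection. The quality score shown (a simple mean of ratings) is an assumption, not the system's actual metric:

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class Proposal:
    title: str
    ideas: FrozenSet[str]                       # ids of the ideas it combines
    ratings: List[int] = field(default_factory=list)

    def quality(self) -> float:
        # Assumed metric: mean community rating.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

class ProposalBranch:
    def __init__(self) -> None:
        self.proposals: List[Proposal] = []

    def add(self, proposal: Proposal) -> None:
        """Enforce mutual exclusivity: no two proposals may combine the same ideas."""
        if any(p.ideas == proposal.ideas for p in self.proposals):
            raise ValueError("a proposal with this exact combination of ideas already exists")
        self.proposals.append(proposal)

    def winner(self) -> Proposal:
        """The highest-scored proposal is the community's current selection."""
        return max(self.proposals, key=lambda p: p.quality())
```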
Next Steps
The Collaboratorium represents the latest of a series of argumentation systems
developed by the author over a period of 15 years (Klein 1997). The current
system has been used extensively by the developers to capture their
deliberations concerning how it should be designed, leading to an argument tree
with roughly 400 postings. Our next steps, currently under way, are to evaluate
the Collaboratorium with larger numbers (hundreds) of users. Two different
tests are being conducted. One uses a “bare” system (without any pre-existing
argument structure) to enable a discussion about what policies can best foster
technological innovation in Italy. The second test enables a discussion about
how humankind can respond to climate change challenges, building upon a predefined argument “skeleton” developed with the help of experts on technology,
policy, and climate issues. We will analyze the effectiveness of these
interventions using such measures as breadth of participation, quality of the
solutions selected by the participants, and speed of convergence.
Contributions
The key contribution of this work involves exploring how argumentation
systems can be scaled-up to enable effective collective intelligence, by adapting
ideas that have proven highly successful with large-scale sharing and funneling
technologies. A central issue is whether – given the surprisingly slow pace of adoption of small-scale argumentation systems – we will find that successful
large-scale systems are even more elusive. This is an open question for now.
One could argue, however, that in some ways large-scale systems have more
potential than small-scale systems. There is widespread disaffection with the
signal-to-noise ratio of current tools for mediating large-scale deliberations. It
seems clear that the number of distinct issues, options, and arguments in a
discussion will grow, after a certain point, much more slowly than the number
of participants. The qualitative increase in succinctness offered by an
argumentation system at large scales may thus prove quite compelling. Users’ incentives for meta-level contributions such as argument mapping will almost certainly increase as the system scales: people will have the sense that their
work has a bigger potential impact. A final point is that the kind of explicit
argumentation enabled by argumentation tools may make much more sense for
large-scale public debate than for smaller group settings where relationship
management is primary. In the latter case, implicit least-commitment
communication becomes paramount, and a tool that makes commitments
explicit can become a liability, rather than an asset.
Acknowledgements
I’d like to gratefully acknowledge the many contributions to this work made by
Marco Cioffi, Luca Iandoli, Kara Penn, and Mark Tovey.