
Remix Culture: Mixing Up Authorship
Nicholas Diakopoulos
College of Computing / Georgia Institute of Technology
nad@cc.gatech.edu | www.cc.gatech.edu/~nad
Introduction
Spurred by the rise and spread of digital computing technology, the culture of remixing existing ideas and media artifacts has reached a fervent pitch in modern cyberspace.
From its beginnings as a term used to describe mixing different versions of multi-track music
recordings in the 1970s, “remix” has broadened to include notions of mixing other types
of media such as images, video, literary text, game assets, and even tangible items such as cars and
clothing1. But when everyone is mixing ideas and media, who is the author or creator of the final
product? The remix trend, reflected in the contemporary zeitgeist and born out of philosophical
ideas of modernism and post-modernism as well as facilitated by digital technology, represents a
shift of authorial authority away from the auteur: a democratization of media production and
authorship. This paper will explore the evolution of the author in light of the remix trend. A range
of diverse remix artifacts will be examined through the lens of hypertext theory in order to shed light
on changing notions of the author, authority, and intertextuality.
Remix ABCs
Before delving into how the author relates to remix culture, it will first be essential to more
fully understand what “remix” means. Manovich relates a train station metaphor for remix in which
information or media is a train and each receiver of information is a train station (Manovich 2005).
At the station information gets mixed with other information and the train is loaded and sent to a
new station. But the network, as a palpable structure in electronic discourse, demands that we update this metaphor: airports, which have many paths leading to and from them, take the place of train stations. Remix can then be seen as the process of traversing a path in a synchronic
network of media; of navigating a hypermedia structure.
When we speak of “remix”, a core issue to understand is “what are we remixing?”. As remix
culture expands from such diverse applications as car customization2 to collaborative book writing3,
it becomes not only the remixing of media, but also of tangible artifacts and ideas. In Writing Space
Jay Bolter notes that since ancient times philosophers have believed that thinking and writing were
inseparable. The mind can be thought of as a writing surface and the act of thinking entails
imprinting on that surface in the language of thought (Bolter 2001). Taking this philosophical notion
of thought then, remix can indeed apply just as easily to media as to ideas. There is a subtlety in distinguishing between remix ideas and remix media, though. When we refer to “remix media” it
implies that the remixer started with concrete instantiations of media which were then segmented
and recombined. On the other hand, “remix ideas” may involve one or more people combining
ideas gleaned from different sources (i.e. interpretations of media) which are then potentially
instantiated in media. Collaborative authoring is typically used to describe the process of many
1. Remix Planet. http://wired-vig.wired.com/wired/archive/13.07/intro.html
2. Pimp My Ride. http://www.pimpmyride.com/
3. Code v.2. http://codebook.jot.com/Book
people combining their ideas in a concrete media instantiation. These different flavors of remix are
depicted in Figure 1.
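For readers who think in more formal terms, the distinction depicted in Figure 1 can also be read as a small graph model. The following Python fragment is only an illustrative sketch, not a notation proposed in this paper; the class and function names are invented here for the example.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    # A node in the remix graph: a person, an idea, or a media element.
    name: str
    kind: str  # "person", "idea", or "media"

@dataclass
class RemixGraph:
    # Directed edges record which sources were combined into which product.
    edges: List[Tuple[Node, Node]] = field(default_factory=list)

    def remix_media(self, remixer: Node, sources: List[Node], product: Node) -> None:
        # "Remix media": concrete media instantiations are segmented and
        # recombined directly into a new media element.
        for source in sources:
            self.edges.append((source, product))
        self.edges.append((remixer, product))

    def remix_ideas(self, people: List[Node], interpretations: List[Node], product: Node) -> None:
        # "Remix ideas": people combine interpretations gleaned from media,
        # which are then (potentially) instantiated in new media.
        for person in people:
            for idea in interpretations:
                self.edges.append((person, idea))
        for idea in interpretations:
            self.edges.append((idea, product))

# Hypothetical usage: a DJ recombining two existing tracks into a new one.
graph = RemixGraph()
dj = Node("DJ", "person")
track_a = Node("track A", "media")
track_b = Node("track B", "media")
new_track = Node("remixed track", "media")
graph.remix_media(dj, [track_a, track_b], new_track)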
As delineated in The Language of New Media, one of Lev Manovich’s central principles of new
media is that of modularity. Modularity implies the treatment of new media as a collection of
discrete samples, which can be combined into larger objects (Manovich 2001). Though the modularity principle of new media affords remixability more than traditional static media does, remix
applies equally well to either new media or old (e.g. photo collage). There may however be more
hurdles placed on the remix of traditional media if it is not digitized. These constraints are removed
through the digital representation of new media and the equal treatment of media assets by the
computer. Constraints on remixability also fluctuate within the digital domain according to the
underlying nature of the media. For example, remixing a music sequence might entail considering
tempo and rhythm, whereas remixing video may involve maintaining continuity.
Figure 1. Graph representation of different modes of remix as they relate to people and media elements.
Origins of Remix
The tradition of remix is actually quite old and dates back to the oral cultures of the ancient
Mediterranean. In Orality and Literacy Walter Ong describes how the ancient rhapsodes used to weave
their stories by putting memorized snippets of stories together in a formulaic way to suit the
demands of the audience (Ong 2002). In this way the oral poet “wrote” directly to the minds of the
audience by mixing bits and pieces of culturally significant stories together in real-time (Bolter 2001).
As the tradition of orality largely lost out to that of written culture, so too was lost the notion of remixing material for different audiences in the fixed domain of print. It took a long time for the concept of mixing ideas from oral culture to reach the visual (print) medium; this delay has numerous historical motivations, but may in part also be because static printed media does not afford remixing as easily as aural media does (Bolter 2001). Remix was “discovered” with fresh vigor by modernist artists such as
the Dada and Surrealist collagists of the early 20th century. Dadaist collagists such as Max Ernst were noted for their “unconventional use of familiar elements” (Adamowicz 1998). This was seen as a way of breaking with traditional mimetic aesthetics and exploring the modern aesthetic of juxtaposition.
The collagists thus began the modern trend of remix media, which was extended to music in
1972 by DJ Tom Moulton, who is said to have produced the first disco remixes (Manovich 2002).
The advent of technology for the digital manipulation of sound in the early 1980s (e.g. synthesizers,
sampling, and looping), of video in the late 1980s (e.g. Avid Non-Linear Editing) and of general
purpose digital media manipulation tools such as Photoshop, has largely contributed to lowering
barriers for people to remix existing media with the computer. In addition, the networked culture
that has arisen as part of the internet allows personally collected/created media to flow freely
between people, thus decreasing barriers to remix even further. As the digital medium affords
mixing media, not just ideas, we’ve come full circle from the days of orality. Perhaps it is the case
that technology is finally catching up to society’s and culture’s needs, or in fact that through its
affordances technology is determining what culture does: remix (Williams 1974).
As remix becomes more widespread and easier due to technology it is serving to put modern
and post-modern philosophical ideals into the hands of the amateur. Notions of self-reflexiveness,
juxtaposition, and montage which draw on the modernist art ideals developed in the early 20th
century are somewhat inherent to mixing potentially disparate media material (Lunn 1982). The
post-modern aesthetic of multiple perspectives is also innate to remix insofar as remixes can be seen
as different perspectives on an existing trajectory of media. However, as people start seeing these
multiple perspectives in media it may at the same time undermine the potentially unitary perspective
of the author.
Traditional Notions of Author
There are essentially two competing conceptions of the author: the author as a lone creative
genius, and the author as collaborator. In late medieval times the prestige of the individual began
to grow. This continued into the Enlightenment and through the romantic period eventually leading
to notions of the designer/author as “creative genius” (Barthes 1978, Fallman 2003). This romantic
instantiation of authorship eventually found itself applied to film in the auteur theory which was
developed in the 1950s by François Truffaut and André Bazin. The auteur theory is meant to apply
to the corpus of work of a director and consider that corpus as a reflection of the personal vision
and preoccupations of that director4. The major concern with the romantic notion of authorship is
that it “exalts the idea of individual effort to such a degree that it often fails to recognize, or even
suppresses, the fact that artists and writers work collaboratively with texts created by others
(Landow 1997).” Barthes has a similar criticism of romantic authorship in his essay The Death of the
Author, in which he notes that literature too is overwhelmingly centered on the author, his person,
history, tastes, and passions (Barthes 1978).
The alternate conception of the author is as a collaborator in a system of authors working
together. This paradigm of authorship is in fact the norm throughout history. Think of the myriad traditional productions that rely on the creative input of multiple people: orchestra, film production, architecture, etc. (Manovich 2002). This notion is reflected in Barthes’ argument that a
text does not release a single meaning, the “message” of the author, but that a text is rather a “tissue
of citations” born of a multitude of sources in culture (Barthes 1978). In this light, the author is
simply a collaborator with other writers, citing them and reworking their ideas.
4. Auteur Theory. http://en.wikipedia.org/wiki/Auteur_theory
If collaborative authorship is so dominant in production and writing, why does the romantic
notion of authorship even exist then? Of course there are various reasons for this. Manovich notes
that in modernity it is important to brand collaboratively authored media because recognizability is
so important for marketing. Branding thus transforms the collaborative view to the romantic view
for capitalistic purposes. In the case of auteur theory, the rise of the romantic notion of author is
likely a response by frustrated artists fighting for the credibility of the film director in a time when
film was not yet recognized as high art. Auteur theory can in some sense be seen as a last stand
against the rising tide of post-structuralism, which at roughly the same time in history was placing
emphasis on the reader of a text rather than the author.
The Rise of Reader-Response Theory
The beginning of interest in the reader’s part of the author-text-reader triumvirate comes
from Louise Rosenblatt and Jean-Paul Sartre in the 1930s. The conception of the reader as deserving
of critical attention then spread to post-structuralism and semiotics by the 1970s (Douglas 2000). By
the late 70s Barthes argued that “…the true locus of writing is reading” and that “…the reader is
the very space in which are inscribed, without any being lost, all the citations a writing consists
of…” (Barthes 1978).
The post-structuralist notions of stressing the role of the receiver as a maker of reality bled
into literary criticism in the form of reader-response theory. Reader-response theory considers the
myriad of ways that different archetypical hypothetical readers may perceive a text. Some examples
are the “intended” reader, someone reading the text in the context in which it arose, and the
“informed” or “ideal” reader who has developed the requisite linguistic, semantic, and literary
competence needed to understand the text (Rabinowitz). These archetypes are meant as critical
lenses through which different interpretations arise.
In contemporary semiotics we are used to the notion of “holes” in a text, which the reader
fills by making assumptions and inferring causes and effects (Douglas 2000). The privileging of the
interpretation of the reader when considering a text comes however at the expense of the author.
Bolter notes that in the late age of print, tensions between the authority of the author and the
empowerment of the reader have become part and parcel of the writing space (Bolter 2001).
The key point here is that post-structuralism, semiotics, and reader-response theory have all
been whittling down barriers to the widespread adoption of remix culture for decades. By placing
the focus of the textual experience on the reader, these theories encourage the reader to move beyond reading to
actual text production. A microcosm of this evolution of the reader can be seen in Fiske’s categories
for cultural production: semiotic production, enunciative production, and textual production (Fiske
1992, Shaw 2005). Semiotic production corresponds to the notion of reader-response in which the
reader of a media item is producing ideas or interpretations. This level of reader involvement might
be considered an example of remix ideas. Enunciative production is when people start articulating
meanings to others concerning their interpretations. Finally, textual production corresponds to
remix media in which cultural products act as the raw materials in the production of new cultural
products. But to make the jump from enunciative production to textual production requires
technical and/or artistic ability (Shaw 2005). Thus, even though reader-response theory facilitates
the importance of the reader, it is not until the reader has acquired the requisite skills necessary to
work with the media that she too can become a textual producer: a remixer. As the barriers to using authoring tools fall to the level of the average computer user’s knowledge: BANG!
Everyone is suddenly a remixer.
Hypermedia and Remix
In some sense remix media has much in common with hypermedia, which has developed theoretically and practically on its own over the past four decades. The basic notion that ties the
two concepts together is that remix media can be conceived of as a set of links to the original media
which have been reordered or otherwise re-edited. Hypermedia consists of a network of potential paths that a reader may take, possibly with default paths that define a linear trajectory through the network. Remix media is essentially a reworking of the trajectory through a collection of media,
which may also involve adding material to the trajectory that wasn’t present in the original media
trajectory.
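To make the analogy concrete, a remix can be represented as nothing more than an alternative trajectory over (a possibly extended set of) the original fragments. The short Python sketch below is purely illustrative; the fragment names and the remix function are hypothetical and not drawn from any hypertext system discussed here.

fragments = {
    "a": "opening shot",
    "b": "interview clip",
    "c": "archival footage",
    "d": "closing titles",
}

default_path = ["a", "b", "c", "d"]  # the linear trajectory of the "original" work

def remix(fragments, path, additions=None):
    # A remix is a reworked trajectory: reordered links into the original
    # media, optionally spliced with material absent from the original.
    pool = dict(fragments)
    if additions:
        pool.update(additions)
    missing = [key for key in path if key not in pool]
    if missing:
        raise KeyError("unknown fragments: " + ", ".join(missing))
    return [pool[key] for key in path]

print(remix(fragments, default_path))  # the author's default trajectory
print(remix(fragments, ["c", "a", "e", "b"], additions={"e": "new voice-over"}))  # a remix with new material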
To some extent hypermedia has also investigated the changing relationship between author
and reader which we are now considering in light of the remix trend. Through his experience
building a hypermedia site, The Dickens Web, Landow argues that, “…hypertext has no authors in the
conventional sense. … hypertext as a writing medium metamorphoses the author into an editor or
developer. Hypermedia, like cinema and video or opera, is a team production (Landow 1997).” Thus
authoring in hypermedia can be a very collaborative process, not only with other writers whose
text may be in the network, but also with the active interpreting reader.
The conception of the reader in interactive hypermedia moves beyond that of the passive interpreter of reader-response theory to that of co-author. Interactivity allows the reader of a
hypertext to choose a path through the network of interconnected media elements, thus generating a
personalized work simply through the trajectory of links chosen. The reader becomes co-author of
the work insofar as it only exists as the text that was created through their (potentially unique)
traversal (Manovich 2001). This “lean-forward” notion of the reader can be seen as a stepping stone
toward more active meaning construction such as becoming a text producer or remixer.
In exchange for the increased agency of the reader and her ability to choose a path through
the text, make annotations, or create links between existing text, the authoritativeness and autonomy
of the author is subverted (Landow 1997, Douglas 2003). Traditional notions of authority in
authorship are buttressed by the fixed changelessness of print in books, which promulgates the idea that the author has created something lasting, unique, and identifiable (Landow 1997). Mass production of identical copies from the printing press, as well as resource barriers to becoming a publisher, also support homogeneity and the authority of the author (Bolter 2001). In contrast, the
changeability of hypertext and ephemeral nature of digital media at large supports the loss of
authorial control. Furthermore, the network nature of hypertext with its fragments of reused
material disintegrates the thoughtful voice of the author (Landow 1997). Finally, since every digital
technology requires some form of platform to run on, i.e. the environment in which the software
runs, this also dictates to some degree how autonomous the hypertext may be (Douglas 2003). The
authority of the author is thus further diminished through the constraints imposed on the text by the
software environment.
The notion of intertextuality, which draws on the ideas of such theorists as Barthes, Derrida
and Foucault, treats texts as networks of associations with other texts which may be extra-physical
to the work itself (Douglas 2003). Barthes saw this intertextuality as beginning with the author as
text, a concept akin to our notion of remix ideas. Hypertext allows one to make intertextual links
explicit and at the same time allows the reader to explore the intertextuality of the text as they
perceive it (Landow 1997). In traditional literature intertextuality can be rather passive, with the
reader potentially not even noticing a tacit reference or allusion to another text. Different discourse
communities have different strategies for dealing with intertextuality. Scientific discourse, for
example, greatly relies on citation and building upon the ideas of others within a community. On the
other hand, we have something like a newspaper column, which may form a dialogue with other
columns addressing similar topics, but never explicitly cites these other columns.
Remix media and ideas have an intrinsically intertextual nature insofar as they cannot exist
outside their network of references to other media. Fragments can always be traced back to their
original piece of media, thus there is always at least one reference. In some forms of remix media,
intertextuality is explicit (or at least extremely obvious), whereas in others it is left unexpressed. It is
likely that different discourse communities will develop or impose their existing notions of
intertextuality on remixed media within that community. This of course relates back to how the
community views authorship and how much authority is lost in exchange for the recognition of
intertextuality.
Analysis of Remix Domains and Artifacts
In this section I will embark on developing a space of remix artifacts as they vary along some
key dimensions elucidated in the prior section on hypertext theory. Issues such as authority and
constraints imposed on the author, flavor of intertextuality, positioning between remix ideas and
media, and degree of collaboration will be analyzed in the context of a range of artifacts stemming
from audio, video, text, and games. It is hoped that by looking at a broad spectrum of these artifacts
we can begin to tease out different flavors of remix culture with implications for developing a
parameterization of the remix space. This may be useful either generatively for designing new remix
communities or analytically to see where an existing remix community is positioned.
Audio
The canonical example of remix media comes from the first disco remixes made in 1972 by
DJ Tom Moulton (Manovich 2005). This involved the relatively simple manipulation of the
weighting of different soundtracks of a multi-track music recording. Contemporary DJs routinely
mix together different tracks whether it be as complete remixes of existing songs or just as segues
between tracks in a set. This is considered standard DJ practice as long as the appropriate rights are
secured for the remixed tracks. DJ Danger Mouse recently extended the idea to apply to entire
albums of songs by mixing together the Beatles’ White Album with Jay-Z’s Black Album to create the
Grey Album, though this was without the consent of the rights holders5.
Constraints on the manipulation of music such as maintaining rhythm, beat, and timbre form a
space where the abilities and skills of the DJ can be recognized as a layer on top of the originally
authored music. Thus remixes are in and of themselves usually considered to be new authored
content, with the DJ being recognized as the authority and author of the particular collection and
arrangement of samples. The choice of samples, as well as the skill in their arrangement, is enough to
earn the moniker of “author” for the DJ. Typically the intertextuality of the remixed music is
explicitly acknowledged in the track title, though just by listening to the music someone might not
necessarily know or recognize the original tracks or samples that the remix is based on. DJs who
remix music are typically working by sampling existing tracks, thus they tend toward the end of
remix media rather than remix ideas. Also, it is not uncommon for DJs to collaborate with other
DJs on special tracks or remixes. Sometimes they even form names for themselves working in pairs
such as Gabriel and Dresden or Kruder and Dorfmeister.
5. Guide To Remix Culture. http://mutednoise.com/article.cfm?article=Guide-to-Remix-Culture:-Part-1. Accessed 11/15/2005
As tools and authoring environments become more user friendly we are also seeing remix music
extend beyond just DJs. In the Depeche Mode Remix Contest6 participants online are encouraged to
remix using the provided samples and loops from two Depeche Mode songs, “Dream On” and “I
Feel Loved”. Participants can remix either song or create a blend of the two, which is then
judged by the contest sponsors. Depeche Mode music can be classified as a form of electronica,
which from its growth out of sampling, looping, and mixing, seems to readily lend itself to this form
of remixing. In the case of this contest, the intertextuality of the remixes is explicit insofar as it is
known a priori what songs the samples are coming from. The authority of the participant is thus
reduced compared to that of the DJ in that she has less leeway in terms of where the samples
originate from.
A different flavor of online remix contest was sponsored by Penguin Books7. They made spoken
samples from thirty different classic novels available for download and created a competition in which one or more samples were to be utilized in a song composition. Excerpts from titles such as James Bond, Charlie and the Chocolate Factory, Dracula, and Spot the Dog, as well as works by Charles Dickens, were available to be cut up, re-edited, and reversed in the songs. Again, because of the rules of the contest,
the intertextuality was explicit in that each song had to use at least one of the thirty provided samples.
At the same time, because the samples were spoken word and were to be incorporated into a song,
other samples such as beats and melodies could also be used. This represents a relaxation of the
constraints found in the Depeche Mode contest and thus additional creative room in which the
participant could maneuver.
A final example of remix in the audio domain comes from the Gorillaz, a virtual band whose
front consists of cartoon characters. The music makes many cultural quotations, but not by using sampled material. Albarn, one of the musicians behind the Gorillaz music, said in a Wired
interview, “If I hadn't spent all those years learning how to play instruments, I'd be using a sampler
to put all these pieces together. Instead, I use a songwriting method that's a lot like sampling without
actually digitally sampling. Gorillaz is how I take everything I hear and filter it” (qtd. in Gaiman).
Thus the Gorillaz is operating in the realm of remix ideas, with an artist filtering those ideas and
generating new media (songs) based on them. Because the songs are new material which is not
directly based on existing media samples, the authority of the song-writers as authors is well
established. The intertextuality could be considered mostly implicit, since a perceiver would have to
be very familiar with the original pieces to recognize a similarity in the Gorillaz music.
Video
Video represents another traditional medium in which remix is having an impact.
With the growth in popularity of DVDs we see more and more the ability to view slightly different
“mixes” of a film, including director and actor commentary or special director’s cuts of the film. The
highly evolved practice of video editing can, through subtle manipulation, handily create different
meanings and interpretations out of the same video material. In some sense the process of
documentary film making is a remix process in which a potentially large corpus of video footage is
mixed together and edited to create the story that the director wants to tell. In the past the ability
and resources to find this material and the equipment needed to edit it was prohibitive for
individuals. But again, as authoring tools become easier and since the barriers to editing in the digital
domain have been significantly reduced, the average computer user is now empowered with the
ability to re-edit, remix, and refashion video material.
6. Depeche Mode Remix Contest. http://www.acidplanet.com/contests/depechemode/ Accessed 11/15/2005
7. Penguin Launch Remix Project. http://www.djmag.com/newsfeat118.php Accessed 11/15/2005
Following the lead of audio remix contests, online video remix contests have also begun to
appear. In the Bring Dead Art Back to Life contest8 the call was to re-edit pieces of the classic
horror film, Night of the Living Dead, together with a student movie, Amid the Dead, using at least one
clip from each. Participants were encouraged “to make a music video, comic short, or other art.”
Since both movies are in the public domain this contest (like the audio contests) was legal. This
instance of remix video has very similar parameters of authority and intertextuality as that of the
Depeche Mode remix contest.
Anime Music Videos (AMVs)9 represent a very interesting instantiation of remix video since
a whole community of enthusiasts has grown up around them (Shaw 2005). AMVs are fan-produced
music videos that combine content from various anime series, set to popular music.
Although AMVs have been produced since the 1970s, their quality and sophistication have since
been enhanced by better digital tools and technology. AMVs represent somewhat of an analogue in
the video domain to the music DJ remixing audio. Samples from one anime series or a wide range of series are combined into a single work through the lens of the remixer, who may have a
theme related to the music that the video is set to.
The community aspect of AMVs is interesting in that it regulates the elements of authority
and intertextuality. Though just watching the remixed video will not explicitly tell the viewer where
the source material is coming from, on the description webpage of each AMV there are fields for
both the song that it is based on as well as which anime series were used as samples for the video
material. This serves to make the intertextuality of the AMVs more explicit since enthusiasts can
follow hypertext links to other AMVs based on the same source material. Authority of authors of
AMVs in the community is tacitly acknowledged in the site FAQ: “Someone has stolen footage that is obviously from my video, what should I do?” The FAQ’s response is that not acknowledging the
source of one’s material is clearly rude and that it is appropriate to contact the infringing person and
ask for explicit crediting. If this doesn’t lead to an amicable settlement, the case can be taken to the site administrator who, after looking at how egregious the copying is, may choose to remove the
offending video from the site. The author as creator of an AMV thus has her rights secured by the
community norms.
Text
The domain of text represents a very broad space in which remixers can operate. Due to
widespread literacy and the ease of using digital text editors, the remixability of text has reached the
masses in the form of the World Wide Web already. The typical web example of remix text is that of
Wikipedia. The Wikipedia is an online, collaboratively-written encyclopedia, which after about five
years online has over 836,000 articles on a range of topics10. Though contributors are virtually
unconstrained in what they can write on, there are community structures in place, such as sysops,
bureaucrats, and stewards, that enforce the values of the community (Coffin 2005). Articles on
Wikipedia are not credited to any one particular author, which mirrors Landow’s notion of the
author as editor or developer as part of a team collaborative effort (Landow 1997). The
intertextuality of the Wikipedia is explicit in that it allows for direct hypertext links between related
entries as well as a list of external links at the end of an article, which situates the entry with respect
to other online resources. What the Wikipedia does not make explicit are non-digital resources that
may have been used as references in writing an entry. Wikipedia is a good example of both remix
8. Bring Dead Art Back To Life. http://undeadart.org/ Accessed 12/1/2005
9. Anime Music Videos. http://www.animemusicvideos.org/ Accessed 11/19/2005
10. Wikipedia. http://www.wikipedia.org/ Accessed 12/04/2005
ideas and remix media. Contributors may be drawing on a wide array of background knowledge and
ideas which then get combined in an entry. At the same time, since the process is collaborative, there
are multiple contributors who are mixing media (text fragments) together to form the “final” entry.
A completely different take on remix text is that of Fanfiction11, which involves enthusiastic
fans who create new textual stories in the diegesis of an original game, book, TV show, or movie.
The original author’s world is thus co-opted by these individuals who create their own space-time
samples in the diegesis. Fanfiction is a very good example of remix ideas since it is not the media of
the original piece being remixed, but rather the characters, settings, and stories being recombined in
the mind of the fan-author. The pieces are explicitly intertextual in that they are categorized with
respect to the existing piece of media that they make reference to.
Fanfiction bears resemblance to role playing games such as Dungeons & Dragons or
Shadowrun in which the story world provides a set of constraints which get filled in by the role
players. In the same way that the rules of these games provide a framework, fanfiction writers take
existing media and use it as a framework for creating new textual media which can be understood
within that framework. The idea of games providing a framework in which remixers can operate is
explored further in the next section.
Games
Games are somewhat different from text, audio, or video in that they rely more on the
notions of interaction and simulation than on traditional narrativity. In some sense this makes them
prime targets for end user remixability since the difficult traditional narrative constraints of plot and
story are relaxed. Many games are specifically designed for remixability (e.g. Doom, The Sims,
Unreal) by making elements in the game swappable. From a marketing standpoint, the fan communities that arise around after-market modifications to the games are generally a good way to maintain enthusiasm; engineered remixability in this sense thus extends the life of the game.
The degree of remixing allowed in a game is ultimately a function of how much the
programmer/designer/author of the software wants to allow for. The author of the game thus
sets constraints that define what can and can’t be swapped into a game as well as what the diegetic parameters of the game environment are. This points toward the role of the game designer
as a facilitator in the gamer’s construction of her own narrative out of the gaming experience
(Wright 2004). The authors of the game thus maintain a great deal of authority, but explicitly
relinquish some of that authority by allowing users to modify and explore the game space. This bears
a similarity to narrative hypertext in that a reader is inserted in the universe intended by the author,
but that the reader can only explore as much as the author allows for (Douglas 2000). Some hackers
could undermine the authority of the author further by changing the underlying engine on top of
which a game runs, but the degree of expertise necessary to do this is beyond that of a typical gamer.
The canonical example of a remixable game is Doom, however here I will talk about a
somewhat more up-to-date 3D shooting game, Unreal Tournament. Unreal Tournament not only allows users to change the graphics of characters and levels, but also has extensive online communities such as Unreal Tournament 200X Files12 which allow for the exchange of a range of modifications
(called mods) to the game. Different categories of mods include skins for characters, different
models, mutators (changes to the behavior of e.g. weapons or gore), and maps. Mods are typically
not explicitly intertextual, though there may be themed skins for example which make reference to
11. Fanfiction.net. http://www.fanfiction.net/ Accessed 12/04/2005
12. Unreal Tournament 200X Files. http://unrealtournament2004.filefront.com Accessed 11/15/2005
widely known movie characters. One example model on the Unreal Tournament Files site was for
Boba Fett, a highly recognizable Star Wars character. Like the AMV example, it is the online
community and sharing aspect of the assets which makes the otherwise implicit intertextuality more
explicit. Remixer authority in Unreal Tournament is constrained by the engine and environment in
which the game runs, thus a large degree of authorial control is maintained by the game designers.
This is typical of most games where modularity and remixability are built into the game.
Who is the Author and What is her Future?
As can be seen from the wealth of examples presented in the previous section, the nature of
the author in digital media is evolving. With the increased emphasis on the reader and the prevalence
of the active reader of hypermedia on the internet, more and more readers are coming to see
themselves as potential text producers. At the same time the advent of better technology and
authoring interfaces has vastly simplified constraints and difficulties in manipulating media. This has
further fueled the reader’s potential to become a remixer.
At the core of authorship is the process of making choices about the structure and content
of media elements within the constraints of a particular medium (Diakopoulos 2005). Different
discourse communities have different notions of how much of a contribution in structure and
content results in authorship. It will thus be left to the individual remix communities at large to
define what constitutes authorship and what does not. No single definition of authorship can span
different remix communities. Key to this definition is an understanding of the granularity of the
media elements being mixed together. Rarely does one see a DJ referencing the individual beat
samples she uses, but at the level of a sequence of beats taken from an existing song this suddenly
must be acknowledged in order to maintain the validity of the DJ. Different media have different
granularities at which authorship is recognized and requires referencing.
What will the author look like in the future? As I alluded to above, the notion of author will
vary across communities, with varying amounts of authority in structure and content surrendered to
reader-authors. It may be that we begin to see the author in the sense of a game designer; one who
explicitly designs which structures and contents may be manipulated by the reader. The digital
platform affords this since every piece of complex media must run in a constrained environment.
A more appropriate question may be: will we care who the author is in the future? With
unprecedented amounts of media immersing us on a daily basis, who can keep track of who made
what? Brands will persist as condensed representations of collaborative authorship, but when the
web becomes a series of microcontent aggregated in different ways, who will even notice who was
the author of individual fragments? As the value and utility of information will be in the quality of
the aggregation, authority may in fact shift to this meta level of authorship. But in fact, if the
computer as aggregator can construct media through the constrained selection of structure and
content, perhaps it will be the true author of the future.
References
Elza Adamowicz. Surrealist Collage in Text and Image. Cambridge University Press. 1998.
Roland Barthes. Image, Music, Text. “The Death of the Author.” Hill and Wang. 1978.
Jay David Bolter. Writing Space. 2nd Ed. Lawrence Erlbaum Associates. 2001.
Jill Coffin. “Transfer of Open Source Principles to Diverse Collaborative Communities.”
Conference on Online Deliberation: Design, Research, and Practice / DIAC-2005, Stanford
University, Stanford, CA, USA.
Nicholas Diakopoulos and Irfan Essa. Supporting Personal Media Authoring. ACM Multimedia
Workshop on Multimedia for Human Communication (MHC). 2005.
J. Yellowlees Douglas. The End of Books – Or Books without End? University of Michigan Press. 2000.
Daniel Fallman. Design Oriented Human-Computer Interaction. CHI 2003.
John Fiske. “The Cultural Economy of Fandom,” in The Adoring Audience: Fan Culture and
Popular Media. 1992.
Neil Gaiman. Keeping it (Un)Real. http://www.wired.com/wired/archive/13.07/gorillaz.html.
Accessed 11/17/2005.
George Landow. Hypertext 2.0: The Convergence of Contemporary Critical Theory and
Technology. Johns Hopkins University Press. 1997.
Eugene Lunn. Marxism and Modernism. University of California Press. 1982. pp. 34-42.
Lev Manovich. “Remix and Remixability.” 2005.
http://www.manovich.net/DOCS/Remixability_2.doc Accessed 11/17/2005.
Lev Manovich. The Language of New Media. The MIT Press. 2001.
Lev Manovich. “Who is the Author? Sampling / Remixing / Open Source.” 2002.
http://www.manovich.net/DOCS/models_of_authorship.doc Accessed 10/28/2005.
Walter Ong. Orality and Literacy: the technologizing of the word. Routledge. 2002.
Peter Rabinowitz. “Reader-Response Theory and Criticism.”
http://www.press.jhu.edu/books/hopkins_guide_to_literary_theory/readerresponse_theory_and_criticism.html. Accessed 11/15/2005
Ryan Shaw. Media Streams Metadata Exchange: User Study and Initial Requirements. 2005.
Raymond Williams. The Technology and the Society. In The New Media Reader. Eds. Noah
Wardrip-Fruin and Nick Montfort. 1974.
Will Wright. First Person: New Media as Story, Performance, and Game. “Can there be a form
between a game and a story?” response essay. 2004.