J. TECHNICAL WRITING AND COMMUNICATION, Vol. 37(3) 267-279, 2007
CONTENT ANALYSIS AS A BEST PRACTICE
IN TECHNICAL COMMUNICATION RESEARCH
ALEXANDER THAYER
Microsoft Corporation
MARY EVANS
National Oceanic & Atmospheric Administration
ALICIA MCBRIDE
Friends Committee on National Legislation
MATT QUEEN
Consultant
JAN SPYRIDAKIS
University of Washington
ABSTRACT
Content analysis is a powerful empirical method for analyzing text, a method
that technical communicators can use on the job and in their research. Content
analysis can expose hidden connections among concepts, reveal relationships
among ideas that initially seem unconnected, and inform the decision-making
processes associated with many technical communication practices. In this
article, we explain the basics of content analysis methodology and dispel
common misconceptions, report on a content analysis case study, reveal the
most important objectives associated with conducting high quality content
analyses, and summarize the implications of content analysis as a tool for
technical communicators and researchers.
INTRODUCTION
Technical communicators have access to a multitude of research methods, but
one of the most powerful yet least understood methods is content analysis. Content
© 2007, Baywood Publishing Co., Inc.
doi:10.2190/TW.37.3.c
http://baywood.com
analysis is a research method that empirically examines the characteristics
of messages. Neuendorf defines content analysis as “the systematic, objective,
quantitative analysis of message characteristics” [1, p. 1]. It has also been defined
as “a research technique for making replicable and valid inferences from texts
(or other meaningful matter) to the contexts of their use” [2, p. 18]. In practice, the
method involves tallying the number of specific communication phenomena in
a given text (such as the number of references to a specific person in the news or
the number of passive voice clauses in a paragraph) and then categorizing those
tallies into a taxonomy from which inferences can be made.
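To make the tally-and-categorize step concrete, a minimal sketch in Python might look like the following; the category names and term lists are hypothetical illustrations, not drawn from any published coding scheme:

```python
# Minimal sketch of a manifest tally: count occurrences of pre-defined
# terms in a text, then roll the counts up into broader categories.
# The categories and term lists below are hypothetical illustrations.
import re
from collections import Counter

TAXONOMY = {
    "references_to_smith": {"smith"},            # mentions of a specific person
    "first_person_pronouns": {"i", "me", "my"},  # a simple stylistic feature
}

def tally(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {category: sum(counts[t] for t in terms)
            for category, terms in TAXONOMY.items()}

print(tally("Dr. Smith said that I should revise my report before Smith reviews it."))
# {'references_to_smith': 2, 'first_person_pronouns': 2}
```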
Krippendorff, a well-known content analysis expert, notes that the use of
content analysis “can be traced back to inquisitorial pursuits by the Church in the
17th century” [2, p. 3] and more recently to the Great Depression of the 1930s. At
that time in the United States, newspapers came under tremendous social scrutiny
as politicians searched for root causes of the economic downturn. Social scientists
became aware of “newspaper analysis” as a research method, and eventually the
term “content analysis” emerged [2, p. 6].
As with other research methods, researchers usually should formulate a testable
hypothesis or research question before undertaking a content analysis. Neuendorf
recommends that “In a scholarly content analysis, the variables should be linked
to each other in the form of research questions or hypotheses” [1, p. 107]. For
example, when a content analysis is conducted for academic research purposes,
exploratory examinations should lead to refinement of the initial research questions, development of hypotheses, and creation of reference materials necessary
to conduct a content analysis.
In contrast, content analysis is sometimes used as a systematic framework for
exploring a body of content without first formulating initial hypotheses or research
questions. Such exploratory work leads to the use of “emergent coding,” in which
researchers can develop a preliminary set of questions about a body of content
as they begin to analyze the content. For example, by examining the verbal comments of usability test participants, practitioners might sense a number of common themes
among the participants and can then develop a systematic, empirical way to
explore those themes.
With regard to technical communication, content analysis represents one of
the few quantitative ways to examine the properties and composition of written
and spoken language. Content analysis can be applied to technical communication
in a multitude of ways, just a few of which we mention here:
• Usability engineers might quantify respondent comments and interpret responses to open-ended questions. Users’ comments recorded during videotaped usability sessions would be classified as positive, neutral, or negative with regard to the functionality of the product being tested (a brief sketch of this kind of classification follows this list).
• Technical communicators might develop a framework for gathering feedback
on a documentation set via user forums, message boards, and other online
environments. These online texts would provide a set of comments that could
be classified as positive, neutral, or negative with regard to documentation
quality, completeness, and so on.
• Corporate researchers might quantify organizational levels of motivation,
job satisfaction, attitude, and morale by studying a set of employee e-mail
messages. These messages would be examined for keywords that suggest
positive or negative levels of morale, job satisfaction, productivity, and so on.
• Technical communicators might document current Web design practices. By
selecting a set of Web sites and examining specific elements of their design,
communicators could develop a taxonomy that describes current practice
in Web design.
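As a rough illustration of the first two applications above, the following Python sketch classifies free-form comments as positive, neutral, or negative; the keyword lists are hypothetical placeholders for a real coding scheme:

```python
# Sketch of classifying free-form user comments as positive, neutral,
# or negative. The keyword lists are hypothetical; a real study would
# derive them from a code book and validate them with human coders.
from collections import Counter

POSITIVE = {"easy", "clear", "helpful", "fast"}
NEGATIVE = {"confusing", "slow", "broken", "frustrating"}

def classify(comment: str) -> str:
    words = set(comment.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

comments = [
    "The search feature is fast and helpful",
    "The setup wizard was confusing and slow",
    "I used the export function twice",
]
print(Counter(classify(c) for c in comments))
# Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```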
Many excellent resources describe content analysis in great depth (see, for
example, [1-4]). Our goal in this article is to highlight some important practices
for content analysis, practices that are based both on our own work and that of
other technical communication researchers. We present a case study drawn from our own recent content analysis of more than 300 English-language university Web sites; the purpose of that study was to investigate differences in tone formality across 20 countries. Our research design
and methodology illuminate the process of applying content analysis methods
to technical communication research. We frame this case study using the basics
of content analysis methodology, along with some important yet often overlooked
objectives associated with conducting high quality content analyses. We then
summarize the implications of content analysis as a tool for technical communicators and researchers.
BASIC CONTENT ANALYSIS TERMINOLOGY
This section examines the basic terminology associated with content analysis,
including a description of manifest and latent content analyses, units of analysis
and observation, and deductive and inductive measurement techniques.
Manifest and Latent Forms of Content Analysis
A content analysis can occur at one of two levels of analysis in terms of the
content being analyzed. Both levels can be included in a content analysis
research project.
• Manifest analysis involves simply counting words, phrases, or other “surface”
features of the text itself. Manifest content analysis yields reliable quantitative data that can easily be analyzed using inferential statistics. This is the
level of analysis we used in our case study as we tallied instances of passive
voice clauses, personal pronouns, informal punctuation, and so on.
• Latent analysis, on the other hand, involves interpreting the underlying
meaning of the text. Latent analysis is the more difficult of the two levels of
analysis because the researcher must have a clearly stated idea about what is
being measured. For example, to measure the amount of chauvinist language
in Hemingway’s novels, it is necessary to first define “chauvinist language.”
That definition should ideally follow the work of other researchers who
have already developed proven lists of chauvinistic words and phrases. When
there is no existing research to inform a content analysis, it is necessary to
rigorously define the latent variable (in this case, chauvinist language) so
that other researchers may extend the analysis to other bodies of text.
The value of latent content analysis comes from its ability to expose previously masked themes, meanings, and cultural values within texts. Latent content
analysis is popular among qualitative researchers who test their hypotheses
on small subject pools or who use research methods that do not guarantee
repeatable results.
Content Units, Coding Methods, and Coding Schemes
Content analyses often encompass two types of content units. The unit of
analysis concerns the general idea or phenomenon being studied; in our study
of university Web sites, the unit of analysis was the World Wide Web itself.
The unit of observation concerns the specific item measured at an individual
level; in our study, the unit of observation was the “About Us” page from each
university Web site.
There are two general measurement methods for examining units of observation, a process known as “coding.” Deductive measurement requires the
development of specific coding categories before a researcher starts a content
analysis. For example, to count the number of personal pronouns in a Hemingway
novel, a researcher would need to establish a coding category for each type
of personal pronoun; the number of pronoun occurrences in the text would
be tallied using these categories. Deductive measurement is useful with an established set of coding categories, or if a clear hypothesis or research question
exists at the outset of the content analysis.
The second measurement method is inductive measurement. This method
supports the practice of emergent coding, which means that the basic research
question or hypothesis for a formal content analysis emerges from the units of
observation. It entails creating coding categories during the analysis process. For
example, to determine whether Hemingway novels are chauvinistic, a researcher
might decide to examine thousands of Hemingway’s words and sentences before
creating coding categories. Emergent coding is useful in exploratory content
analysis. For example, if the presence of social values in a text is the general
research topic, inductive category creation can help researchers determine which
social values to investigate.
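The spirit of emergent coding can be suggested with a small Python sketch; the categories, keywords, and comments below are hypothetical, and in practice the judgment calls belong to human researchers rather than keyword matching:

```python
# Sketch of emergent (inductive) coding: start with a partial set of
# categories and record unmatched observations as candidates for new
# categories. Keywords and comments are hypothetical illustrations.
categories = {
    "navigation": {"menu", "link", "navigate"},
}
candidate_codes = []  # themes noticed during coding, to be formalized later

def code_comment(comment):
    words = set(comment.lower().split())
    for name, keywords in categories.items():
        if words & keywords:
            return name
    candidate_codes.append(comment)  # flag for the researchers to review
    return None

for c in ["The menu is hard to navigate",
          "The error messages use too much jargon"]:
    code_comment(c)

print(candidate_codes)
# ['The error messages use too much jargon']  -> suggests a new category
```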
No matter which coding method suits the research situation, every analysis
ideally begins with an existing coding scheme. A “coding scheme” is the set of content categories and the code book within which all instances of the content being analyzed are recorded. Every coding scheme consists of the following elements:
• Master code book—Provides the coders with explicit instructions and defines
each word/phrase/aspect to be analyzed; the master code book explains how
to use the code sheet.
• Code sheet—Provides the coders with a form on which they note every instance of every word/phrase/aspect to be analyzed; the code sheet lists all of the coding categories (a sketch of such a sheet follows this list).
• Coding dictionary—Used with computer-based content analyses; “A set of
words, phrases, parts of speech . . . that is used as the basis for a search of
texts” [1, pp. 126-127].
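These elements map naturally onto simple data structures. The Python sketch below represents a code sheet as one record per unit of observation with one field per coding category, alongside a small coding dictionary; all of the category names and dictionary entries are hypothetical:

```python
# Sketch of a code sheet (one row per unit of observation, one column
# per coding category) and a small coding dictionary for computer-based
# searches. Category names and dictionary entries are hypothetical.
import csv

CODING_CATEGORIES = ["unit_id", "word_count", "passive_clauses",
                     "first_person_pronouns", "bulleted_list_present"]

CODING_DICTIONARY = {
    "first_person_pronouns": ["i", "me", "my"],  # words searched for in the text
}

rows = [
    {"unit_id": "site_001_about_us", "word_count": 312,
     "passive_clauses": 4, "first_person_pronouns": 2,
     "bulleted_list_present": 0},
]

with open("code_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=CODING_CATEGORIES)
    writer.writeheader()
    writer.writerows(rows)
```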
There are many well-defined coding schemes (also known as “scoring
manuals”) in psychology and the social sciences [5]. However, these schemes
are not always applicable to technical communication research topics, as such
schemes typically use people as the units of analysis and observation. Instead,
technical communicators typically develop their own original schemes for their
studies, a process that requires significant attention to a number of small details.
The next section explains this study design process in the context of our case
study, which reveals how our technical communication research team created
a new coding scheme.
HOW TO ENSURE THE QUALITY OF A
CONTENT ANALYSIS
Using our investigation of tone formality within international English-language
Web sites as an illustrative case study, this section describes a few of the best
practices for creating an effective study design and conducting a high quality
content analysis. It is important to note that technical communicators need not
begin their content analyses from scratch. Our case study uses a sampling frame and content units similar to those of several recent content analyses of Web sites [6-9].
Case Study Background and Design
We set out to discover whether the tone of international, online communication
is becoming more homogeneous as the Internet gains currency around the world.
We were interested in learning how the members of specific cultures express
themselves to the rest of the world using the Internet. We were also curious
to see whether the Internet has, in a sense, homogenized the ways in which
textual information is presented online. For example, does Malaysian Web content
resemble British Web content? Or does Web-based information represent a
heterogeneous mixture of views from a multitude of countries?
One way to test whether a global Internet culture is emerging is to study the
tone of written English. Specifically, we examined the formality of written
English that is distributed online. As stated earlier, our unit of analysis was the
World Wide Web, and our unit of observation was the “About Us” content of
university Web sites from 20 countries. Using deductive measurement techniques,
we focused on studying Web pages from those countries where English is either
the official language or the language of instruction. The following sections
explain our sampling methodology and technique.
For our study, we chose to use human coders rather than computer-based
analyses of content. Broadly speaking, content analyses can be conducted using
human coders to parse and interpret data, or using a computer to parse data and at
least one human to interpret the data. While computer-based content analysis
is powerful, such analyses are limited in what they can economically tally and
what they can logically interpret. For example, there are no inexpensive software
applications that can quantify measures of written tone or style. Widely available
software packages such as Microsoft Office are unreliable even for identifying
something as seemingly straightforward as passive voice clauses.
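To illustrate why such features resist simple automation, consider a naive passive-voice heuristic of the kind a basic tool might use; this sketch is purely illustrative and does not describe any particular product:

```python
# Naive passive-voice heuristic: a form of "to be" followed by a word
# ending in -ed or -en. It illustrates why simple tools are unreliable:
# it misses irregular participles ("the report was made public") and
# flags adjectival constructions ("the team was excited"), among others.
import re

PASSIVE_PATTERN = re.compile(
    r"\b(am|is|are|was|were|be|been|being)\s+\w+(ed|en)\b", re.IGNORECASE)

def naive_passive_count(text: str) -> int:
    return len(PASSIVE_PATTERN.findall(text))

print(naive_passive_count("The site was designed by students."))  # 1 (correct)
print(naive_passive_count("The report was made public."))         # 0 (false negative)
print(naive_passive_count("The team was excited to help."))       # 1 (false positive)
```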
Therefore, in our study we relied on human coders to examine the texts included
in our sample. Human coders introduce a number of reliability and consistency
issues into a study, issues that must be resolved successfully to ensure the quality
of the study results. These issues are examined in more detail later in the article.
Objective #1—
Choose the Content Sample Carefully
When developing a sample set of content, it is critically important to determine
the appropriate sample size. This section explains the process we underwent
when we developed our sample.
In order to conduct our content analyses, we first needed to list the set of
countries that offer university courses in English. To identify a manageable
categorization scheme for these countries, we used Kachru’s segmentation of
world Englishes to divide the countries into three categories: the “Inner Circle,”
the “Outer Circle,” and the “Expanding Circle” [10, p. 356]. This division process
resulted in 20 different countries from the Inner and Outer Circles for analysis.
The Inner Circle countries include the United States, the United Kingdom and Ireland (counted separately), Canada, Australia, and New Zealand [10, p. 356].
In Gilsdorf’s words, “The First—or Inner—Circle nations are those for which
English has been strongly L1 (i.e., First Language) and from which English
has spread to other countries” [11, p. 368].
The Outer Circle includes Bangladesh, Ghana, Hong Kong, India, Kenya,
Malaysia, Nigeria, Pakistan, Philippines, Singapore, South Africa, Sri Lanka,
Tanzania, and Zambia [10, p. 356]. These countries typically feature English-language university Web sites, although instruction in these countries may or
may not be in English. Gilsdorf’s criteria state that “The Second—or Outer—
Circle includes those [countries] where English has taken strong root as an
intranational office language or co-language” [11, p. 368]. Although English is
not the official language of either South Africa or Hong Kong, this definition
holds true for these countries in terms of the language of instruction at national
universities.
The Expanding Circle comprises China, Egypt, Indonesia, Israel, Japan, Korea,
Nepal, Saudi Arabia, Taiwan, and Zimbabwe. Because the Expanding Circle
is more heterogeneous than the Inner and Outer Circles—in particular, English-based instruction exists in only some of the countries of the Expanding Circle—
our analysis excluded countries from the Expanding Circle.
To ensure consistent treatment of each unit of observation, this study focused
on the specific Web page through which each university presents its public face
to its audience. In addition, when we selected the unit of observation in our study
we wanted to ensure that each university Web site would present its public face
using the same Web page as every other site. Our observations suggested that this public-facing message is most prevalent on Web pages designed to provide “About Us” content; therefore, the “About Us” Web page was considered to be the unit
of observation in this study (see [12] and [13] for examples of similar units of
observation).
We quickly encountered a potential sampling concern: The number of universities in the United States is much greater than the number of universities in
any other country. Therefore, in order to compare Web pages from all countries,
the U.S. university population was refined to include only four-year (or above)
public and private schools with over 10,000 students enrolled and with a location
within any one of the 50 U.S. states. This method of population refinement follows
[14] and the advice of two content analysis specialists in the Department of
Communication at the University of Washington. This method is necessary not
only to ensure a relatively small population size for the U.S. universities (of which
there are over 3,400 different Web sites), but also to allow us to compare similar universities across countries, as many universities around the world that have Web sites also have graduate programs and large student enrollments.
To identify the population of relevant Web sites, we relied on Förster [15]. We collected a list of URLs for the U.S. universities from Conlon [17], who had compiled a more current list of U.S. university Web sites than ULinks [16].
We used no additional selection criteria to narrow down the list of international
Web sites as the list of sites was about the same size as the refined list of U.S.
university Web sites.
The final element in choosing our sample was the choice of sample size. We
set a target sample size of 384 different university Web pages, yielding 192 sites
from the Inner Circle countries and 192 sites from the Outer Circle countries. We
chose this sample size because 384 units of observation correspond to a 95% confidence level with a ±5% margin of error when estimating characteristics of a large population. In other words, in order to be 95% confident that the sample reflected the entire population within that margin, we identified 384 different units of observation.
A sample size of 384 is considered the minimum size for an effective content
analysis; however, we recommend using a larger sample size if possible in order to
account for any units of observation that must be removed during the study [1].
In our study, there were several Canadian Web sites that were written in French,
a discovery that resulted in a final sample size slightly less than 384.
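The figure of 384 follows from the standard formula for the sample size needed to estimate a proportion, assuming maximum variability (p = 0.5), a 95% confidence level (z = 1.96), and a ±5% margin of error; we show the conventional derivation here for readers who want to adapt the target to their own precision requirements:

```latex
% Sample size for estimating a proportion at 95% confidence (z = 1.96),
% worst-case variability p = 0.5, and margin of error e = 0.05.
n = \frac{z^{2}\, p\,(1-p)}{e^{2}}
  = \frac{1.96^{2} \times 0.5 \times 0.5}{0.05^{2}}
  \approx 384.16,
\quad \text{conventionally reported as } n = 384.
```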
Objective #2—
Make the Code Book as Simple and Well-Described as Possible
The code book is centrally important to every content analysis that relies on
human coders. The code book must be clear, free of repetition or overlap among
categories, and usable by more than just one person. A poorly constructed code
book will prevent researchers from obtaining reliable data as variance among
coders and overlap among categories will render the resulting data useless.
In our study, the areas of coding were generally divided into two categories:
amount of text present and the tone of the text. Three University of Washington
students, each with a background in either Technical Communication or English,
served as the coders for this project. We used a deductive, manifest coding
technique to tally textual elements such as word count, number of sentence
fragments, number of first-person pronouns, number of second-person pronouns,
and so on. However, we began our study with a code book that was much too broad in scope; we were initially trying to include too many coding categories.
For example, we were initially considering coding images as well as semantic
and syntactic textual elements. However, as we quickly discovered, images are
difficult to analyze in a repeatable manner using code book categories, and they
are also difficult to interpret consistently among multiple coders. Therefore,
we eliminated the consideration of images from the study and focused on
textual elements, thereby simplifying the code book entries and facilitating a
successful study.
However, even the process of coding an element as seemingly straightforward
as text requires significant detail about the elements to code, as well as an example
for each coding category. The following excerpt from our code book offers
an example of a few entries and the appropriate level of detail to include with
each entry.
1. Bulleted list included in text?—1 indicated that the text to be coded included a bulleted list; 0 indicated the absence of bulleted lists from the coded text. If the coder encountered a bulleted list, he or she stopped and moved on to the next page to be coded; the researchers later went back and reformatted the bulleted lists into regular text that the coders could then code in an identical manner to the pages that lacked bulleted lists.
2. Number of paragraphs coded (excluding headers, footers, signature blocks)—The total number of paragraphs coded for a specific unit of observation.
3. Number of sentences coded (including sentence fragments)—The total number of sentences coded for a specific unit of observation, including sentence fragments (which were counted under the Tone category as well).
4. Number of words coded—The total number of words coded for a specific unit of observation.
5. Number of first-person pronouns—The total number of instances of “I,” “me,” or “my” in the text of a specific unit of observation.
As this list shows, textual elements such as bulleted lists are much more difficult
to code consistently than they might seem at first glance. By providing a concise
yet informative explanation for each code book entry, we helped the coders to
understand their tasks as clearly as possible and avoid basic coding mistakes.
However, as the next section explains, a well-defined code book is no substitute
for thorough coder training and rigorous reliability measurement.
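As a supplement to the code book excerpt above, the following rough Python sketch shows how entries 2 through 5 translate into concrete quantities for a single unit of observation; the paragraph, sentence, and pronoun heuristics are naive simplifications, and the study itself relied on trained human coders rather than automation:

```python
# Rough, automated approximations of code book entries 2-5: paragraph,
# sentence, word, and first-person pronoun counts for one unit of
# observation. These heuristics are illustrative simplifications only.
import re

def code_page(text: str) -> dict:
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    first_person = sum(1 for w in words if w.lower() in {"i", "me", "my"})
    return {
        "paragraphs": len(paragraphs),
        "sentences": len(sentences),
        "words": len(words),
        "first_person_pronouns": first_person,
    }

page = "We welcome you to our university.\n\nI hope you enjoy my campus tour."
print(code_page(page))
# {'paragraphs': 2, 'sentences': 2, 'words': 13, 'first_person_pronouns': 2}
```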
Objective #3—
Train the Human Coders Thoroughly
We had three extremely dedicated and patient students who coded the Web
page content for our research study. However, before the coding process could
begin, the coders needed to identify textual elements such as passive voice clauses in a mutually consistent manner. Therefore, we conducted several training
sessions for the coders.
The training process involved meeting with the coders prior to the start of the
coding process in order to provide instruction on how to code page content and
use the coding sheet. Due to the topic of our study, these meetings were essentially
lessons in English grammar. After each meeting, we conducted intercoder reliability tests to see whether the coders were making the same decisions, or whether
their results varied when given identical content. Intercoder reliability is the
key factor that can make the difference between useful and useless data at the
end of a study; without a sufficient percentage of intercoder agreement, study
data are invalid.
The best way to test intercoder reliability prior to conducting a study is to
provide the coders with the same set of text to code. The coders analyze this text
and write their results into their own copies of the master code book. The quantity
of text should ideally be 5 to 10% of the total amount of content to be coded.
However, these percentages are difficult to achieve because of the time that coding
requires. In our study, we pre-tested the coders using the same five university
pages, an adequate number based on the survey of intercoder reliability in content
analyses provided by Neuendorf [1].
The coders must be tested prior to the start of the coding project, during the
middle of the project, and again at the conclusion of the project. We described
these tests as the pre-test, the midpoint test, and the post-test. Each test should
involve a different set of text that is not drawn from the sample pool and that is
the same for all coders within each test. In our study, we used content from some
of the university Web sites from Inner and Outer Circle countries that had not been randomly selected for the study sample.
A threshold of 70% agreement among all coders is generally regarded as the
minimum level of agreement required to produce valid data, although we preferred
to use a slightly higher rate of agreement [1]. At the start of our study, the total
percentage of intercoder agreement among all coder pairs was 75.6% for the
amount of text present, and 86.0% for the tone elements of the text coded. These
levels were high enough to allow the coding process to begin.
At the midpoint of the coding process, we ran another intercoder reliability
test to ensure that reliability remained at an acceptable level. In this test, percentage agreement across all coder pairs had increased to 83.3% for the amount of text present and 89.5% for the tone coded. Finally, we conducted a post-test of the coders that also reflected sufficiently high intercoder reliability percentages. Testing the coders in this manner enabled us to track agreement throughout the project, which in turn supported the validity of the results we derived at the end of the study.
Objective #4—
Use Robust Reliability Tests for Assessing Human Coders
Clearly, intercoder reliability is paramount to every successful content analysis
that involves human coders. The concept of reliability is simple: When two or
more people are given the same content, will they record the same data? For
example, when three people count the words in a page of text, will all three people
report the same word count? If those three people report the same word count,
the intercoder reliability is 100%; each person who counted the words produced
a result that agreed with the results of the other coders.
Unfortunately, content analyses usually involve coding decisions that are much
more complex than counting words. Therefore, even when a content analysis
involves only one human coder, it is important to establish an intercoder reliability
baseline prior to the start of a content analysis. When coders are provided with
a set of identical sample content for training purposes, it is easier to determine
whether they can make accurate coding decisions and whether they are ready to
begin coding the actual content.
The most basic measure of intercoder reliability is simple percentage agreement. This method involves tallying the number of times Coder A and Coder B
agreed with one another, and then dividing that number by the total number of
possible agreements. For example, if Coder A and B agreed on 16 out of 20
coding categories within a code book, the intercoder reliability for these two
coders would be 80%.
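Expressed as code, the calculation is simply a ratio of agreements to coding decisions; the decision lists in this Python sketch are hypothetical:

```python
# Simple percentage agreement between two coders: the share of coding
# decisions on which they agree. The decision lists are hypothetical,
# and note that this figure is not corrected for chance agreement.
def percent_agreement(coder_a: list, coder_b: list) -> float:
    agreements = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return agreements / len(coder_a)

coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
print(percent_agreement(coder_a, coder_b))  # 0.8 -> 80% agreement
```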
We strongly recommend avoiding the simple percentage agreement method
for determining reliability because this method does not account for agreements
that occur by chance. For example, two coders might agree 60% of the time, but
50% of the agreements might have occurred because the coders guessed at their
answers. The simple percentage agreement method will not expose accidental
agreements such as these, which means the resulting data gathered during the
content analysis will produce an inaccurate view of reality.
The most effective method of reliability measurement remains up for debate [3].
Perreault and Leigh provide the basis for the reliability technique we used in
our study as we were dealing with a variety of levels of data measurement
(nominal, ordinal, and interval); their work on “interjudge reliability” [18, p. 137]
is consistently cited as among the best available [1, 3].
By cross-tabulating coder results after each pre-test, we were able to determine
which elements of the code book were causing errors between coders. We were
also able to refine our training methods between pre-tests to focus on improving
intercoder agreement on those areas that were particularly difficult for the coders
to grasp. For example, passive voice is a challenging textual element to code
correctly. If we had used simple percentage agreement to determine intercoder
reliability, we would not have known which of the 40 different coding categories
was causing the lack of agreement among our coders. But by using the Perreault
and Leigh “interjudge contingency table,” we went deeper and resolved the
intercoder reliability issues before the project began [18, p. 137].
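The diagnostic value of examining agreement category by category can be suggested with the Python sketch below; the data are hypothetical, and the sketch is a simplified stand-in for, not an implementation of, the Perreault and Leigh interjudge contingency table:

```python
# Per-category agreement between two coders, used to spot which code
# book categories drive disagreement. Illustrative only; this is not
# the Perreault and Leigh reliability index, and the data are made up.
coder_a = {"passive_clauses": [2, 0, 1, 3, 1], "word_count": [310, 120, 98, 450, 200]}
coder_b = {"passive_clauses": [1, 0, 2, 3, 0], "word_count": [310, 120, 98, 450, 200]}

def agreement_by_category(a: dict, b: dict) -> dict:
    results = {}
    for category in a:
        pairs = list(zip(a[category], b[category]))
        results[category] = sum(x == y for x, y in pairs) / len(pairs)
    return results

print(agreement_by_category(coder_a, coder_b))
# {'passive_clauses': 0.4, 'word_count': 1.0} -> passive voice needs more coder training
```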
CONCLUSIONS
As a result of our content analysis, we were able to draw conclusions about
the formality of online writing, finding that writing is more formal in Outer
Circle countries than Inner Circle countries. The implications of this finding are
broad-reaching: For technical communicators who work on international cross-functional teams, or who have international clients, effective communication is
partially based on the formality of the communication. Stated differently, technical
communicators in the United States should probably attempt to use a more
formal tone when writing to clients and colleagues in India, South Africa, and
other Outer Circle countries.
Our analysis of the coded dataset also afforded the following findings [19, 20]:
• The tone of Web-based writing is significantly less formal among English-language university Web sites in Inner Circle countries than Outer Circle
countries.
• Passive voice clauses are significantly more common within the English-language university Web sites of Outer Circle countries than the Web sites
of Inner Circle countries.
• Second-person and first-person plural pronouns are used significantly more
frequently in the English-language university Web sites of Inner Circle
countries than the Web sites of Outer Circle countries.
The value of this content analysis is clear: We were able to demonstrate, using rigorous
research methodology and statistical analysis, that the worldwide use of English
online remains heterogeneous. While a global Internet culture might be emerging,
our study suggests that national identity and culture are primary factors in the
ways in which people communicate online.
In this article, we have demonstrated that technical communicators can rely on content analysis to produce practical results and
understanding from a seemingly disparate pool of information. They can uncover
themes in their usability test results that might otherwise be impossible to discern.
They can collect seemingly unrelated comments from users and discern a coherent
pattern of praise or criticism. In short, content analysis can make the job of a
technical communicator easier and more professionally rewarding.
REFERENCES
1. K. A. Neuendorf, The Content Analysis Guidebook, Sage Publications, Thousand
Oaks, California, 2003.
2. K. Krippendorff, Content Analysis: An Introduction to Its Methodology (2nd Edition),
Sage Publications, Thousand Oaks, California, 2004.
3. M. Lombard, J. Snyder-Duch, and C. C. Bracken, Practical Resources for Assessing
and Reporting Intercoder Reliability in Content Analysis Research Projects, Temple
University Website, [Online], Available: http://www.temple.edu/sct/mmc/reliability,
2004.
4. S. J. McMillan, The Microscope and the Moving Target: The Challenge of Applying
Content Analysis to the World Wide Web, Journalism and Mass Communication
Quarterly, 77:1, pp. 80-98, 2000.
5. C. P. Smith (ed.), Motivation and Personality: Handbook of Thematic Content
Analysis, Cambridge University Press, New York, 1992.
6. C. Bauer and A. Scharl, Quantitative Evaluation of Web Site Content and Structure,
Internet Research: Electronic Networking Applications and Policy, 10:1, pp. 31-43,
2000.
7. R. Dominick, Who Do You Think You Are? Personal Home Pages and Self-Presentation on the World Wide Web, Journalism and Mass Communication
Quarterly, 76:4, pp. 646-658, 1999.
8. S. Ghose and W. Dou, Interactive Functions and Their Impacts on the Appeal of
Internet Presence Sites, Journal of Advertising Research, 38:2, pp. 29-40, 1998.
9. M. D. Mehta and D. Plaza, Content Analysis of Pornographic Images Available on
the Internet, The Information Society, 13, pp. 154-161, 1997.
10. B. B. Kachru (ed.), The Other Tongue: English Across Cultures, University of Illinois
Press, Urbana, Illinois, 1992.
11. J. Gilsdorf, Standard Englishes and World Englishes: Living with a Polymorph
Business Language, The Journal of Business Communication, 39:3, pp. 364-379, 2002.
12. E. P. Bucy, A. Lang, R. F. Potter, and M. E. Grabe, Formal Features of Cyberspace:
Relationships between Web Page Complexity and Site Traffic, Journal of the
American Society for Information Science, 50:13, pp. 1246-1256, 1999.
13. L. Ha and E. L. James, Interactivity Reexamined: A Baseline Analysis of Early
Business Web Sites, Journal of Broadcasting and Electronic Media, 42:4, pp. 457-474,
1998.
14. J. M. Still, A Content Analysis of University Library Web Sites in English Speaking
Countries, Online Information Review, 25:3, pp. 160-164, 2001.
15. K. Förster, Universities Worldwide Website, [Online], Available: http://univ.cc, 2003.
16. ULinks Website, [Online], Available: http://ulinks.com/.
17. M. Conlon, American Universities, University of Florida Website, [Online], Available:
http://www.clas.ufl.edu/CLAS/american-universities.html, August 2003.
18. W. D. Perreault and L. E. Leigh, Reliability of Nominal Data Based on Qualitative
Judgments, Journal of Marketing Research, 26:2, pp. 135-148, 1989.
19. M. Evans, A. McBride, M. Queen, A. Thayer, and J. Spyridakis, Tone Formality
in English-Language University Websites Around the World, in Professional
Communication Conference Proceedings, Limerick, Ireland, IEEE PCS, pp. 846-850,
2005.
20. M. Evans, A. McBride, M. Queen, A. Thayer, and J. Spyridakis, The Effects of
Stylistic and Typographic Variables on Perceptions of Document Tone, in Professional Communication Conference Proceedings, Minneapolis, Minnesota, IEEE PCS,
pp. 300-303, 2004.
Other Articles On Communication By These Authors
A. Thayer, Offshoring, Outsourcing, and the Future of Technical Communication, in IPCC
2005 Proceedings, 2005.
A. Thayer and B. Kolko, Localization of Digital Games: The Process of Blending for
the Global Games Market, Technical Communication, 51:4, pp. 477-488, 2004.
A. Thayer, Material Culture Analysis and Professional Communication: An Art Historical Approach to Evaluating Documentation, IEEE Transactions on Professional
Communication, 47:2, pp. 144-147, June 2004.
Direct reprint requests to:
Alex Thayer
312 North 39th St., #A303
Seattle, WA 98103
e-mail: huevos@alumni.washington.edu