Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology
By: Fred D. Davis
Computer and Information Systems
Graduate School of Business Administration
University of Michigan
Ann Arbor, Michigan 48109
Abstract
Valid measurement scales for predicting user acceptance of computers are in short supply. Most subjective measures used in practice are unvalidated, and their relationship to system usage is unknown. The present research develops and validates new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance. Definitions for these two variables were used to develop scale items that were pretested for content validity and then tested for reliability and construct validity in two studies involving a total of 152 users and four application programs. The measures were refined and streamlined, resulting in two six-item scales with reliabilities of .98 for usefulness and .94 for ease of use. The scales exhibited high convergent, discriminant, and factorial validity. Perceived usefulness was significantly correlated with both self-reported current usage (r=.63, Study 1) and self-predicted future usage (r=.85, Study 2). Perceived ease of use was also significantly correlated with current usage (r=.45, Study 1) and future usage (r=.59, Study 2). In both studies, usefulness had a significantly greater correlation with usage behavior than did ease of use. Regression analyses suggest that perceived ease of use may actually be a causal antecedent to perceived usefulness, as opposed to a parallel, direct determinant of system usage. Implications are drawn for future research on user acceptance.

Keywords: User acceptance, end user computing, user measurement

ACM Categories: H.1.2, K.6.1, K.6.2, K.6.3

Introduction
Information technology offers the potential for substantially improving white collar performance (Curley, 1984; Edelman, 1981; Sharda, et al., 1988). But performance gains are often obstructed by users' unwillingness to accept and use available systems (Bowen, 1986; Young, 1984). Because of the persistence and importance of this problem, explaining user acceptance has been a long-standing issue in MIS research (Swanson, 1974; Lucas, 1975; Schultz and Slevin, 1975; Robey, 1979; Ginzberg, 1981; Swanson, 1987). Although numerous individual, organizational, and technological variables have been investigated (Benbasat and Dexter, 1986; Franz and Robey, 1986; Markus and Bjorn-Anderson, 1987; Robey and Farrow, 1982), research has been constrained by the shortage of high-quality measures for key determinants of user acceptance. Past research indicates that many measures do not correlate highly with system use (DeSanctis, 1983; Ginzberg, 1981; Schewe, 1976; Srinivasan, 1985), and the size of the usage correlation varies greatly from one study to the next depending on the particular measures used (Baroudi, et al., 1986; Barki and Huff, 1985; Robey, 1979; Swanson, 1982, 1987). The development of improved measures for key theoretical constructs is a research priority for the information systems field.

Aside from their theoretical value, better measures for predicting and explaining system use would have great practical value, both for vendors who would like to assess user demand for new design ideas, and for information systems managers within user organizations who would like to evaluate these vendor offerings. Unvalidated measures are routinely used in practice today throughout the entire spectrum of design, selection, implementation, and evaluation activities. For example, designers within vendor organizations such as IBM (Gould, et al., 1983), Xerox (Brewley, et al., 1983), and Digital Equipment Corporation (Good, et al., 1986) measure user perceptions to guide the development of new information technologies and products;
industry publications often report user surveys (e.g., Greenberg, 1984; Rushinek and Rushinek, 1986); several methodologies for software selection call for subjective user inputs (e.g., Goslar, 1986; Klein and Beck, 1987); and contemporary design principles emphasize measuring user reactions throughout the entire design process (Anderson and Olson, 1985; Gould and Lewis, 1985; Johansen and Baker, 1984; Mantel and Teorey, 1988; Norman, 1983; Shneiderman, 1987). Despite the widespread use of subjective measures in practice, little attention is paid to the quality of the measures used or how well they correlate with usage behavior. Given the low usage correlations often observed in research studies, those who base important business decisions on unvalidated measures may be getting misinformed about a system's acceptability to users.

The purpose of this research is to pursue better measures for predicting and explaining use. The investigation focuses on two theoretical constructs, perceived usefulness and perceived ease of use, which are theorized to be fundamental determinants of system use. Definitions for these constructs are formulated and the theoretical rationale for their hypothesized influence on system use is reviewed. New, multi-item measurement scales for perceived usefulness and perceived ease of use are developed, pretested, and then validated in two separate empirical studies. Correlation and regression analyses examine the empirical relationship between the new measures and self-reported indicants of system use. The discussion concludes by drawing implications for future research.
Perceived Usefulness and Perceived Ease of Use
What causes people to accept or reject information technology? Among the many variables that may influence system use, previous research suggests two determinants that are especially important. First, people tend to use or not use an application to the extent they believe it will help them perform their job better. We refer to this first variable as perceived usefulness. Second, even if potential users believe that a given application is useful, they may, at the same time, believe that the system is too hard to use and that the performance benefits of usage are outweighed by the effort of using the application. That is, in addition to usefulness, usage is theorized to be influenced by perceived ease of use.
Perceived usefulness is defined here as "the degree to which a person believes that using a particular system would enhance his or her job performance." This follows from the definition of the word useful: "capable of being used advantageously." Within an organizational context, people are generally reinforced for good performance by raises, promotions, bonuses, and other rewards (Pfeffer, 1982; Schein, 1980; Vroom, 1964). A system high in perceived usefulness, in turn, is one for which a user believes in the existence of a positive use-performance relationship.
Perceived ease of use, in contrast, refers to "the degree to which a person believes that using a particular system would be free of effort." This follows from the definition of "ease": "freedom from difficulty or great effort." Effort is a finite resource that a person may allocate to the various activities for which he or she is responsible (Radner and Rothschild, 1975). All else being equal, we claim, an application perceived to be easier to use than another is more likely to be accepted by users.
Theoretical Foundations
The theoretical importance of perceived usefulness and perceived ease of use as determinants of user behavior is indicated by several diverse lines of research. The impact of perceived usefulness on system utilization was suggested by the work of Schultz and Slevin (1975) and Robey (1979). Schultz and Slevin (1975) conducted an exploratory factor analysis of 67 questionnaire items, which yielded seven dimensions. Of these, the "performance" dimension, interpreted by the authors as the perceived "effect of the model on the manager's job performance," was most highly correlated with self-predicted use of a decision model (r=.61). Using the Schultz and Slevin questionnaire, Robey (1979) finds the performance dimension to be most correlated with two objective measures of system usage (r=.79 and .76). Building on Vertinsky, et al.'s (1975) expectancy model, Robey (1979) theorizes that: "A system that does not help people perform their jobs is not likely to be received favorably in spite of careful implementation efforts" (p. 537). Although the perceived use-performance contingency, as presented in Robey's (1979) model, parallels our definition of perceived usefulness, the use of Schultz and Slevin's (1975) performance factor to operationalize performance expectancies is problematic for several reasons: the instrument is empirically derived via exploratory factor analysis; a somewhat low ratio of sample size to items is used (2:1); four of the thirteen items have loadings below .5; and several of the items clearly fall outside the definition of expected performance improvements (e.g., "My job will be more satisfying," "Others will be more aware of what I am doing," etc.).

An alternative expectancy-theoretic model, derived from Vroom (1964), was introduced and tested by DeSanctis (1983). The use-performance expectancy was not analyzed separately from performance-reward instrumentalities and reward valences. Instead, a matrix-oriented measurement procedure was used to produce an overall index of "motivational force" that combined these three constructs. "Force" had small but significant correlations with usage of a DSS within a business simulation experiment (correlations ranged from .04 to .26). The contrast between DeSanctis's correlations and the ones observed by Robey underscores the importance of measurement in predicting and explaining use.
Self-efficacy theory
The importance of perceived ease of use is supported by Bandura's (1982) extensive research on self-efficacy, defined as "judgments of how well one can execute courses of action required to deal with prospective situations" (p. 122). Self-efficacy is similar to perceived ease of use as defined above. Self-efficacy beliefs are theorized to function as proximal determinants of behavior. Bandura's theory distinguishes self-efficacy judgments from outcome judgments, the latter being concerned with the extent to which a behavior, once successfully executed, is believed to be linked to valued outcomes. Bandura's "outcome judgment" variable is similar to perceived usefulness. Bandura argues that self-efficacy and outcome beliefs have differing antecedents and that, "In any given instance, behavior would be best predicted by considering both self-efficacy and outcome beliefs" (p. 140).

Hill, et al. (1987) find that both self-efficacy and outcome beliefs exert an influence on decisions to learn a computer language. The self-efficacy paradigm does not offer a general measure applicable to our purposes since efficacy beliefs are theorized to be situationally specific, with measures tailored to the domain under study (Bandura, 1982). Self-efficacy research does, however, provide one of several theoretical perspectives suggesting that perceived ease of use and perceived usefulness function as basic determinants of user behavior.
Cost-benefit paradigm
The cost-benefit paradigm from behavioral decision theory (Beach and Mitchell, 1978; Johnson and Payne, 1985; Payne, 1982) is also relevant to perceived usefulness and ease of use. This research explains people's choice among various decision-making strategies (such as linear compensatory, conjunctive, disjunctive, and elimination-by-aspects) in terms of a cognitive tradeoff between the effort required to employ the strategy and the quality (accuracy) of the resulting decision. This approach has been effective for explaining why decision makers alter their choice strategies in response to changes in task complexity. Although the cost-benefit approach has mainly concerned itself with unaided decision making, recent work has begun to apply the same form of analysis to the effectiveness of information display formats (Jarvenpaa, 1989; Kleinmuntz and Schkade, 1988).

Cost-benefit research has primarily used objective measures of accuracy and effort in research studies, downplaying the distinction between objective and subjective accuracy and effort. Increased emphasis on subjective constructs is warranted, however, since (1) a decision maker's choice of strategy is theorized to be based on subjective as opposed to objective accuracy and effort (Beach and Mitchell, 1978), and (2) other research suggests that subjective measures are often in disagreement with their objective counterparts (Abelson and Levi, 1985; Adelbratt and Montgomery, 1980; Wright, 1975). Introducing measures of the decision maker's own perceived costs and benefits, independent of the decision actually made, has been suggested as a way of mitigating criticisms that the cost/benefit framework is tautological (Abelson and Levi, 1985). The distinction made herein between perceived usefulness and perceived ease of use is similar to the distinction between subjective decision-making performance and effort.
Adoption of innovations
Research on the adoption of innovations also suggests a prominent role for perceived ease of use. In their meta-analysis of the relationship between the characteristics of an innovation and its adoption, Tornatzky and Klein (1982) find that compatibility, relative advantage, and complexity have the most consistent significant relationships across a broad range of innovation types. Complexity, defined by Rogers and Shoemaker (1971) as "the degree to which an innovation is perceived as relatively difficult to understand and use" (p. 154), parallels perceived ease of use quite closely. As Tornatzky and Klein (1982) point out, however, compatibility and relative advantage have both been dealt with so broadly and inconsistently in the literature as to be difficult to interpret.
Evaluation of information reports
Past research within MIS on the evaluation of information reports echoes the distinction between usefulness and ease of use made herein. Larcker and Lessig (1980) factor analyzed six items used to rate four information reports. Three items load on each of two distinct factors: (1) perceived importance, which Larcker and Lessig define as "the quality that causes a particular information set to acquire relevance to a decision maker," and the extent to which the information elements are "a necessary input for task accomplishment," and (2) perceived usableness, which is defined as the degree to which "the information format is unambiguous, clear or readable" (p. 123). These two dimensions are similar to perceived usefulness and perceived ease of use as defined above, respectively, although Larcker and Lessig refer to the two dimensions collectively as "perceived usefulness." Reliabilities for the two dimensions fall in the range of .64-.77, short of the .80 minimal level recommended for basic research. Correlations with actual use of information reports were not addressed in their study.
Channel disposition model
Swanson (1982, 1987) introduced and tested a model of "channel disposition" for explaining the choice and use of information reports. The concept of channel disposition is defined as having two components: attributed information quality and attributed access quality. Potential users are hypothesized to select and use information reports based on an implicit psychological tradeoff between information quality and associated costs of access. Swanson (1987) performed an exploratory factor analysis in order to measure information quality and access quality. A five-factor solution was obtained, with one factor corresponding to information quality (Factor #3, "value") and one to access quality (Factor #2, "accessibility"). Inspecting the items that load on these factors suggests a close correspondence to perceived usefulness and ease of use. Items such as "important," "relevant," "useful," and "valuable" load strongly on the value dimension. Thus, value parallels perceived usefulness. The fact that relevance and usefulness load on the same factor agrees with information scientists, who emphasize the conceptual similarity between the usefulness and relevance notions (Saracevic, 1975). Several of Swanson's "accessibility" items, such as "convenient," "controllable," "easy," and "unburdensome," correspond to perceived ease of use as defined above. Although the study was more exploratory than confirmatory, with no attempts at construct validation, it does agree with the conceptual distinction between usefulness and ease of use. Self-reported information channel use correlated .20 with the value dimension and .13 with the accessibility dimension.
Non-MIS studies
Outside the MIS domain, a marketing study by Hauser and Simmie (1981) concerning user perceptions of alternative communication technologies similarly derived two underlying dimensions: ease of use and effectiveness, the latter being similar to the perceived usefulness construct defined above. Both ease of use and effectiveness were influential in the formation of user preferences regarding a set of alternative communication technologies. The human-computer interaction (HCI) research community has heavily emphasized ease of use in design (Branscomb and Thomas, 1984; Card, et al., 1983; Gould and Lewis, 1985). For the most part, however, these studies have focused on objective measures of ease of use, such as task completion time and error rates. In many vendor organizations, usability testing has become a standard phase in the product development cycle, with large investments in test facilities and instrumentation. Although objective ease of use is clearly relevant to user performance given that the system is used, subjective ease of use is more relevant to the users' decision whether or not to use the system and may not agree with the objective measures (Carroll and Thomas, 1988).
Convergence of findings
There is a striking convergence among the wide range of theoretical perspectives and research studies discussed above. Although Hill, et al. (1987) examined learning a computer language, Larcker and Lessig (1980) and Swanson (1982, 1987) dealt with evaluating information reports, and Hauser and Simmie (1981) studied communication technologies, all are supportive of the conceptual and empirical distinction between usefulness and ease of use. The accumulated body of knowledge regarding self-efficacy, contingent decision behavior, and adoption of innovations provides theoretical support for perceived usefulness and ease of use as key determinants of behavior.
From multiple disciplinary vantage points, perceived usefulness and perceived ease of use are indicated as fundamental and distinct constructs that are influential in decisions to use information technology. Although certainly not the only variables of interest in explaining user behavior (for other variables, see Cheney, et al., 1986; Davis, et al., 1989; Swanson, 1988), they do appear likely to play a central role. Improved measures are needed to gain further insight into the nature of perceived usefulness and perceived ease of use, and their roles as determinants of computer use.
Scale Development and Pretest
A step-by-step process was used to develop new multi-item scales having high reliability and validity. The conceptual definitions of perceived usefulness and perceived ease of use, stated above, were used to generate 14 candidate items for each construct from past literature. Pretest interviews were then conducted to assess the semantic content of the items. Those items that best fit the definitions of the constructs were retained, yielding 10 items for each construct. Next, a field study (Study 1) of 112 users concerning two different interactive computer systems was conducted in order to assess the reliability and construct validity of the resulting scales. The scales were further refined and streamlined to six items per construct. A lab study (Study 2) involving 40 participants and two graphics systems was then conducted. Data from the two studies were then used to assess the relationship between usefulness, ease of use, and self-reported usage.
Psychometricians emphasize that the validity of a measurement scale is built in from the outset. As Nunnally (1978) points out, "Rather than test the validity of measures after they have been constructed, one should ensure the validity by the plan and procedures for construction" (p. 258). Careful selection of the initial scale items helps to assure the scales will possess "content validity," defined as "the degree to which the score or scale being used represents the concept about which generalizations are to be made" (Bohrnstedt, 1970, p. 91). In discussing content validity, psychometricians often appeal to the "domain sampling model" (Bohrnstedt, 1970; Nunnally, 1978), which assumes there is a domain of content corresponding to each variable one is interested in measuring. Candidate items representative of the domain of content should be selected. Researchers are advised to begin by formulating conceptual definitions of what is to be measured and preparing items to fit the construct definitions (Anastasi, 1986).
Following these recommendations, candidate items for perceived usefulness and perceived ease of use were generated based on their conceptual definitions, stated above, and then pretested in order to select those items that best fit the content domains. The Spearman-Brown prophecy formula was used to choose the number of items to generate for each scale. This formula estimates the number of items needed to achieve a given reliability based on the number of items and reliability of comparable existing scales. Extrapolating from past studies, the formula suggests that 10 items would be needed for each perceptual variable to achieve a reliability of at least .80 (Davis, 1986). Adding four additional items for each construct to allow for item elimination, it was decided to generate 14 items for each construct.
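For readers who wish to trace the projection, the Spearman-Brown calculation can be sketched as follows. This is a minimal Python illustration; the existing-scale reliability and length used in the example are hypothetical placeholders, not the specific figures Davis (1986) extrapolated from past studies.

# Spearman-Brown prophecy formula: estimate how many items of comparable
# quality are needed to reach a target reliability.
def spearman_brown_reliability(r_existing, k):
    """Projected reliability of a scale lengthened by a factor k."""
    return (k * r_existing) / (1.0 + (k - 1.0) * r_existing)

def items_needed(r_existing, n_existing, r_target):
    """Number of items needed to reach r_target, given an existing scale."""
    k = (r_target * (1.0 - r_existing)) / (r_existing * (1.0 - r_target))
    return k * n_existing

# Hypothetical example: a comparable 5-item scale with reliability .67
# projects to roughly 10 items for a target reliability of .80.
print(items_needed(r_existing=0.67, n_existing=5, r_target=0.80))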
The initial item pools for perceived usefulness and perceived ease of use are given in Tables 1 and 2, respectively. In preparing candidate items, 37 published research papers dealing with user reactions to interactive systems were reviewed in order to identify various facets of the constructs that should be measured (Davis, 1986). The items are worded in reference to "the electronic mail system," which is one of the two test applications investigated in Study 1, reported below. The items within each pool tend to have a lot of overlap in their meaning, which is consistent with the fact that they are intended as measures of the same underlying construct. Though different individuals may attribute slightly different meaning to particular item statements, the goal of the multi-item approach is to reduce any extraneous effects of individual items, allowing idiosyncrasies to be cancelled out by other items in order to yield a more pure indicant of the conceptual variable.

Pretest interviews were performed to further enhance content validity by assessing the correspondence between candidate items and the definitions of the variables they are intended to measure. Items that do not represent a construct's content very well can be screened out by asking individuals to rank the degree to which each item matches the variable's definition, and eliminating items receiving low rankings. In eliminating items, we want to make sure not to reduce the representativeness of the item pools. Our item pools may have excess coverage of some areas of meaning (or substrata; see Bohrnstedt, 1970) within the content domain and not enough of others.
Table 1. Initial Scale Items for Perceived Usefulness
1. My job would be difficult to perform without electronic mail.
2. Using electronic mail gives me greater control over my work.
3. Using electronic mail improves my job performance.
4. The electronic mail system addresses my job-related needs.
5. Using electronic mail saves me time.
6. Electronic mail enables me to accomplish tasks more quickly.
7. Electronic mail supports critical aspects of my job.
8. Using electronic mail allows me to accomplish more work than would otherwise be possible.
9. Using electronic mail reduces the time I spend on unproductive activities.
10. Using electronic mail enhances my effectiveness on the job.
11. Using electronic mail improves the quality of the work I do.
12. Using electronic mail increases my productivity.
13. Using electronic mail makes it easier to do my job.
14. Overall, I find the electronic mail system useful in my job.
Table 2. Initial Scale Items for Perceived Ease of Use
1. I often become confused when I use the electronic mail system.
2. I make errors frequently when using electronic mail.
3. Interacting with the electronic mail system is often frustrating.
4. I need to consult the user manual often when using electronic mail.
5. Interacting with the electronic mail system requires a lot of my mental effort.
6. I find it easy to recover from errors encountered while using electronic mail.
7. The electronic mail system is rigid and inflexible to interact with.
8. I find it easy to get the electronic mail system to do what I want it to do.
9. The electronic mail system often behaves in unexpected ways.
10. I find it cumbersome to use the electronic mail system.
11. My interaction with the electronic mail system is easy for me to understand.
12. It is easy for me to remember how to perform tasks using the electronic mail system.
13. The electronic mail system provides helpful guidance in performing tasks.
14. Overall, I find the electronic mail system easy to use.
By asking individuals to rate the similarity of items to one another, we can perform a cluster analysis to determine the structure of the substrata, remove items where excess coverage is suggested, and add items where inadequate coverage is indicated.
Pretest participants consisted of a sample of 15 experienced computer users from the Sloan School of Management, MIT, including five secretaries, five graduate students, and five members of the professional staff. In face-to-face interviews, participants were asked to perform two tasks, prioritization and categorization, which were done separately for usefulness and ease of use. For prioritization, they were first given a card containing the definition of the target construct and asked to read it. Next, they were given 13 index cards, each having one of the items for that construct written on it. The 14th or "overall" item for each construct was omitted since its wording was almost identical to the label on the definition card (see Tables 1 and 2). Participants were asked to rank the 13 cards according to how well the meaning of each statement matched the given definition of ease of use or usefulness.

For the categorization task, participants were asked to put the 13 cards into three to five categories so that the statements within a category were most similar in meaning to each other and dissimilar in meaning from those in other categories. This was an adaptation of the "own categories" procedure of Sherif and Sherif (1967). Categorization provides a simple indicant of similarity that requires less time and effort to obtain than other similarity measurement procedures such as paired comparisons. The similarity data was cluster analyzed by assigning to the same cluster items that seven or more subjects placed in the same category. The clusters are considered to be a reflection of the domain substrata for each construct and serve as a basis for assessing coverage, or representativeness, of the item pools.
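The co-categorization rule just described can be implemented directly. The Python sketch below is illustrative only (the function and variable names are hypothetical): it links any two items that seven or more participants placed in the same category and takes the connected components as clusters.

from collections import defaultdict
from itertools import combinations

def cluster_items(categorizations, threshold=7):
    """categorizations: one dict per participant, mapping item -> category label."""
    items = sorted({item for cats in categorizations for item in cats})
    # Count how many participants placed each pair of items in the same category.
    together = defaultdict(int)
    for cats in categorizations:
        for a, b in combinations(sorted(cats), 2):
            if cats[a] == cats[b]:
                together[(a, b)] += 1
    # Union-find: merge items whose co-categorization count meets the threshold.
    parent = {item: item for item in items}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), count in together.items():
        if count >= threshold:
            parent[find(a)] = find(b)
    clusters = defaultdict(list)
    for item in items:
        clusters[find(item)].append(item)
    return list(clusters.values())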
The resulting rank and cluster data are summarized in Tables 3 (usefulness) and 4 (ease of use). For perceived usefulness, notice that items fall into three main clusters. The first cluster relates to job effectiveness, the second to productivity and time savings, and the third to the importance of the system to one's job. If we eliminate the lowest-ranked items (items 1, 4, 5, and 9), we see that the three major clusters each have at least two items. Item 2, "control over work," was retained since, although it was ranked fairly low, it fell in the top 9 and may tap an important aspect of usefulness.

Looking now at perceived ease of use (Table 4), we again find three main clusters. The first relates to physical effort, while the second relates to mental effort. Selecting the six highest-priority items and eliminating the seventh provides good coverage of these two clusters. Item 11 ("understandable") was reworded to read "clear and understandable" in an effort to pick up some of the content of item 1 ("confusing"), which has been eliminated. The third cluster is somewhat more difficult to interpret but appears to be tapping perceptions of how easy a system is to learn. Remembering how to perform tasks, using the manual, and relying on system guidance are all phenomena associated with the process of learning to use a new system (Nickerson, 1981; Roberts and Moran, 1983). Further review of the literature suggests that ease of use and ease of learning are strongly related. Roberts and Moran (1983) find a correlation of .79 between objective measures of ease of use and ease of learning. Whiteside, et al. (1985) find that ease of use and ease of learning are strongly related and conclude that they are congruent. Studies of how people learn new systems suggest that learning and using are not separate, disjoint activities, but instead that people are motivated to begin performing actual work directly and try to "learn by doing" as opposed to going through user manuals or online tutorials (Carroll and Carrithers, 1984; Carroll, et al., 1985; Carroll and McKendree, 1987).

In this study, therefore, ease of learning is regarded as one substratum of the ease of use construct, as opposed to a distinct construct. Since items 4 and 13 provide a rather indirect assessment of ease of learning, they were replaced with two items that more directly get at ease of learning: "Learning to operate the electronic mail system is easy for me," and "I find it takes a lot of effort to become skillful at using electronic mail." Items 6, 9, and 2 were eliminated because they did not cluster with other items, and they received low priority rankings, which suggests that they do not fit well within the content domain for ease of use. Together with the "overall" items for each construct, this procedure yielded a 10-item scale for each construct to be empirically tested for reliability and construct validity.
Table 3. Pretest Results: Perceived Usefulness

Old Item #   Item                      Rank   New Item #   Cluster
 1           Job Difficult Without      13        --          C
 2           Control Over Work           9         2          --
 3           Job Performance             2         6          A
 4           Addresses My Needs         12        --          C
 5           Saves Me Time              11        --          B
 6           Work More Quickly           7         3          B
 7           Critical to My Job          5         4          C
 8           Accomplish More Work        6         7          B
 9           Cut Unproductive Time      10        --          B
10           Effectiveness               1         8          A
11           Quality of Work             3         1          A
12           Increase Productivity       4         5          B
13           Makes Job Easier            8         9          C
14           Useful                     NA        10          NA
Table 4. Pretest Results: Perceived Ease of Use

Old Item #   Item                        Rank   New Item #   Cluster
 1           Confusing                     7        --          B
 2           Error Prone                  13        --          --
 3           Frustrating                   3         3          B
 4           Dependence on Manual          9     (replace)      C
 5           Mental Effort                 5         7          B
 6           Error Recovery               10        --          --
 7           Rigid & Inflexible            6         5          A
 8           Controllable                  1         4          A
 9           Unexpected Behavior          11        --          --
10           Cumbersome                    2         1          A
11           Understandable                4         8          B
12           Ease of Remembering           8         6          C
13           Provides Guidance            12     (replace)      C
14           Easy to Use                  NA        10          NA
NA           Ease of Learning             NA         2          NA
NA           Effort to Become Skillful    NA         9          NA

Study 1
A field study was conducted to assess the reliability, convergent validity, discriminant validity, and factorial validity of the 10-item scales resulting from the pretest. A sample of 120 users within IBM Canada's Toronto Development Laboratory were given a questionnaire asking them to rate the usefulness and ease of use of two systems available there: PROFS electronic mail and the XEDIT file editor. The computing environment consisted of IBM mainframes accessible through 327X terminals. The PROFS electronic mail system is a simple but limited messaging facility for brief messages. (See Panko, 1988.)
The XEDIT editor is widely available on IBM systems and offers both full-screen and command-driven editing capabilities. The questionnaire asked participants to rate the extent to which they agree with each statement by circling a number from one to seven arranged horizontally beneath anchor point descriptions "Strongly Agree," "Neutral," and "Strongly Disagree." In order to ensure subject familiarity with the systems being rated, instructions asked the participants to skip over the section pertaining to a given system if they never use it. Responses were obtained from 112 participants, for a response rate of 93%. Of these 112, 109 were users of electronic mail and 75 were users of XEDIT. Subjects had an average of six months' experience with the two systems studied. Among the sample, 10 percent were managers, 35 percent were administrative staff, and 55 percent were professional staff (which included a broad mix of market analysts, product development analysts, programmers, financial analysts, and research scientists).
Reliability and validity
The perceived usefulness scale attained a Cronbach alpha reliability of .97 for both the electronic mail and XEDIT systems, while perceived ease of use achieved a reliability of .86 for electronic mail and .93 for XEDIT. When observations were pooled for the two systems, alpha was .97 for usefulness and .91 for ease of use.
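As a point of reference, the Cronbach alpha coefficient reported here can be computed from an item-response matrix as sketched below in Python; the array name is hypothetical and the snippet is illustrative rather than the exact procedure used in the study.

import numpy as np

def cronbach_alpha(responses):
    """responses: (subjects x items) array of scale ratings."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                               # number of items
    item_variances = responses.var(axis=0, ddof=1)       # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)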
Convergent and discriminant validity were tested using multitrait-multimethod (MTMM) analysis (Campbell and Fiske, 1959). The MTMM matrix contains the intercorrelations of items (methods) applied to the two different test systems (traits), electronic mail and XEDIT. Convergent validity refers to whether the items comprising a scale behave as if they are measuring a common underlying construct. In order to demonstrate convergent validity, items that measure the same trait should correlate highly with one another (Campbell and Fiske, 1959). That is, the elements in the monotrait triangles (the submatrix of intercorrelations between items intended to measure the same construct for the same system) within the MTMM matrices should be large. For perceived usefulness, the 90 monotrait-heteromethod correlations were all significant at the .05 level. For ease of use, 86 out of 90, or 95.6%, of the monotrait-heteromethod correlations were significant. Thus, our data support the convergent validity of the two scales.
Discriminant validity is concerned with the ability of a measurement item to differentiate between objects being measured. For instance, within the MTMM matrix, a perceived usefulness item applied to electronic mail should not correlate too highly with the same item applied to XEDIT. Failure to discriminate may suggest the presence of "common method variance," which means that an item is measuring methodological artifacts unrelated to the target construct, such as individual differences in the style of responding to questions (see Campbell, et al., 1967; Silk, 1971). The test for discriminant validity is that an item should correlate more highly with other items intended to measure the same trait than with either the same item used to measure a different trait or with different items used to measure a different trait (Campbell and Fiske, 1959). For perceived usefulness, 1,800 such comparisons were confirmed without exception. Of the 1,800 comparisons for ease of use there were 58 exceptions (3%). This represents an unusually high level of discriminant validity (Campbell and Fiske, 1959; Silk, 1971) and implies that the usefulness and ease of use scales possess a high concentration of trait variance and are not strongly influenced by methodological artifacts.
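A rough Python sketch of this discriminant validity check is given below. It treats systems as traits and items as methods, as in the text, and compares each monotrait-heteromethod correlation against the heterotrait correlations involving either item; with 10 items and two systems this structure yields 1,800 comparisons per construct, matching the count reported above. The data layout and names are hypothetical, and the exact comparison set used in the original analysis may differ in detail.

import numpy as np
from itertools import combinations

def discriminant_violations(ratings, systems, items):
    """ratings: dict keyed by (system, item) -> 1-D array of responses
    from the same respondents. Returns (violations, comparisons)."""
    def r(a, b):
        return np.corrcoef(ratings[a], ratings[b])[0, 1]
    violations = comparisons = 0
    for s in systems:
        others = [t for t in systems if t != s]
        for i, j in combinations(items, 2):
            monotrait = r((s, i), (s, j))   # different items, same system
            for o in others:
                for k in items:
                    # heterotrait correlations involving item i or item j
                    for hetero in (r((s, i), (o, k)), r((s, j), (o, k))):
                        comparisons += 1
                        if monotrait <= hetero:
                            violations += 1
    return violations, comparisons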
Table 5 gives a summary frequency table of the correlations comprising the MTMM matrices for usefulness and ease of use. From this table it is possible to see the separation in magnitude between monotrait and heterotrait correlations. The frequency table also shows that the heterotrait-heteromethod correlations do not appear to be substantially elevated above the heterotrait-monomethod correlations. This is an additional diagnostic suggested by Campbell and Fiske (1959) to detect the presence of method variance.

The few exceptions to convergent and discriminant validity that did occur, although not extensive enough to invalidate the ease of use scale, all involved negatively phrased ease of use items. These "reversed" items tended to correlate more with the same item used to measure a different trait than they did with other items of the same trait, suggesting the presence of common method variance. This is ironic, since reversed scales are typically used in an effort to reduce common method variance. Silk (1971) similarly observed minor departures from convergent and discriminant validity for reversed items. The five positively worded ease of use items had a reliability of .92 compared to .83 for the five negative items. This suggests an improvement in the ease of use scale may be possible with the elimination or reversal of negatively phrased items. Nevertheless, the MTMM analysis supported the ability of the 10-item scales for each construct to differentiate between systems.
Factorial validity is concerned with whether the usefulness and ease of use items form distinct constructs. A principal components analysis using oblique rotation was performed on the twenty usefulness and ease of use items. Data were pooled across the two systems, for a total of 184 observations. The results show that the usefulness and ease of use items load on distinct factors (Table 6). The multitrait-multimethod analysis and factor analysis both support the construct validity of the 10-item scales.

Table 5. Summary of Multitrait-Multimethod Analyses
(Frequency distribution of the MTMM correlations for perceived usefulness and perceived ease of use, in bands from -.20 to .99. For each construct, the columns report the same-trait/different-method correlations for electronic mail and XEDIT and the different-trait correlations for same and different methods.)
Scale refinement
In applied testing situations, it is important to keep scales as brief as possible, particularly when multiple systems are going to be evaluated. The usefulness and ease of use scales were refined and streamlined based on results from Study 1 and then subjected to a second round of empirical validation in Study 2, reported below. Applying the Spearman-Brown prophecy formula to the .97 reliability obtained for perceived usefulness indicates that a six-item scale composed of items having comparable reliability would yield a scale reliability of .94. The five positive ease of use items had a reliability of .92. Taken together, these findings from Study 1 suggest that six items would be adequate to achieve reliability levels above .9 while maintaining adequate validity levels. Based on the results of the field study, six of the 10 items for each construct were selected to form modified scales.

For the ease of use scale, the five negatively worded items were eliminated due to their apparent common method variance, leaving items 2, 4, 6, 8, and 10.
Item 6 ("easy to remember how to perform tasks"), which the pretest indicated was concerned with ease of learning, was replaced by a reversal of item 9 ("easy to become skillful"), which was specifically designed to more directly tap ease of learning. These items include two from cluster C, one each from clusters A and B, and the overall item. (See Table 4.) In order to improve representative coverage of the content domain, an additional A item was added. Of the two remaining A items (#1, Cumbersome, and #5, Rigid and Inflexible), item 5 is readily reversed to form "flexible to interact with." This item was added to form the sixth item, and the order of items 5 and 8 was permuted in order to prevent items from the same cluster (items 4 and 5) from appearing next to one another.

In order to select six items to be used for the usefulness scale, an item analysis was performed. Corrected item-total correlations were computed for each item, separately for each system studied. Average Z-scores of these correlations were used to rank the items. Items 3, 5, 6, 8, 9, and 10 were top-ranked items. Referring to the cluster analysis (Table 3), we see that this set is well-representative of the content domain, including two items from cluster A, two from cluster B, and one from cluster C, as well as the overall item (#10). The items were permuted to prevent items from the same cluster from appearing next to one another. The resulting six-item usefulness and ease of use scales are shown in the Appendix.
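The item analysis described in the preceding paragraph can be sketched as follows. This Python fragment computes corrected item-total correlations per system and averages them on the Fisher z scale before ranking, which is one reasonable reading of the "average Z-scores" step; the data layout and names are hypothetical.

import numpy as np

def rank_items(data_by_system):
    """data_by_system: dict of (subjects x items) arrays, one per system.
    Returns item indices ordered from best to worst."""
    z_sum = None
    for data in data_by_system.values():
        data = np.asarray(data, dtype=float)
        n_items = data.shape[1]
        r = np.empty(n_items)
        for j in range(n_items):
            rest = data.sum(axis=1) - data[:, j]     # scale total excluding item j
            r[j] = np.corrcoef(data[:, j], rest)[0, 1]
        z = np.arctanh(r)                            # Fisher z transform
        z_sum = z if z_sum is None else z_sum + z
    return np.argsort(z_sum / len(data_by_system))[::-1]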
Table 6. Factor Analysis of Perceived Usefulness and Ease of Use Questions: Study 1

Scale Items                       Factor 1 (Usefulness)   Factor 2 (Ease of Use)
Usefulness
 1  Quality of Work                      .80                     .00
 2  Control over Work                    .86                     .08
 3  Work More Quickly                    .79                     .02
 4  Critical to My Job                   .87                     .13
 5  Increase Productivity                .87                     .09
 6  Job Performance                      .93                     .17
 7  Accomplish More Work                 .91                    -.07
 8  Effectiveness                        .96                     .29
 9  Makes Job Easier                     .80                    -.25
10  Useful                               .74                     .23
Ease of Use
 1  Cumbersome                           .10                     .73
 2  Ease of Learning                    -.03                     .60
 3  Frustrating                          .17                     .65
 4  Controllable                        -.11                     .74
 5  Rigid & Inflexible                   .10                     .54
 6  Ease of Remembering                 -.07                     .62
 7  Mental Effort                       -.02                     .76
 8  Understandable                      -.03                     .64
 9  Effort to Be Skillful                .16                     .88
10  Easy to Use                          .23                     .72

Relationship to use
Participants were asked to self-report their degree of current usage of electronic mail and XEDIT on six-position categorical scales with boxes labeled "Don't use at all," "Use less than once each week," "Use about once each week," "Use several times a week," "Use about once each day," and "Use several times each day." Usage was significantly correlated with both perceived usefulness and perceived ease of use for both PROFS mail and XEDIT. PROFS mail usage correlated .56 with perceived usefulness and .32 with perceived ease of use. XEDIT usage correlated .68 with usefulness and .48 with ease of use. When data were pooled across systems, usage correlated .63 with usefulness and .45 with ease of use. The overall usefulness-use correlation was significantly greater than the ease of use-use correlation, as indicated by a test of dependent correlations (t181=3.69, p<.001) (Cohen and Cohen, 1975).
Usefulness and ease of use were significantly correlated with each other for electronic mail (.56), XEDIT (.69), and overall (.64). All correlations were significant at the .001 level.
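The test of dependent correlations cited above (Cohen and Cohen, 1975) compares two correlations that share a common variable. One standard formulation, Hotelling's t, is sketched below in Python; applied to the pooled Study 1 values (r=.63, r=.45, inter-correlation .64, n=184) it reproduces a t of about 3.7 on 181 degrees of freedom, consistent with the value reported above, although the exact variant used in the paper is not spelled out.

import math

def dependent_corr_t(r_xy, r_zy, r_xz, n):
    """Hotelling's t for comparing r(x, y) and r(z, y) when x, z, and y come
    from the same sample. Returns (t, degrees of freedom)."""
    det = 1.0 - r_xy**2 - r_zy**2 - r_xz**2 + 2.0 * r_xy * r_zy * r_xz
    t = (r_xy - r_zy) * math.sqrt(((n - 3) * (1.0 + r_xz)) / (2.0 * det))
    return t, n - 3

print(dependent_corr_t(r_xy=0.63, r_zy=0.45, r_xz=0.64, n=184))  # ~ (3.69, 181)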
Regression analyses were performed to assess the joint effects of usefulness and ease of use on usage. The effect of usefulness on usage, controlling for ease of use, was significant at the .001 level for electronic mail (b=.55), XEDIT (b=.69), and pooled (b=.57). In contrast, the effect of ease of use on usage, controlling for usefulness, was non-significant across the board (b=.01 for electronic mail; b=.02 for XEDIT; and b=.07 pooled). In other words, the significant pairwise correlation between ease of use and usage vanishes when usefulness is controlled for. The regression coefficients obtained for each individual system within each study were not significantly different (F3,178=1.95, n.s.). As the relationship between independent variables in a regression approaches perfect linear dependence, multicollinearity can degrade the parameter estimates obtained. Although the correlations between usefulness and ease of use are significant, according to tests for multicollinearity they are not large enough to compromise the accuracy of the estimated regression coefficients, since the standard errors of the estimates are low (.08 for both usefulness and ease of use) and the covariances between the parameter estimates are negligible (-.004) (Johnston, 1972; Mansfield and Helms, 1982). Based on partial correlation analyses, the variance in usage explained by ease of use drops by 98% when usefulness is controlled for. The regression and partial correlation results suggest that usefulness mediates the effect of ease of use on usage, i.e., that ease of use influences usage indirectly through its effect on usefulness (J.A. Davis, 1985).
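The regression and partial correlation logic of this paragraph can be sketched in Python as below. The coefficients in the paper appear to be standardized, so the sketch z-scores the variables first; all variable names are hypothetical.

import numpy as np

def standardize(v):
    v = np.asarray(v, dtype=float)
    return (v - v.mean()) / v.std(ddof=1)

def usage_regression(usage, usefulness, ease):
    """Standardized regression of usage on usefulness and ease of use."""
    y = standardize(usage)
    X = np.column_stack([np.ones(len(y)), standardize(usefulness), standardize(ease)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1], coefs[2]   # (b_usefulness, b_ease_of_use)

def partial_corr(x, y, control):
    """Correlation of x and y after removing the linear effect of control."""
    control = np.asarray(control, dtype=float)
    A = np.column_stack([np.ones(len(control)), control])
    def residual(v):
        v = np.asarray(v, dtype=float)
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return np.corrcoef(residual(x), residual(y))[0, 1]

# Share of usage variance explained by ease of use, before vs. after
# controlling for usefulness:
#   simple  = np.corrcoef(ease, usage)[0, 1] ** 2
#   partial = partial_corr(ease, usage, usefulness) ** 2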
Study 2
A lab study was performed to evaluate the six-item usefulness and ease of use scales resulting from scale refinement in Study 1. Study 2 was designed to approximate applied prototype testing or system selection situations, an important class of situations where measures of this kind are likely to be used in practice. In prototype testing and system selection contexts, prospective users are typically given a brief hands-on demonstration involving less than an hour of actually interacting with the candidate system. Thus, representative users are asked to rate the future usefulness and ease of use they would expect based on relatively little experience with the systems being rated. We are especially interested in the properties of the usefulness and ease of use scales when they are worded in a prospective sense and are based on limited experience with the target systems. Favorable psychometric properties under these circumstances would be encouraging relative to their use as early warning indicants of user acceptance (Ginzberg, 1981).

The lab study involved 40 voluntary participants who were evening MBA students at Boston University. They were paid $25 for participating in the study. They had an average of five years' work experience and were employed full-time in several industries, including education (10 percent), government (10 percent), financial (28 percent), health (18 percent), and manufacturing. They had a range of prior experience with computers in general (35 percent none or limited; 48 percent moderate; and 17 percent extensive) and personal computers in particular (35 percent none or limited; 48 percent moderate; and 15 percent extensive) but were unfamiliar with the two systems used in the study.
The study involved evaluating two IBM PC-based graphics systems: Chart-Master (by Decision Resources, Inc. of Westport, CT) and Pendraw (by Pencept, Inc. of Waltham, MA). Chart-Master is a menu-driven package that creates numerical business graphs, such as bar charts, line charts, and pie charts, based on parameters defined by the user. Through the keyboard and menus, the user inputs the data for, and defines the desired characteristics of, the chart to be made. The user can specify a wide variety of options relating to title fonts, colors, plot orientation, cross-hatching pattern, chart format, and so on. The chart can then be previewed on the screen, saved, and printed. Chart-Master is a successful commercial product that typifies the category of numeric business charting programs.

Pendraw is quite different from the typical business charting program. It uses bit-mapped graphics and a "direct manipulation" interface where users draw desired shapes using a digitizer tablet and an electronic "pen" as a stylus. The digitizer tablet supplants the keyboard as the input medium. By drawing on a tablet, the user manipulates the image, which is visible on the screen as it is being created. Pendraw offers capabilities typical of PC-based, bit-mapped "paint" programs (see Panko, 1988), allowing the user to perform freehand drawing and select from among geometric shapes, such as boxes, lines, and circles. A variety of line widths, color selections, and title fonts are available. The digitizer is also capable of performing character recognition, converting hand-printed characters into various fonts (Ward and Blesser, 1985). Pencept had positioned the Pendraw product to compete with business charting programs. The manual introduces Pendraw by guiding the user through the process of creating a numeric bar chart. Thus, a key marketing issue was the extent to which the new product would compete favorably with established brands, such as Chart-Master.

Participants were given one hour of hands-on experience with Chart-Master and Pendraw, using workbooks that were designed to follow the same instructional sequence as the user manuals for the two products, while equalizing the style of writing and eliminating value statements (e.g., "See how easy that was to do?"). Half of the participants tried Chart-Master first and half tried Pendraw first. After using each package, a questionnaire was completed.
Reliability and validity
Cronbach alpha was .98 for perceived usefulness and .94 for perceived ease of use. Convergent validity was supported, with only two of 72 monotrait-heteromethod correlations falling below significance. Ease of use item 4 (flexibility), applied to Chart-Master, was not significantly correlated with either item 3 (clear and understandable) or item 5 (easy to become skillful). This suggests that, contrary to conventional wisdom, flexibility is not always associated with ease of use. As Goodwin (1987) points out, flexibility can actually impair ease of use, particularly for novice users. With item 4 omitted, Cronbach alpha for ease of use would increase from .94 to .95. Despite the two departures from convergent validity related to ease of use item 4, no exceptions to the discriminant validity criteria occurred across a total of 720 comparisons (360 for each scale).

Factorial validity was assessed by factor analyzing the 12 scale items using principal components extraction and oblique rotation. The resulting two-factor solution is very consistent with distinct, unidimensional usefulness and ease of use scales (Table 7). Thus, as in Study 1, Study 2 reflects favorably on the convergent, discriminant, and factorial validity of the usefulness and ease of use scales.
Relationship to use
Participants were asked to self-predict their future use of Chart-Master and Pendraw. The questions were worded as follows: "Assuming Pendraw would be available on my job, I predict that I will use it on a regular basis in the future," followed by two seven-point scales, one with likely-unlikely end-point adjectives, the other, reversed in polarity, with improbable-probable end-point adjectives. Such self-predictions, or "behavioral expectations," are among the most accurate predictors available for an individual's future behavior (Sheppard, et al., 1988; Warshaw and Davis, 1985). For Chart-Master, usefulness was significantly correlated with self-predicted usage (r=.71, p<.001), but ease of use was not (r=.25, n.s.) (Table 8). Chart-Master had a non-significant correlation between ease of use and usefulness (r=.25, n.s.). For Pendraw, usage was significantly correlated with both usefulness (r=.59, p<.001) and ease of use (r=.47, p<.001). The ease of use-usefulness correlation was significant for Pendraw (r=.38, p<.001). When data were pooled across systems, usage correlated .85 (p<.001) with usefulness and .59 (p<.001) with ease of use (see Table 8). Ease of use correlated .56 (p<.001) with usefulness. The overall usefulness-use correlation was significantly greater than the ease of use-use correlation, as indicated by a test of dependent correlations (t77=4.78, p<.001) (Cohen and Cohen, 1975).
Table 7. Factor Analysis of Perceived Usefulness and Ease of Use Items: Study 2

Scale Items                       Factor 1 (Usefulness)   Factor 2 (Ease of Use)
Usefulness
 1  Work More Quickly                    .91                    -.20
 2  Job Performance                      .98                     .19
 3  Increase Productivity                .98                    -.04
 4  Effectiveness                        .94                     .13
 5  Makes Job Easier                     .95                     .07
 6  Useful                               .88                     .09
Ease of Use
 1  Easy to Learn                        .01                     .97
 2  Controllable                        -.03                     .83
 3  Clear & Understandable              -.03                     .89
 4  Flexible                             .04                     .63
 5  Easy to Become Skillful             -.01                     .91
 6  Easy to Use                          .11                     .91
Table 8. Correlations Between Perceived Usefulness, Perceived Ease of Use, and Self-Reported System Usage

                                 Usefulness    Ease of Use    Ease of Use
                                 & Usage       & Usage        & Usefulness
Study 1
  Electronic Mail (n=109)        .56***        .32***         .56***
  XEDIT (n=75)                   .68***        .48***         .69***
  Pooled (n=184)                 .63***        .45***         .64***
Study 2
  Chart-Master (n=40)            .71***        .25            .25
  Pendraw (n=40)                 .59***        .47***         .38**
  Pooled (n=80)                  .85***        .59***         .56***
Davis, et al. (1989) (n=107)
  Wave 1                         .65***        .27**          .10
  Wave 2                         .70***        .12            .23**

*** p<.001   ** p<.01   * p<.05
Table 9. Regression Analyses of the Effect of Perceived Usefulness and Perceived Ease of Use on Self-Reported Usage

                                 Usefulness    Ease of Use    R²
Study 1
  Electronic Mail (n=109)        .55***        .01            .31
  XEDIT (n=75)                   .69***        .02            .46
  Pooled (n=184)                 .57***        .07            .38
Study 2
  Chart-Master (n=40)            .69***        .08            .51
  Pendraw (n=40)                 .76***        .17            .71
  Pooled (n=80)                  .75***        .17*           .74
Davis, et al. (1989) (n=107)
  After 1 Hour                   .62***        .20***         .45
  After 14 Weeks                 .71***        -.06           .49

*** p<.001   ** p<.01   * p<.05
Regression analyses (Table 9) indicate that the effect of usefulness on usage, controlling for ease of use, was significant at the .001 level for Chart-Master (b=.69), Pendraw (b=.76), and overall (b=.75). In contrast, the effect of ease of use on usage, controlling for usefulness, was non-significant for both Chart-Master (b=.08, n.s.) and Pendraw (b=.17, n.s.) when analyzed separately, and borderline significant when observations were pooled (b=.17, p<.05). The regression coefficients obtained for Pendraw and Chart-Master were not significantly different (F3,74=.014, n.s.). Multicollinearity is ruled out since the standard errors of the estimates are low (.07 for both usefulness and ease of use) and the covariances between the parameter estimates are negligible (-.004).

Hence, as in Study 1, the significant pairwise correlations between ease of use and usage drop dramatically when usefulness is controlled for, suggesting that ease of use operates through usefulness. Partial correlation analysis indicates that the variance in usage explained by ease of use drops by 91% when usefulness is controlled for. Consistent with Study 1, these regression and partial correlation results suggest that usefulness mediates the effect of ease of use on usage. The implications of this are addressed in the following discussion.
Discussion

The purpose of this investigation was to develop and validate new measurement scales for perceived usefulness and perceived ease of use, two distinct variables hypothesized to be determinants of computer usage.
This effort was successful in several respects. The new scales were found to have strong psychometric properties and to exhibit significant empirical relationships with self-reported measures of usage behavior. Also, several new insights were generated about the nature of perceived usefulness and ease of use, and their roles as determinants of user acceptance.
The new scales were developed, refined, and streamlined in a several-step process. Explicit definitions were stated, followed by a theoretical analysis from a variety of perspectives (including expectancy theory, self-efficacy theory, behavioral decision theory, diffusion of innovations, marketing, and human-computer interaction) regarding why usefulness and ease of use are hypothesized as important determinants of system use. Based on the stated definitions, initial scale items were generated. To enhance content validity, these were pretested in a small pilot study, and several items were eliminated. The remaining items, 10 for each of the two constructs, were tested for validity and reliability in Study 1, a field study of 112 users and two systems (the PROFS electronic mail system and the XEDIT file editor). Item analysis was performed to eliminate more items and refine others, further streamlining and purifying the scales. The resulting six-item scales were subjected to further construct validation in Study 2, a lab study of 40 users and two systems: Chart-Master (a menu-driven business charting program) and Pendraw (a bit-mapped paint program with a digitizer tablet as its input device).
The new scales exhibited excellent psychometric characteristics. Convergent and discriminant validity were strongly supported by multitrait-multimethod analyses in both validation studies. These two data sets also provided strong support for factorial validity: the pattern of factor loadings confirmed the a priori structure of the two instruments, with usefulness items loading highly on one factor, ease of use items loading highly on the other factor, and small cross-factor loadings. Cronbach alpha reliability for perceived usefulness was .97 in Study 1 and .98 in Study 2. Reliability for ease of use was .91 in Study 1 and .94 in Study 2. These findings mutually confirm the psychometric strength of the new measurement scales.
As theorized, both perceived usefulness and ease of use were significantly correlated with self-reported indicants of system use. Perceived usefulness was correlated .63 with self-reported current use in Study 1 and .85 with self-predicted use in Study 2. Perceived ease of use was correlated .45 with use in Study 1 and .59 in Study 2. The same pattern of correlations is found when correlations are calculated separately for each of the two systems in each study (Table 8). These correlations, especially the usefulness-use link, compare favorably with other correlations between subjective measures and self-reported use found in the MIS literature. Swanson's (1987) "value" dimension correlated .20 with use, while his "accessibility" dimension correlated .13 with self-reported use. Correlations between "user information satisfaction" and self-reported use of .39 (Barki and Huff, 1985) and .28 (Baroudi, et al., 1986) have been reported. "Realism of expectations" has been found to be correlated .22 with objectively measured use (Ginzberg, 1981) and .43 with self-reported use (Barki and Huff, 1985). "Motivational force" was correlated .25 with system use, objectively measured (DeSanctis, 1983). Among the usage correlations reported in the literature, the .79 correlation between "performance" and use reported by Robey (1979) stands out. Recall that Robey's expectancy model was a key underpinning for the definition of perceived usefulness stated in this article.
One of the most significant findings is the relative strength of the usefulness-usage relationship compared to the ease of use-usage relationship. In both studies, usefulness was significantly more strongly linked to usage than was ease of use. Examining the joint direct effect of the two variables on use in regression analyses, this difference was even more pronounced: the usefulness-usage relationship remained large, while the ease of use-usage relationship was diminished substantially (Table 9). Multicollinearity has been ruled out as an explanation for the results using specific tests for the presence of multicollinearity. In hindsight, the prominence of perceived usefulness makes sense conceptually: users are driven to adopt an application primarily because of the functions it performs for them, and secondarily for how easy or hard it is to get the system to perform those functions. For instance, users are often willing to cope with some difficulty of use in a system that provides critically needed functionality. Although difficulty of use can discourage adoption of an otherwise useful system, no amount of ease of use can compensate for a system that does not perform a useful function.
systemthat doesnot performa useful function.
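A minimal sketch of the kind of joint regression and multicollinearity check described in this paragraph is given below. The data are hypothetical, and the variance inflation factor shown is one common diagnostic (with two predictors it depends only on their intercorrelation); it is not necessarily the specific test used in the study.

    import numpy as np

    # Hypothetical standardized scores: usefulness (U), ease of use (E), usage (Y)
    U = np.array([ 1.2,  0.4, -1.1,  0.9, -0.3, -1.1,  0.8, -0.8])
    E = np.array([ 0.9,  0.1, -0.7,  1.1,  0.2, -1.3,  0.3, -0.6])
    Y = np.array([ 1.3,  0.2, -1.0,  1.0, -0.2, -1.2,  0.6, -0.7])

    # Regress usage on both beliefs jointly (intercept, usefulness, ease of use)
    X = np.column_stack([np.ones_like(U), U, E])
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # With two predictors, VIF = 1 / (1 - r^2) of their intercorrelation
    r_ue = np.corrcoef(U, E)[0, 1]
    vif = 1.0 / (1.0 - r_ue ** 2)

    print("coefficients (intercept, U, E):", np.round(coeffs, 2))
    print("VIF for each predictor:", round(vif, 2))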
The prominence of usefulness over ease of use has important implications for designers, particularly in the human factors tradition, who have tended to overemphasize ease of use and overlook usefulness (e.g., Branscomb and Thomas, 1984; Chin, et al., 1988; Shneiderman, 1987). Thus, a major conclusion of this study is that perceived usefulness is a strong correlate of user acceptance and should not be ignored by those attempting to design or implement successful systems.
From a causal perspective, the regression results suggest that ease of use may be an antecedent to usefulness, rather than a parallel, direct determinant of usage. The significant pairwise correlation between ease of use and usage all but vanishes when usefulness is controlled for. This, coupled with a significant ease of use-usefulness correlation, is exactly the pattern one would expect if usefulness mediated between ease of use and usage (e.g., J.A. Davis, 1985). That is, the results are consistent with an ease of use --> usefulness --> usage chain of causality. These results held both for pooled observations and for each individual system (Table 8). The causal influence of ease of use on usefulness makes sense conceptually, too. All else being equal, the easier a system is to interact with, the less effort needed to operate it, and the more effort one can allocate to other activities (Radner and Rothschild, 1975), contributing to overall job performance. Goodwin (1987) also argues for this flow of causality, concluding from her analysis that: "There is increasing evidence that the effective functionality of a system depends on its usability" (p. 229). This intriguing interpretation is preliminary and should be subjected to further experimentation. If true, however, it underscores the theoretical importance of perceived usefulness.
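One simple way to probe such a mediation pattern is to compare the zero-order ease of use-usage correlation with the partial correlation that controls for usefulness; under the hypothesized chain, the latter should shrink toward zero. The sketch below illustrates this check on hypothetical data and is not a reconstruction of the regression analyses reported in Table 8.

    import numpy as np

    def partial_corr(x, y, control):
        # Correlation of x and y with the control variable partialled out
        r_xy = np.corrcoef(x, y)[0, 1]
        r_xc = np.corrcoef(x, control)[0, 1]
        r_yc = np.corrcoef(y, control)[0, 1]
        return (r_xy - r_xc * r_yc) / np.sqrt((1 - r_xc**2) * (1 - r_yc**2))

    # Hypothetical scale scores: ease of use (E), usefulness (U), and usage (Y)
    E = np.array([5.5, 4.0, 2.5, 6.0, 3.0, 5.0, 2.0, 4.5])
    U = np.array([6.0, 4.5, 3.0, 6.5, 3.5, 5.5, 2.5, 5.0])
    Y = np.array([8.0, 5.0, 2.0, 9.0, 3.0, 7.0, 1.0, 6.0])

    print("r(E, Y):", round(np.corrcoef(E, Y)[0, 1], 2))
    print("r(E, Y | U):", round(partial_corr(E, Y, U), 2))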
This investigation has limitations that should be pointed out. The generality of the findings remains to be shown by future research. The fact that similar findings were observed, with respect to both the psychometric properties of the measures and the pattern of empirical associations, across two different user populations, two different systems, and two different research settings (lab and field), provides some evidence favoring external validity.
In addition, a follow-up to this study, reported by Davis, et al. (1989), found a very similar pattern of results in a two-wave study (Tables 8 and 9). In that study, MBA student subjects were asked to fill out a questionnaire after a one-hour introduction to a word processing program, and again 14 weeks later. Usage intentions were measured at both time periods, and self-reported usage was measured at the later time period. Intentions were significantly correlated with usage (.35 and .63 for the two points in time, respectively). Unlike the results of Studies 1 and 2, Davis, et al. (1989) found a significant direct effect of ease of use on usage, controlling for usefulness, after the one-hour training session (Table 9), although this evolved into a nonsignificant effect as of 14 weeks later. In general, though, Davis, et al. (1989) found usefulness to be more influential than ease of use in driving usage behavior, consistent with the findings reported above.
Further research will shed more light on the generality of these findings. Another limitation is that the usage measures employed were self-reported as opposed to objectively measured. Not enough is currently known about how accurately self-reports reflect actual behavior. Also, since usage was reported on the same questionnaire used to measure usefulness and ease of use, the possibility of a halo effect should not be overlooked. Future research addressing the relationship between these constructs and objectively measured use is needed before claims about behavioral predictiveness can be made conclusively. These limitations notwithstanding, the results represent a promising step toward the establishment of improved measures for two important variables.
Research Implications
Future research is needed to address how other variables relate to usefulness, ease of use, and acceptance. Intrinsic motivation, for example, has received inadequate attention in MIS theories. Whereas perceived usefulness is concerned with performance as a consequence of use, intrinsic motivation is concerned with the reinforcement and enjoyment related to the process of performing a behavior per se, irrespective of whatever external outcomes are generated by such behavior (Deci, 1975). Although intrinsic motivation has been studied in the design of computer games (e.g., Malone, 1981), it is just beginning to be recognized as a potential mechanism underlying user acceptance of end-user systems (Carroll and Thomas, 1988). Currently, the role of affective attitudes is also an open issue. While some theorists argue that beliefs influence behavior only via their indirect influence on attitudes (e.g., Fishbein and Ajzen, 1975), others view beliefs and attitudes as co-determinants of behavioral intentions (e.g., Triandis, 1977), and still others view attitudes as antecedents of beliefs (e.g., Weiner, 1986). Counter to Fishbein and Ajzen's (1975) position, both Davis (1986) and Davis, et al. (1989) found that attitudes do not fully mediate the effect of perceived usefulness and perceived ease of use on behavior.
It should be emphasized that perceived usefulness and ease of use are people's subjective appraisal of performance and effort, respectively, and do not necessarily reflect objective reality. In this study, beliefs are seen as meaningful variables in their own right, which function as behavioral determinants, and are not regarded as surrogate measures of objective phenomena (as is often done in MIS research, e.g., Ives, et al., 1983; Srinivasan, 1985). Several MIS studies have observed discrepancies between perceived and actual performance (Cats-Baril and Huber, 1987; Dickson, et al., 1986; Gallupe and DeSanctis, 1988; McIntyre, 1982; Sharda, et al., 1988). Thus, even if an application would objectively improve performance, if users don't perceive it as useful, they're unlikely to use it (Alavi and Henderson, 1981). Conversely, people may overrate the performance gains a system has to offer and adopt systems that are dysfunctional. Given that this study indicates that people act according to their beliefs about performance, future research is needed to understand why performance beliefs are often in disagreement with objective reality. The possibility of dysfunctional impacts generated by information technology (e.g., Kottemann and Remus, 1987) emphasizes that user acceptance is not a universal goal and is actually undesirable in cases where systems fail to provide true performance gains.
More research is needed to understand how measures such as those introduced here perform in applied design and evaluation settings. The growing literature on design principles (Anderson and Olson, 1985; Gould and Lewis, 1985; Johansen and Baker, 1984; Mantei and Teorey, 1988; Shneiderman, 1987) calls for the use of subjective measures at various points throughout the development and implementation process, from the earliest needs assessment through concept screening and prototype testing to post-implementation assessment. The fact that the measures performed well psychometrically both after brief introductions to the target system (Study 2, and Davis, et al., 1989) and after substantial user experience with the system (Study 1, and Davis, et al., 1989) is promising concerning their appropriateness at various points in the life cycle. Practitioners generally evaluate systems not only to predict acceptability but also to diagnose the reasons underlying lack of acceptance and to formulate interventions to improve user acceptance. In this sense, research on how usefulness and ease of use can be influenced by various externally controllable factors, such as the functional and interface characteristics of the system (Benbasat and Dexter, 1986; Bewley, et al., 1983; Dickson, et al., 1986), development methodologies (Alavi, 1984), training and education (Nelson and Cheney, 1987), and user involvement in design (Baroudi, et al., 1986; Franz and Robey, 1986), is important. The new measures introduced here can be used by researchers investigating these issues.
Although there has been a growing pessimism in the field about the ability to identify measures that are robustly linked to user acceptance, the view taken here is much more optimistic. User reactions to computers are complex and multifaceted. But if the field continues to systematically investigate fundamental mechanisms driving user behavior, cultivating better and better measures and critically examining alternative theoretical models, sustainable progress is within reach.
Acknowledgements
This research was supported by grants from the MIT Sloan School of Management, IBM Canada Ltd., and The University of Michigan Business School. The author is indebted to the anonymous associate editor and reviewers for their many helpful suggestions.
References
Abelson, R.P. and Levi, A. "Decision Making and Decision Theory," in The Handbook of Social Psychology, third edition, G. Lindsay and E. Aronson (eds.), Knopf, New York, NY, 1985, pp. 231-309.
Adelbratt, T. and Montgomery, H. "Attractiveness of Decision Rules," Acta Psychologica (45), 1980, pp. 177-185.
Alavi, M. "An Analysis of the Prototyping Approach to Information Systems Development," Communications of the ACM (27:6), June 1984, pp. 556-563.
Alavi, M. and Henderson, J.C. "An Evolutionary Strategy for Implementing a Decision Support System," Management Science (27:11), November 1981, pp. 1309-1323.
Anastasi, A. "Evolving Concepts of Test Validation," Annual Review of Psychology (37), 1986, pp. 1-15.
Anderson, N.S. and Olson, J.R. (eds.) Methods for Designing Software to Fit Human Needs and Capabilities: Proceedings of the Workshop on Software Human Factors, National Academy Press, Washington, D.C., 1985.
Bandura, A. "Self-Efficacy Mechanism in Human Agency," American Psychologist (37:2), February 1982, pp. 122-147.
Barki, H. and Huff, S. "Change, Attitude to Change, and Decision Support System Success," Information and Management (9:5), December 1985, pp. 261-268.
Baroudi, J.J., Olson, M.H. and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM (29:3), March 1986, pp. 232-238.
Beach, L.R. and Mitchell, T.R. "A Contingency Model for the Selection of Decision Strategies," Academy of Management Review (3:3), July 1978, pp. 439-449.
Benbasat, I. and Dexter, A.S. "An Investigation of the Effectiveness of Color and Graphical Presentation Under Varying Time Constraints," MIS Quarterly (10:1), March 1986, pp. 59-84.
Bewley, W.L., Roberts, T.L., Schoit, D. and Verplank, W.L. "Human Factors Testing in the Design of Xerox's 8010 'Star' Office Workstation," CHI '83 Human Factors in Computing Systems, Boston, December 12-15, 1983, ACM, New York, NY, pp. 72-77.
Bohrnstedt, G.W. "Reliability and Validity Assessment in Attitude Measurement," in Attitude Measurement, G.F. Summers (ed.), Rand McNally, Chicago, IL, 1970, pp. 80-99.
Bowen, W. "The Puny Payoff from Office Computers," Fortune, May 26, 1986, pp. 20-24.
Branscomb, L.M. and Thomas, J.C. "Ease of Use: A System Design Challenge," IBM Systems Journal (23), 1984, pp. 224-235.
Campbell, D.T. and Fiske, D.W. "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin (56:9), March 1959, pp. 81-105.
Campbell, D.T., Siegman, C.R. and Rees, M.B. "Direction-of-Wording Effects in the Relationships Between Scales," Psychological Bulletin (68:5), November 1967, pp. 293-303.
Card, S.K., Moran, T.P. and Newell, A. The Psychology of Human-Computer Interaction, Erlbaum, Hillsdale, NJ, 1984.
Carroll, J.M. and Carrithers, C. "Training Wheels in a User Interface," Communications of the ACM (27:8), August 1984, pp. 800-806.
Carroll, J.M. and McKendree, J. "Interface Design Issues for Advice-Giving Expert Systems," Communications of the ACM (30:1), January 1987, pp. 14-31.
Carroll, J.M., Mack, R.L., Lewis, C.H., Grishkowsky, N.L. and Robertson, S.R. "Exploring Exploring a Word Processor," Human-Computer Interaction (1), 1985, pp. 283-307.
Carroll, J.M. and Thomas, J.C. "Fun," SIGCHI Bulletin (19:3), January 1988, pp. 21-24.
Cats-Baril, W.L. and Huber, G.P. "Decision Support Systems for Ill-Structured Problems: An Empirical Study," Decision Sciences (18:3), Summer 1987, pp. 352-372.
Cheney, P.H., Mann, R.I. and Amoroso, D.L. "Organizational Factors Affecting the Success of End-User Computing," Journal of Management Information Systems (3:1), Summer 1986, pp. 65-80.
Chin, J.P., Diehl, V.A. and Norman, K.L. "Development of an Instrument for Measuring User Satisfaction of the Human-Computer Interface," CHI '88 Human Factors in Computing Systems, Washington, D.C., May 15-19, 1988, ACM, New York, NY, pp. 213-218.
Cohen, J. and Cohen, P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Erlbaum, Hillsdale, NJ, 1975.
Curley, K.F. "Are There Any Real Benefits from Office Automation?" Business Horizons (4), July-August 1984, pp. 37-42.
Davis, F.D. "A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results," doctoral dissertation, MIT Sloan School of Management, Cambridge, MA, 1986.
Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science (35:8), August 1989, pp. 982-1003.
Davis, J.A. The Logic of Causal Order, Sage, Beverly Hills, CA, 1985.
Deci, E.L. Intrinsic Motivation, Plenum, New York, NY, 1975.
DeSanctis, G. "Expectancy Theory as an Explanation of Voluntary Use of a Decision Support System," Psychological Reports (52), 1983, pp. 247-260.
Dickson, G.W., DeSanctis, G. and McBride, D.J. "Understanding the Effectiveness of Computer Graphics for Decision Support: A Cumulative Experimental Approach," Communications of the ACM (29:1), January 1986, pp. 40-47.
Edelmann, F. "Managers, Computer Systems, and Productivity," MIS Quarterly (5:3), September 1981, pp. 1-19.
Fishbein, M. and Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA, 1975.
Franz, C.R. and Robey, D. "Organizational Context, User Involvement, and the Usefulness of Information Systems," Decision Sciences (17:3), Summer 1986, pp. 329-356.
Gallupe, R.B., DeSanctis, G. and Dickson, G.W. "Computer-Based Support for Group Problem Finding: An Empirical Investigation," MIS Quarterly (12:2), June 1988, pp. 277-296.
Ginzberg, M.J. "Early Diagnosis of MIS Implementation Failure: Promising Results and Unanswered Questions," Management Science (27:4), April 1981, pp. 459-478.
Good, M., Spine, T.M., Whiteside, J. and George, P. "User-Derived Impact Analysis as a Tool for Usability Engineering," CHI '86 Human Factors in Computing Systems, Boston, April 13-17, 1986, ACM, New York, NY, pp. 241-246.
Goodwin, N.C. "Functionality and Usability," Communications of the ACM (30:3), March 1987, pp. 229-233.
Goslar, M.D. "Capability Criteria for Marketing Decision Support Systems," Journal of Management Information Systems (3:1), Summer 1986, pp. 81-95.
Gould, J., Conti, J. and Hovanyecz, T. "Composing Letters with a Simulated Listening Typewriter," Communications of the ACM (26:4), April 1983, pp. 295-308.
Gould, J.D. and Lewis, C. "Designing for Usability: Key Principles and What Designers Think," Communications of the ACM (28:3), March 1985, pp. 300-311.
Greenberg, K. "Executives Rate Their PCs," PC World, September 1984, pp. 286-292.
Hauser, J.R. and Simmie, P. "Profit Maximizing Perceptual Positions: An Integrated Theory for the Selection of Product Features and Price," Management Science (27:1), January 1981, pp. 33-56.
Hill, T., Smith, N.D. and Mann, M.F. "Role of Efficacy Expectations in Predicting the Decision to Use Advanced Technologies: The Case of Computers," Journal of Applied Psychology (72:2), May 1987, pp. 307-313.
Ives, B., Olson, M.H. and Baroudi, J.J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.
Jarvenpaa, S.L. "The Effect of Task Demands and Graphical Format on Information Processing Strategies," Management Science (35:3), March 1989, pp. 285-303.
Johansen, R. and Baker, E. "User Needs Workshops: A New Approach to Anticipating User Needs for Advanced Office Systems," Office Technology and People (2), 1984, pp. 103-119.
Johnson, E.J. and Payne, J.W. "Effort and Accuracy in Choice," Management Science (31:4), April 1985, pp. 395-414.
Johnston, J. Econometric Methods, McGraw-Hill, New York, NY, 1972.
Klein, G. and Beck, P.O. "A Decision Aid for Selecting Among Information Systems Alternatives," MIS Quarterly (11:2), June 1987, pp. 177-186.
Kleinmuntz, D.N. and Schkade, D.A. "The Cognitive Implications of Information Displays in Computer-Supported Decision-Making," University of Texas at Austin, Graduate School of Business, Department of Management Working Paper 87/88-4-8, 1988.
Kottemann, J.E. and Remus, W.E. "Evidence and Principles of Functional and Dysfunctional DSS," OMEGA (15:2), March 1987, pp. 135-143.
Larcker, D.F. and Lessig, V.P. "Perceived Usefulness of Information: A Psychometric Examination," Decision Sciences (11:1), January 1980, pp. 121-134.
Lucas, H.C. "Performance and the Use of an Information System," Management Science (21:8), April 1975, pp. 908-919.
Malone, T.W. "Toward a Theory of Intrinsically Motivating Instruction," Cognitive Science (4), 1981, pp. 333-369.
Mansfield, E.R. and Helms, B.P. "Detecting Multicollinearity," The American Statistician (36:3), August 1982, pp. 158-160.
Mantei, M.M. and Teorey, T.J. "Cost/Benefit Analysis for Incorporating Human Factors in the Software Lifecycle," Communications of the ACM (31:4), April 1988, pp. 428-439.
Markus, M.L. and Bjorn-Anderson, N. "Power Over Users: Its Exercise by System Professionals," Communications of the ACM (30:6), June 1987, pp. 498-504.
McIntyre, S. "An Experimental Study of the Impact of Judgement-Based Marketing Models," Management Science (28:1), January 1982, pp. 17-23.
Nelson, R.R. and Cheney, P.H. "Training End Users: An Exploratory Study," MIS Quarterly (11:4), December 1987, pp. 547-559.
Nickerson, R.S. "Why Interactive Computer Systems Are Sometimes Not Used by People Who Might Benefit from Them," International Journal of Man-Machine Studies (15), 1981, pp. 469-483.
Norman, D.A. "Design Principles for Human-Computer Interfaces," CHI '83 Human Factors in Computing Systems, Boston, December 12-15, 1983, ACM, New York, NY, pp. 1-10.
Nunnally, J. Psychometric Theory, McGraw-Hill, New York, NY, 1978.
Panko, R.R. End-User Computing: Management, Applications, and Technology, Wiley, New York, NY, 1988.
Payne, J.W. "Contingent Decision Behavior," Psychological Bulletin (92:2), 1982, pp. 382-402.
Pfeffer, J. Organizations and Organization Theory, Pitman, Boston, MA, 1982.
Radner, R. and Rothschild, M. "On the Allocation of Effort," Journal of Economic Theory (10), 1975, pp. 358-376.
Roberts, T.L. and Moran, T.P. "The Evaluation of Text Editors: Methodology and Empirical Results," Communications of the ACM (26:4), April 1983, pp. 265-283.
Robey, D. "User Attitudes and Management Information System Use," Academy of Management Journal (22:3), September 1979, pp. 527-538.
Robey, D. and Farrow, D. "User Involvement in Information System Development: A Conflict Model and Empirical Test," Management Science (28:1), January 1982, pp. 73-85.
Rogers, E.M. and Shoemaker, F.F. Communication of Innovations: A Cross-Cultural Approach, Free Press, New York, NY, 1971.
Rushinek, A. and Rushinek, S.F. "What Makes Users Happy?" Communications of the ACM (29:7), July 1986, pp. 594-598.
Saracevic, T. "Relevance: A Review of and a Framework for the Thinking on the Notion in Information Science," Journal of the American Society for Information Science, November-December 1975, pp. 321-343.
Schein, E.H. Organizational Psychology, third edition, Prentice-Hall, Englewood Cliffs, NJ, 1980.
Schewe, C.D. "The Management Information System User: An Exploratory Behavioral Analysis," Academy of Management Journal (19:4), December 1976, pp. 577-590.
Schultz, R.L. and Slevin, D.P. "Implementation and Organizational Validity: An Empirical Investigation," in Implementing Operations Research/Management Science, R.L. Schultz and D.P. Slevin (eds.), American Elsevier, New York, NY, 1975, pp. 153-182.
Sharda, R., Barr, S.H. and McDonnell, J.C. "Decision Support System Effectiveness: A Review and Empirical Test," Management Science (34:2), February 1988, pp. 139-159.
Sheppard, B.H., Hartwick, J. and Warshaw, P.R. "The Theory of Reasoned Action: A Meta-Analysis of Past Research with Recommendations for Modifications and Future Research," Journal of Consumer Research (15:3), December 1988, pp. 325-343.
Sherif, M. and Sherif, C.W. "The Own Categories Approach in Attitude Research," in Readings in Attitude Theory and Measurement, M. Fishbein (ed.), Wiley, New York, NY, 1967, pp. 190-198.
Shneiderman, B. Designing the User Interface, Addison-Wesley, Reading, MA, 1987.
Silk, A.J. "Response Set and Measurement of Self-Designated Opinion Leadership," Public Opinion Quarterly (35), 1971, pp. 383-397.
Srinivasan, A. "Alternative Measures of System Effectiveness: Associations and Implications," MIS Quarterly (9:3), September 1985, pp. 243-253.
Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science (21:2), October 1974, pp. 178-188.
Swanson, E.B. "Measuring User Attitudes in MIS Research: A Review," OMEGA (10:2), March 1982, pp. 157-165.
Swanson, E.B. "Information Channel Disposition and Use," Decision Sciences (18:1), Winter 1987, pp. 131-145.
Swanson, E.B. Information System Implementation: Bridging the Gap Between Design and Utilization, Irwin, Homewood, IL, 1988.
Tornatzky, L.G. and Klein, K.J. "Innovation Characteristics and Innovation Adoption-Implementation: A Meta-Analysis of Findings," IEEE Transactions on Engineering Management (EM-29:1), February 1982, pp. 28-45.
Triandis, H.C. Interpersonal Behavior, Brooks/Cole, Monterey, CA, 1977.
Vertinsky, I., Barth, R.T. and Mitchell, V.F. "A Study of OR/MS Implementation as a Social Change Process," in Implementing Operations Research/Management Science, R.L. Schultz and D.P. Slevin (eds.), American Elsevier, New York, NY, 1975, pp. 253-272.
Vroom, V.H. Work and Motivation, Wiley, New York, NY, 1964.
Ward, J.R. and Blesser, B. "Interactive Recognition of Handprinted Characters for Computer Input," IEEE Computer Graphics and Applications, September 1985, pp. 24-37.
Warshaw, P.R. and Davis, F.D. "Disentangling Behavioral Intention and Behavioral Expectation," Journal of Experimental Social Psychology (21), May 1985, pp. 213-228.
Weiner, B. "Attribution, Emotion, and Action," in Handbook of Motivation and Cognition, R.M. Sorrentino and E.T. Higgins (eds.), Guilford, New York, NY, 1986, pp. 281-312.
Whiteside, J., Jones, S., Levy, P.S. and Wixon, D. "User Performance With Command, Menu, and Iconic Interfaces," CHI '85 Proceedings, San Francisco, April 14-18, 1985, ACM, New York, NY, pp. 185-191.
Wright, P. "Consumer Choice Strategies: Simplifying vs. Optimizing," Journal of Marketing Research (14:1), February 1975, pp. 429-433.
Young, T.R. "The Lonely Micro," Datamation (30:4), April 1984, pp. 100-114.
About the Author
Fred D. Davis is assistant professor at the University of Michigan School of Business Administration. His doctoral research at the Sloan School of Management, MIT, dealt with predicting and explaining user acceptance of computer technology. His current research interests include computer support for decision making, motivational determinants of computer acceptance, intentions and expectations in human behavior, and biased attributions of the performance impacts of information technology.
Appendix
Final Measurement Scales for Perceived Usefulness and Perceived Ease of Use

(Each item is rated on a seven-point likely/unlikely scale whose points are labeled, from the "likely" end to the "unlikely" end: extremely, quite, slightly, neither, slightly, quite, extremely.)

Perceived Usefulness
Using CHART-MASTER in my job would enable me to accomplish tasks more quickly.
Using CHART-MASTER would improve my job performance.
Using CHART-MASTER in my job would increase my productivity.
Using CHART-MASTER would enhance my effectiveness on the job.
Using CHART-MASTER would make it easier to do my job.
I would find CHART-MASTER useful in my job.

Perceived Ease of Use
Learning to operate CHART-MASTER would be easy for me.
I would find it easy to get CHART-MASTER to do what I want it to do.
My interaction with CHART-MASTER would be clear and understandable.
I would find CHART-MASTER to be flexible to interact with.
It would be easy for me to become skillful at using CHART-MASTER.
I would find CHART-MASTER easy to use.
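A minimal scoring sketch for the instrument follows. It assumes, purely for illustration, that responses are coded 1 at the "unlikely, extremely" endpoint through 7 at the "likely, extremely" endpoint and are averaged within each construct; the coding direction and the names used are assumptions of the example, not part of the published scales.

    # Hypothetical item responses (1-7) for one respondent
    usefulness_responses = [6, 7, 6, 5, 6, 7]     # the six usefulness items
    ease_of_use_responses = [5, 4, 5, 6, 5, 5]    # the six ease of use items

    def scale_score(responses):
        # Average the item responses to form a construct score
        return sum(responses) / len(responses)

    print("perceived usefulness:", scale_score(usefulness_responses))
    print("perceived ease of use:", scale_score(ease_of_use_responses))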