Trust in a Specific Technology: An Investigation of Its Components and Measures
D. H. MCKNIGHT
Eli Broad College of Business, Michigan State University, U. S. A.
M. CARTER
College of Business and Behavioral Sciences, Clemson University, U. S. A.
J. B. THATCHER
College of Business and Behavioral Sciences, Clemson University, U. S. A.
AND
P.F. CLAY
College of Business, Washington State University, U. S. A.
_________________________________________________________________________________________
Trust plays an important role in many Information Systems (IS)-enabled situations. Most IS research employs trust as a measure of interpersonal
or person-to-firm relations, such as trust in a Web vendor or a virtual team member. Although trust in other people is important, this paper
suggests that trust in the information technology (IT) itself also plays a role in shaping IT-related beliefs and behavior. To advance trust and
technology research, this paper presents a set of trust in technology construct definitions and measures. We also empirically examine these
construct measures using tests of convergent, discriminant, and nomological validity. This study contributes to the literature by providing: a) a
framework that differentiates trust in technology from trust in people, b) a theory-based set of definitions necessary for investigating different
kinds of trust in technology, and c) validated trust in technology measures useful to research and practice.
Categories and Subject Descriptors: K.8.m PERSONAL COMPUTING—Miscellaneous
General Terms: Human Factors
Additional Key Words and Phrases: Trust, Trust in Technology, Construct Development
_________________________________________________________________________________________
1. INTRODUCTION
Trust is commonly defined as an individual’s willingness to depend on another party because of the characteristics
of the other party [Rousseau et al. 1998]. This study concentrates on the latter half of this definition, the
characteristics or attributes of the trustee, usually termed ‘trust’ or ‘trusting beliefs.’ Research has found trust to be
not only useful, but also central [Golembiewski and McConkie 1975] to understanding individual behavior in
diverse domains such as work group interaction [Jarvenpaa and Leidner 1998; Mayer et al. 1995] or commercial
relationships [Arrow, 1974]. For example, Jarvenpaa and Leidner [1998] report swift trust influences how “virtual
peers” interact in globally distributed teams. Trust is crucial to almost any type of situation in which either
uncertainty exists or undesirable outcomes are possible [Fukuyama 1995; Luhmann 1979].
Within the Information Systems (IS) domain, as in other fields, trust is usually examined and defined in
terms of trust in people without regard for trust in the technology itself. IS trust research primarily examines how
trust in people affects IT-acceptance. For example, trust in specific Internet vendors [Gefen et al. 2003; Kim 2008;
Lim et al. 2006; McKnight et al. 2002; Stewart 2003] has been found to influence Web consumers’ beliefs and
behavior [Clarke 1999]. Additionally, research has used a subset of trust in people attributes—i.e., ability,
benevolence, and integrity—to study trust in web sites [Vance et al., 2008] and trust in online recommendation
agents [Wang and Benbasat 2005]. In general, Internet research provides evidence that trust in another actor (i.e., a
web vendor or person) and/or trust in an agent of another actor (i.e. a recommendation agent) influences individual
decisions to use technology. Comparatively little research directly examines trust in a technology, that is, in an IT
artifact.
_____________________________________________________________________________________________________________________
Authors’ addresses: D. H. McKnight, Department of Accounting and Information Systems, Eli Broad College of Business, Michigan State
University, East Lansing, MI 48824, U. S. A.; E-mail: mcknight@bus.msu.edu; M. Carter, Department of Management, College of Business and
Behavioral Sciences, Clemson University, Clemson, SC 29634, U. S. A.; E-mail: mscarte@clemson.edu; J. B. Thatcher, Department of
Management, College of Business and Behavioral Sciences, Clemson University, Clemson, SC 29634, U. S. A.; E-mail: jthatch@clemson.edu;
P.F. Clay, Department of Entrepreneurship and Information Systems, College of Business, Washington State University, Pullman, WA 99164,
U. S. A.; E-mail: pclay@wsu.edu
To an extent, the research on trust in recommendation agents (RA) answers the call to focus on the IT artifact
[Orlikowski and Iacono 2001]. RAs qualify as IT artifacts since they are automated online assistants that help users
decide among products. Thus, to study an RA is to study an IT artifact. However, RAs tend to imitate human
characteristics and interact with users in human-like ways. They may even look human-like. Because of this, RA
trust studies have measured trust in RAs using trust-in-people scales. Thus, the RA has not actually been studied
regarding its technological trust traits, but rather regarding its human trust traits (i.e., an RA is treated as a human
surrogate).
The primary difference between this study and prior studies is that we focus on trust in the technology itself
instead of trust in people, organizations, or human surrogates. The purpose of this study is to develop trust in
technology definitions and measures and to test how they work within a nomological network. This helps address the
problem that IT trust research focused on trust in people has not profited from additionally considering trust in the
technology itself. Just as the Technology Acceptance Model’s (TAM) perceived usefulness and ease of use
concepts directly focus on the attributes of the technology itself, so our focus is on the trust-related attributes of the
technology itself. This study more directly examines the IT artifact than past studies, answering Orlikowski and
Iacono’s call. Our belief is that by focusing on trust in the technology, we can better determine what it is about
technology that makes the technology itself trustworthy, irrespective of the people and human structures that
surround the technology. This focus should yield new insights into the nature of how trust works in a technological
context.
To gain a more nuanced view of trust’s implications for IT use, MIS research needs to examine how users’
trust in the technology itself relates to value-added post-adoption use of IT. In this study, technology is defined as
the IT software artifact, with whatever functionality is programmed into it. By focusing on the technology itself,
trust researchers can evaluate how trusting beliefs regarding specific attributes of the technology relate to individual
IT acceptance and post-adoption behavior. By so doing, research will help extend understanding of individuals’
value-added technology use after an IT “has been installed, made accessible to the user, and applied by the user in
accomplishing his/her work activities” [Jasperson et al., 2005].
In order to link trust to value-added applications of existing workplace IT, this paper advances a conceptual
definition and operationalization of trust in technology. In doing so, we explain how trust in technology differs from
trust in people. Also, we develop a model that explains how trust in technology predicts the extent to which
individuals continue using that technology. This is important because scant research has examined how technology-oriented trusting beliefs relate to behavioral beliefs that shape post-adoption technology use [Thatcher et al. 2011].
Thus, to further understanding of trust and individual technology use, this study addresses the following research
questions: What is the nomological network surrounding trust in technology? What is the influence of trust in
technology on individuals’ post-adoptive technology use behaviors?
In answering these questions, this study draws on IS literature on trust to develop a taxonomy of trust in
technology constructs that extend research on trust in the context of IT use. By distinguishing between trust in
technology and trust in people, our work affords researchers an opportunity to tease apart how beliefs towards a
vendor, such as Microsoft or Google, relate to cognitions about features of their products. By providing a literature-based conceptual and operational definition of trust in technology, our work provides research and practice with a
framework for examining the interrelationships among different forms of trust and post-adoption technology use.
2. THEORETICAL FOUNDATION
IS research has primarily examined the influence of trust in people on individual decisions to use technology. One
explanation for this is that it seems more “natural” to trust a person than to trust a technology. In fact, people present
considerable uncertainty to the trustor because of their volition (i.e., the power to choose)—something that
technology usually lacks. However, some researchers have stretched this idea so far as to doubt the viability of the
trust in technology concept: “People trust people, not technology” [Friedman et al. 2000: 36]. This extreme position
assumes that trust exists only when the trustee has volition and moral agency, i.e., the ability to do right or wrong. It
also assumes that trust is to be defined narrowly as “accepted vulnerability to another’s…ill will (or lack of good
will) toward one.” [Friedman et al. 2000: 34]. This view suggests that technology, without its own will, cannot fit
within this human-bound definition of what trust is. However, the literature on trust employs a large number of
definitions, many of which extend beyond this narrow view (see [McKnight and Chervany 1996]).
This paper creates trust in technology definitions and constructs that are more palatable to apply to
technology than the interpersonal trust constructs used in other papers that study trust in technology. Our position is
that trust situations arise when one has to make oneself vulnerable by relying on another person or object, regardless
of the trust object’s will or volition. Perhaps the most basic dictionary meaning of trust is to depend or rely on
another [McKnight and Chervany 1996]. Thus, if one can depend on an IT’s attributes under uncertainty, then trust
in technology is a viable concept. For instance, a business person can say, “I trust Blackberry®’s email system to
deliver messages to my phone.” Here the trustor relies on the Blackberry device to manage email and accepts
vulnerabilities tied to network outages or device failures. Hence, similar to trust in people, trust in an IT involves
accepting the vulnerability that it may or may not complete a task.
Different Types of Trust
Researchers (e.g. [Lewicki and Bunker 1996; Paul and McDaniel 2004]) suggest different types of trust develop as
trust relationships evolve. Initial trust rests on trustor judgments before they experience the trustee. The online trust
literature has often focused on initial trust in web vendors (see Appendix A). This research (e.g., [Gefen et al. 2003; McKnight et al. 2002; Vance et al. 2008]) finds that initial trust in web vendors influences online purchase intentions.
One form of initial trust is calculus-based trust [Lewicki and Bunker 1996], in which the trustor assesses the costs
and benefits of extending trust. This trust implies the trustor makes a rational decision about the situation before
extending trust [Coleman 1990]. By contrast, we adopt a social-psychological view of trust that concerns perceptions regarding the trustee's attributes.
Once familiar with a trustee, trustors form knowledge-based or experiential trust. Knowledge-based trust
means the trustor knows the other party well enough to predict trustee behavior in a situation [Lewicki and Bunker
1996]. This assumes a history of trustor-trustee interactions. In contrast to initial trust, which may erode quickly
when costs and benefits change, knowledge-based trust is more persistent. Because the trustor is familiar with the
eccentricities of a trustee, they are more likely to continue the relationship even when circumstances change or
performance lapses [Lewicki and Bunker 1996].
In recent years, limited IS trust research (e.g. [Pavlou 2003; Lippert 2007; Thatcher et al., 2011]) has
investigated knowledge-based trust in technology. These studies provide evidence that it is technology knowledge
that informs post-adoptive use behaviors, not cost vs. benefit assessments. The fact that other IS constructs based on
cost/benefit assessments (e.g. perceived usefulness and perceived ease of use) have been shown to have less
predictive power in a post-adoptive context [Kim and Malhotra 2005] supports this view. Thus, developing a
knowledge-based trust in technology construct may provide insight into post-adoptive technology use. Further, even though some studies examine trust based on technology attributes, they typically do not use trust in technology measures
(Appendix A). Rather, they either use trust in people measures or non-trust-related measures like website quality,
which is a distinct construct [McKnight et al. 2002]. This underscores the need for trust in technology constructs and
measures.
Contextual Condition
Whether they involve people or technology, trust situations feature risk and uncertainty (Table I). Trustors lack total
control over outcomes because they depend on either people or a technology to complete a task [Riker 1971].
Depending on another requires the trustor to accept the risk that the trustee may not fulfill expected responsibilities, whether intentionally or not. That is, under conditions of uncertainty, one relies on a person who may intentionally (i.e., by
moral choice) not fulfill their role. Alternatively, one relies on a technology which may not demonstrate the
capability (i.e., without intention) to fulfill its role. For example, when an individual trusts a cloud-based
application, such as Dropbox, to save data, one becomes exposed to risk and uncertainty tied to transmitting data
over the Internet and storing confidential data on a server. Regardless of the source of failure, technology users
assume the risk of incurring negative consequences if an application fails to act as expected [Bonoma 1976], which
is similar to the risks the trustor incurs if a human trustee fails to prove worthy of interpersonal trust. Hence, both
trust in people and trust in technology involve risk.
Table I: Conceptual Comparison—Trust in People versus Trust in Technology

Contextual Condition
  Trust in People: Risk, uncertainty, lack of total control.
  Trust in Technology: Risk, uncertainty, lack of total user control.

Object of Dependence
  Trust in People: People—in terms of moral agency and both volitional and non-volitional factors.
  Trust in Technology: Technologies—in terms of amoral and non-volitional factors only.

Nature of the Trustor's Expectations (regarding the Object of Dependence)
  Trust in People: 1. Do things for you in a competent way (ability [Mayer et al. 1995]). 2. Are caring and considerate of you; are benevolent towards you; possess the will and moral agency to help you when needed (benevolence [Mayer et al. 1995]). 3. Are consistent in 1-2 above (predictability [McKnight et al. 1998]).
  Trust in Technology: 1. Demonstrate possession of the needed functionality to do a required task. 2. Are able to provide you effective help when needed (e.g., through a help menu). 3. Operate reliably or consistently without failing.
Object of Dependence
Trust in people and trust in technology differ in terms of the nature of the object of dependence (Table I, row 2).
With the former, one trusts a person (a moral and volitional agent); with the latter, one trusts a specific technology (a
human-created artifact with a limited range of capabilities that lacks volition [i.e., will] and moral agency). For
example, when a technology user selects between relying on a human copy editor or a word processing program, their decision reflects comparisons of the copy editor's competence and willingness (reflecting volition) to take
time to carefully edit the paper versus the word processing program’s ability (reflecting no volition) to reliably
identify misspelled words or errors in grammar. Further, while a benevolent human copy editor may catch the
misuse of a correctly spelled word and make appropriate changes, a word processing program can only be expected
to do what it is programmed to do. Because technology lacks volition and moral agency, IT-related trust necessarily
reflects beliefs about a technology’s characteristics rather than its will or motives, because it has none. This does not
mean trust in technology is devoid of emotion, however. Emotion arises whenever a person's plans or goals are interrupted [Berscheid 1993]. Because we depend on less-than-reliable technology for many tasks, technology can
interrupt our plans and raise emotion. For this reason, trust in technology will often reflect positive/negative
emotions people develop towards a technology.
Nature of Trustor’s Expectations
When forming trust in people and technology, individuals consider different attributes of the object of dependence
(see Table I, bottom section). Trust (more accurately called trusting beliefs) means beliefs that a person or
technology has the attributes necessary to perform as expected in a situation [Mayer et al. 1995]. Similar to trust in
people, users’ assessments of attributes reflect their beliefs about technology’s ability to deliver on the promise of its
objective characteristics. Even if an objective technology characteristic exists, users’ beliefs about performance may
differ based on their experience or the context for its use. When comparing trust in people and technology, users
express expectations about different attributes:
• Competence vs. Functionality – With trust in people, one assesses the efficacy of the trustee to fulfill a promise
in terms of their ability or power to do something for us [Barber 1983]. For example, an experienced lawyer
might develop the capability to argue a case effectively. With technology (Table I, Nature of Trustor’s
Expectations entry 1.), users consider whether the technology delivers on the functionality promised by
providing the feature sets needed to complete a task [McKnight 2005]. For example, while a payroll system may
have the features necessary to produce a correct payroll for a set of employees, trust in a technology’s
functionality hinges on that system’s capability to properly account for various taxes and deductions. The
competence of a person and the functionality of a technology are similar because they represent users’
expectations about the trustee’s capability.
• Benevolence vs. Helpfulness – With people, one hopes they care enough to offer help when needed [Rempel et
al. 1985]. With technology (Table I, entry 2.), users sense no caring emotions because technology itself has no
moral agency. However, users do hope that a technology’s help function will provide advice necessary to
complete a task [McKnight 2005]. Evaluating helpfulness is important because, while most software has a help
function, there may be substantial variance in whether users perceive the advice offered effectively enables task
performance. Consequently, trusting beliefs in helpfulness represent users’ beliefs that the technology provides
adequate, effective, and responsive help.
• Predictability/Integrity vs. Reliability – In both cases (Table I, entry 3.), we hope trustees are consistent,
predictable or reliable [Giffin 1967; McKnight 2005]. With people, predictability refers to the degree to which
an individual can be relied upon to act in a predictable manner. This is risky due to peoples’ volition or freedom
to choose. Although technology has no volition, it still may not function consistently due to built-in flaws or
situational events that cause failures. By operating continually (i.e., with little or no downtime) or by responding
predictably to inputs (i.e. printing on command), a technology can shape users’ perceptions of consistency and
reliability.
Note that the above expectations are perceptual, rather than objective, in nature. Having delimited a role for the
knowledge-based trust in technology construct and described similarities and differences between trust in people and
trust in technology, we turn to developing definitions of different types of trust in technology. In each case, the trust
in technology definition corresponds to a trust in people definition in order to be based on the trust literature.
Table II: Comparison of Concept and Construct Definitions

General Trusting Beliefs in People and Technology

1. Propensity to Trust General Technology: The general tendency to be willing to depend on technology across a broad spectrum of situations and technologies.
   Trust-in-people counterparts: Propensity to trust [Mayer et al. 1995]: A general willingness to trust others. Disposition to trust [McKnight et al. 1998]: [The] extent [to which one] demonstrates a consistent tendency to be willing to depend on others across a broad spectrum of situations and persons.

2. Faith in General Technology: One assumes technologies are usually consistent, reliable, functional, and provide the help needed.
   Trust-in-people counterpart: Faith in humanity [McKnight et al. 1998]: Others are typically well-meaning and reliable.

3. Trusting Stance-General Technology: Regardless of what one assumes about technology generally, one presumes that one will achieve better outcomes by assuming the technology can be relied on.
   Trust-in-people counterpart: Trusting stance [McKnight et al. 1998]: Irrespective of whether people are reliable or not, one will obtain better interpersonal outcomes by dealing with people as though they are well-meaning and reliable.

Trusting Beliefs in a Context or Class of Technologies

4. Situational Normality-Technology: The belief that success with the specific technology is likely because one feels comfortable when one uses the general type of technology of which a specific technology may be an instance.
   Trust-in-people counterpart: Situational normality [McKnight et al. 1998]: The belief that success is likely because the situation is normal, favorable, or well-ordered.

5. Structural Assurance-Technology: The belief that success with the specific technology is likely because, regardless of the characteristics of the specific technology, one believes structural conditions like guarantees, contracts, support, or other safeguards exist in the general type of technology that make success likely.
   Trust-in-people counterpart: Structural assurance [McKnight et al. 1998]: The belief that success is likely because contextual conditions like promises, contracts, regulations and guarantees are in place.

Trust in Specific Trustees or Technologies

6. Trust in a Specific Technology: Reflects beliefs that a specific technology has the attributes necessary to perform as expected in a given situation in which negative consequences are possible.
   Trust-in-people counterpart: Trust [Mayer et al. 1995]: Reflects beliefs that the other party has suitable attributes for performing as expected in a specific situation... irrespective of the ability to monitor or control that other party.

7. Trusting Belief-Specific Technology-Functionality: The belief that the specific technology has the capability, functionality, or features to do for one what one needs to be done.
   Trust-in-people counterparts: Factor of trustworthiness: ability [Mayer et al. 1995]: That group of skills, competencies, and characteristics that enable a party to have influence within some specific domain. Trusting belief-competence [McKnight and Chervany 2001-2002]: One has the ability to do for the other person what the other person needs to have done. The essence of competence is efficacy.

8. Trusting Belief-Specific Technology-Helpfulness: The belief that the specific technology provides adequate and responsive help for users.
   Trust-in-people counterparts: Factor of trustworthiness: benevolence [Mayer et al. 1995]: The extent to which a trustee is believed to want to do good to the trustor, aside from an egocentric profit motive. Trusting belief-benevolence [McKnight and Chervany 2001-2002]: One cares about the welfare of the other person and is therefore motivated to act in the other person's interest... does not act opportunistically toward the other...

9. Trusting Belief-Specific Technology-Reliability: The belief that the specific technology will consistently operate properly.
   Trust-in-people counterparts: Factor of trustworthiness: integrity [Mayer et al. 1995]: The extent to which a trustee adheres to a set of principles that the trustor finds acceptable. Trusting belief-predictability [McKnight and Chervany 2001-2002]: One's actions are consistent enough that another can forecast what one will do in a given situation.
3. DEFINITIONS AND RESEARCH MODEL
Rooted in the trust in people definitions offered by Mayer et al. [1995] and McKnight et al. [1998] (Table II), we
operationalize trust in technology constructs as components of three sets of concepts: a) propensity to trust general technology, b) institution-based trust in technology, a structural concept, and c) trust in a specific technology,
referring to a person’s relationship with a particular technology (e.g., Microsoft Excel). The trust literature suggests
a causal ordering among trust constructs, such that one’s propensity to trust directly influences institution-based trust
and indirectly shapes trust in a specific technology [McKnight and Chervany 2001-2002]. Moreover, given their
specificity, we believe (differing from McKnight and Chervany) that trust in a specific technology should fully
mediate more general constructs’ influence on behavior. To evaluate trust’s nomological net, we examine trust in
technology constructs’ interrelationships as well as their relationship with two post-adoption outcomes: a) intention
to explore and b) deep structure use (see Fig. 1). This also differs from the McKnight and Chervany model.
Fig. 1. Trust in technology’s nomological net
Propensity to Trust in General Technology
Propensity to trust refers to a tendency to trust other persons (Table II, entry 1) [Rotter 1971]. The term “propensity”
suggests that it is a dynamic individual difference, not a stable, unchangeable trait [Mayer et al. 1995; Thatcher and
Perrewe 2002]. Propensity is neither trustee-specific (as are trusting beliefs in a technology), nor situation-specific
(as are institution-based trusting beliefs). When applied to trust in technology, propensity to trust suggests that one is
willing to depend on a technology across situations and technologies.
Consistent with the literature on trust in people [McKnight and Chervany 2001-2002], propensity to trust
technology is composed of two constructs —faith in general technology and trusting stance. Faith in general
technology refers to individuals’ beliefs about attributes of information technologies (IT) in general (Table II, entry
2). For example, an individual with higher faith in general technology assumes IT is usually reliable, functional, and
provides necessary help. By contrast, trusting stance-general technology refers to the degree to which a user believes
that positive outcomes will result from relying on technology (Table II, entry 3). When one has higher trusting
stance-general technology, one is likely to trust technology until provided a reason not to. Consistent with trust in
people models, we hypothesize the propensity to trust constructs (i.e., trusting stance-general technology and faith
in general technology) will predict institution-based trust in technology constructs, which will mediate their effects
on trust in specific technology.
H1a. Propensity to trust in general technology will positively affect institution-based trust in technology.
H1b. Propensity to trust in general technology will exert a mediated, positive effect on trust in a specific technology.
Institution-based Trust in Technology
Where propensity to trust directs attention to trust across situations, institution-based trust focuses on the belief that
success is likely because of supportive situations and structures tied to a specific context or a class of trustees.
Applied to technology, institution-based trust refers to beliefs about a specific class of technologies within a context.
Institution-based trust in technology is composed of situational normality and structural assurance.
Situational normality (Table II, entry 4) reflects a belief that when a situation is viewed as normal and well-ordered,
one can extend trust to something new in the situation. Situational normality-technology reflects the belief that using
a specific class of technologies in a new way is normal and comfortable within a specific setting [McKnight and
Chervany 2001-2002]. For example, one may perceive using spreadsheets to be a normal work activity, and
consequently be predisposed to feel comfortable working with spreadsheets generally. This may result in trust in a
particular spreadsheet application.
In contrast, structural assurance refers to the infrastructure supporting technology use. It means the belief
that adequate support exists—legal, contractual, or physical, such as replacing faulty equipment—to ensure
successful use of an IT (see Table II, entry 5). For example, contractual guarantees may lead one to project a
successful software implementation. Structural assurance helps individuals form confidence in software, thereby
fostering trust in a specific technology.
H2a. Institution-based trust in technology will positively affect trust in a specific technology.
H2b. Institution-based trust in technology will exert a mediated, positive effect on post-adoption technology use.
Trust (Trusting Beliefs) in a Specific Technology
In contrast to institution-based trust’s focus on classes of technology (e.g., spreadsheets), trusting beliefs in a
specific technology reflect beliefs about the favorable attributes of a specific technology (e.g., MS Excel).
Interpersonal trusting beliefs reflect judgments that the other party has suitable attributes for performing as expected
in a risky situation [Mayer et al. 1995]. McKnight et al. defined trusting beliefs in people as a perception that
another “person is benevolent, competent, honest, or predictable in a situation” [1998: 474]. In studies of initial
trust, willingness to depend is often depicted as trusting intention [McKnight et al. 2002]. We do not address
trusting intention, but focus on the trusting beliefs aspect of trust, as have other IT researchers (e.g., [Gefen et al.
2003; Wang and Benbasat 2005]). This focus is particularly appropriate in the post-adoption context, where users'
trust is based on experiential knowledge of the technology.
Trusting beliefs in a specific technology is reflected in three beliefs: functionality, helpfulness, and
reliability. A) Functionality refers to whether one expects a technology to have the capacity or capability to
complete a required task (see Table II, entry 7). B) Helpfulness excludes moral agency and volition (i.e., will) and
refers to a feature of the technology itself—the help function, i.e., is it adequate and responsive? (See Table II, entry
8) C) Reliability suggests one expects a technology to work consistently and predictably. The term reliable (i.e.,
without glitches or downtime) is probably used more frequently regarding technology than the terms predictable or
consistent [Balusek and Sircar 1998]. Hence, trusting belief-specific technology-reliability refers to the belief that
the technology will consistently operate properly (see Table II, entry 9). These three beliefs reflect the essence of
trust in a specific technology because they represent knowledge that users have cultivated by interacting with a
technology in different contexts, gathering data on its available features, and noticing how it responds to different
actions.
Trusting beliefs in a specific technology is a superordinate second-order construct. Superordinate implies
higher rank or status in the relationship between trust in a specific technology and its dimensions. This means
that trusting beliefs in a specific technology exists at a deeper level than its individual trusting beliefs [Law et al.
1998], with the relationships flowing from trusting beliefs in a specific technology to its dimensions [Edwards 2001;
Serva et al. 2005]. When individuals trust more in a specific technology, they will report corresponding increases in
trusting beliefs about functionality, helpfulness, and reliability. In contrast, if the construct were aggregate, trust in a
specific technology would be formed by its dimensions which would not necessarily covary [see Polites et al.
Forthcoming]. Moreover, we believe each dimension of trust in a specific technology is reflective, and should be
conceptualized as superordinate and reflective at each level of analysis.
Because trust in specific technology is grounded in users’ knowing the technology sufficiently well that
they can anticipate how it will respond under different conditions, this construct should be positively related to post-adoption use. We propose that users will be more willing to experiment with different features (intention to explore
[Nambisan et al. 1999]) or to use more features (deep structure use [Burton-Jones and Straub 2006]) of a technology
because they understand it well enough to believe that it has the attributes (i.e. capability, helpfulness, and
reliability) necessary to support extended use behaviors. Because trust in a specific technology is tied to specific
software applications, we anticipate it will mediate the effect of more broadly defined technology trust concepts on
post-adoption technology use.
H3a. Trust in a specific technology will positively affect individuals’ intention to explore the technology in a post-adoption context.
H3b. Trust in a specific technology will positively affect individuals’ intention to use more features of the
technology (i.e. deep structure use) in a post-adoption context.
H3c. Trust in a specific technology will mediate the effects of propensity to trust and institution-based trust in
technology on post-adoption technology use.
4. METHODOLOGY
Item Development and Pilot Study
Items were based on interpersonal trust measures that appear in several published studies (e.g., [McKnight et al. 2002]). The authors assessed face validity by comparing the items (Appendix B) to construct definitions in Table II.
An adequate match was found. Because we are familiar with how those items were developed, we felt comfortable
rewording them to specify a technology trustee instead of a person trustee. For example, the trusting belief-specific
technology-reliability items were refined by determining synonyms of consistent and predictable and by adding two
items measuring beliefs that the software won’t fail. The remaining items were adapted from McKnight et al.
[2002]. To ensure that the items mapped to the right construct, we completed several rounds of card-sorting
exercises with undergraduate students in entry-level management information systems (MIS) classes. After each
round of card sorting, items were “tweaked” and presented to a new panel of students.
After adapting the measures, a pilot study was conducted. Students enrolled in MIS classes evaluated trust
in technology items using either MS Access or MS Excel. Across samples, the measures demonstrated good
reliability (Cronbach’s alpha > 0.89), convergent validity, and discriminant validity (correlations < AVE square
roots). Given that the pattern of results was consistent across MS Access and MS Excel, we were comfortable using all
the pilot measures in our main study.
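For readers adapting these measures, the pilot reliability check is straightforward to reproduce. The sketch below is illustrative only, assuming a pandas DataFrame of Likert-scale responses; the file name and the item-to-scale mapping are hypothetical, not the study's data.

```python
# Minimal sketch of the pilot reliability check; file and item names
# below are hypothetical stand-ins for the study's instrument.
import pandas as pd
import pingouin as pg

df = pd.read_csv("pilot_responses.csv")

scales = {
    "reliability":   ["rel1", "rel2", "rel3", "rel4"],
    "functionality": ["func1", "func2", "func3"],
    "helpfulness":   ["help1", "help2", "help3"],
}

for name, items in scales.items():
    alpha, ci = pg.cronbach_alpha(data=df[items])
    # The pilot retained scales with alpha above 0.89.
    print(f"{name}: alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```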
Sample
We collected data from 376 students enrolled in general MIS courses at a university in the northwestern U.S.
Because the course is required for a cross section of business disciplines, students learned how to use Excel to
support analytical models. Students needed to engage in independent exploration of Excel’s features in order to
master the tool for use in their discipline and optimize their performance in coursework. Given the need for
exploration and deep structure use to complete assignments, this represents a useful population to validate our
conceptualization of trust in technology. After listwise deletion, sample size was 359. Table III reports sample
characteristics.
Preliminary Analysis
Preliminary analysis suggested that skewness, kurtosis, and outliers were not problems in the dataset [Tabachnick
and Fidell 1996]. Moreover, all Cronbach’s alphas for the remaining measures exceeded recommended heuristics of
0.80 [Fornell and Larcker 1981]. Given this, we assessed the constructs’ convergent and discriminant validity.
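As a point of reference, a distributional screening of this kind can be sketched as follows; the file name, columns, and the |z| > 3.29 outlier cutoff (a conventional reading of the Tabachnick and Fidell heuristics) are assumptions, not details reported by the study.

```python
# Minimal sketch of distributional screening and listwise deletion;
# file and variable names are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

for col in df.columns:
    z = (df[col] - df[col].mean()) / df[col].std()
    print(f"{col}: skew={df[col].skew():.2f}, "
          f"kurtosis={df[col].kurt():.2f}, "       # excess kurtosis
          f"outliers={(z.abs() > 3.29).sum()}")    # roughly p < .001

df = df.dropna()  # listwise deletion (376 -> 359 cases in this study)
```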
Table III: Sample Characteristics

Gender
  Male: 220 (61.3%)
  Female: 139 (38.7%)

Experience using Excel (Mean: 3.209; Median: 3.00; S.D.: 2.875)
  < 2 years: 137 (38.2%)
  >= 2 and < 5 years: 70 (30.1%)
  >= 5 years: 152 (31.8%)

Education (Mean: 2.117; Median: 2.00; S.D.: 0.645)
  High School: 47 (13.1%)
  Some College: 232 (64.6%)
  Associate’s Degree: 71 (19.8%)
  Bachelor’s Degree: 9 (2.5%)

Total Subjects: 359
Evaluating Validity vis-à-vis Perceived Usefulness and Computer Self-Efficacy
A multi-step process evaluated the measures’ convergent and discriminant validity. First, we ran principal
components analysis in SPSS using an oblique rotation. All item loadings exceeded 0.70 and cross-loadings were
less than 0.30 (see Appendix C) except for faith in general technology item 4. Since this item’s loading was close to 0.70 and its cross-loadings were below 0.24, the item was retained.
Next, to assess the discriminant validity of trust in technology measures relative to established constructs,
we conducted an exploratory factor analysis that included perceived usefulness (PU) [Davis 1989] as well as internal
and external computer self-efficacy (CSE) [Thatcher et al. 2008]. Based on this initial analysis, 10 factors were
extracted with eigenvalues greater than 1 (Appendix D), which was consistent with the scree plot. With the
exception of faith in general technology item 4, trust in technology items loaded above 0.70 and cross-loaded at less
than 0.30. Further, PU, external CSE, and internal CSE did not cross-load highly on trust in technology factors.
Thus, all items were included in the subsequent analyses.
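To illustrate, an exploratory analysis of this form could be run outside SPSS as sketched below; the library choice (factor_analyzer), file name, and item names are assumptions, while the extraction rule and loading thresholds mirror the reported procedure.

```python
# Minimal sketch of principal components extraction with oblique
# (oblimin) rotation; item names and file are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("survey_responses.csv")  # trust, PU, and CSE items

# Choose the number of factors via the Kaiser criterion (eigenvalues > 1)
probe = FactorAnalyzer(rotation=None, method="principal")
probe.fit(df)
eigenvalues, _ = probe.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())  # 10 in the reported analysis

fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)

# Flag items with a primary loading < 0.70 or a cross-loading > 0.30,
# the retention heuristics used in this study
for item, row in loadings.abs().iterrows():
    primary, cross = row.nlargest(2)
    if primary < 0.70 or cross > 0.30:
        print(f"inspect {item}: primary={primary:.2f}, cross={cross:.2f}")
```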
Evaluating the First-Order Model
Then, we used a two-step confirmatory approach to evaluate our measurement and structural models [Anderson and
Gerbing 1988]. First, we performed confirmatory factor analysis of the first-order model using EQS 6.1’s maximum
likelihood method. The first-order model demonstrated good fit (NNFI = 0.963; CFI = 0.968; RMSEA = 0.041; χ² = 731.47; df = 398; χ²/df = 1.84) [Hu and Bentler 1999]. Also, the Average Variance Extracted (AVE)
and Cronbach’s alphas exceeded recommended values (i.e., AVE > 0.50 and α > 0.70) for convergent validity
[Fornell and Larcker 1981]. Moreover, the square roots of the AVEs exceeded each off-diagonal intercorrelation
[Fornell and Larcker 1981], suggesting discriminant validity. Finally, all item loadings were above 0.707 (p<0.01),
which provides further evidence of our measures’ discriminant and convergent validity [Hair et al. 1998] (see
Appendix E).
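By way of illustration, the first-order CFA and the Fornell-Larcker computations can be sketched with the semopy library (the study itself used EQS 6.1); the lavaan-style model syntax, item names, and the column labels read from inspect() are assumptions about that library's output.

```python
# Minimal sketch of the first-order CFA plus AVE / Fornell-Larcker checks;
# semopy stands in for EQS 6.1, and all names are hypothetical.
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

desc = """
reliability   =~ rel1 + rel2 + rel3 + rel4
functionality =~ func1 + func2 + func3
helpfulness   =~ help1 + help2 + help3
"""

df = pd.read_csv("survey_responses.csv")
cfa = Model(desc)
cfa.fit(df)
print(calc_stats(cfa).T)  # chi2, df, CFI, TLI (= NNFI), RMSEA, ...

# AVE = mean squared standardized loading per construct; Fornell-Larcker
# holds when sqrt(AVE) exceeds every inter-construct correlation.
est = cfa.inspect(std_est=True)  # column names follow semopy's output
for latent in ["reliability", "functionality", "helpfulness"]:
    lam = est.loc[(est["op"] == "~") & (est["rval"] == latent), "Est. Std"]
    ave = (lam.astype(float) ** 2).mean()
    print(f"{latent}: AVE = {ave:.2f}, sqrt(AVE) = {np.sqrt(ave):.2f}")
```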
Common method bias was evaluated in the first-order measurement model by allowing items to load on an
unmeasured latent method factor in addition to their theoretical construct [Podsakoff et al. 2003]. Common method
bias is present when the introduction of the method factor causes item loadings on theoretical constructs to become
non-significant [Elangovan and Xie 1999]. In the presence of the method factor, all item loadings on theoretical
constructs remained above 0.707 and significant at p<0.01 [Hair et al. 1998]. Thus, common method bias does not
present a substantial problem.
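The unmeasured-latent-method-factor test can be expressed in the same hypothetical setup: each item loads on its theoretical construct and on a single method factor held uncorrelated with the trait factors (this assumes semopy's lavaan-style fixed-parameter syntax; the construct and item names continue the sketch above).

```python
# Minimal sketch of the Podsakoff et al. [2003] method-factor check;
# continues the hypothetical CFA specification `desc` above.
from semopy import Model

desc_cmb = desc + """
method =~ rel1 + rel2 + rel3 + rel4 + func1 + func2 + func3 + help1 + help2 + help3
method ~~ 0 * reliability
method ~~ 0 * functionality
method ~~ 0 * helpfulness
"""

cmb = Model(desc_cmb)
cmb.fit(df)
# Bias would appear as theoretical loadings turning non-significant;
# in this study all remained above 0.707 and significant at p < .01.
print(cmb.inspect())
```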
Evaluating the Second-Order Model
To evaluate our second-order trust in technology conceptualization, we compared first- and second-order
measurement models. When fit statistics are equivalent, the model with the fewest paths is considered the best
fitting model [Noar, 2003]. In the second-order model, trusting beliefs in a specific technology was modeled as a
second-order factor, reflecting individual beliefs about the reliability, functionality, and helpfulness of a specific
technology. First, we assessed model fit. NNFI (0.963) is unchanged in the second-order model, while CFI (0.967)
decreases by just .001. RMSEA (0.041) is also unchanged. The χ²/df ratio is 1.89. These findings indicate
that modeling trust in a specific technology as a second-order factor does not significantly change model fit [Bentler
and Bonett 1980]. The AVE (0.58) and Cronbach’s alpha (0.89) for trusting beliefs in specific technology exceed
recommended values, providing initial evidence for our second-order conceptualization’s convergent and
discriminant validity. Further, the square root of the AVE for the construct was greater than any intercorrelations,
indicating discriminant validity (Appendix E).
Next, we further evaluated our second-order conceptualization of trust in technology. Analogous to the relationship between reflective constructs and their measures, the first-order dimensions of trust in a specific technology (i.e., reliability, functionality, and helpfulness) are expected to covary [Edwards 2001]. Correlations among the
dimensions were within the range of r = 0.50 to r = 0.63 and statistically significant at p < .001. The strength of
these correlations indicates substantial relationships among the individual technology trusting beliefs [Williams
1968]. We also evaluated each dimension’s loadings on the trust in a specific technology construct. Reliability (β =
0.78), functionality (β = 0.76), and helpfulness (β = 0.64) load highly on the second-order factor (see Figure 2).
Taken together, these analyses suggest that our reflective second order conceptualization of trust in a specific
technology is appropriate.
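Continuing the same hypothetical specification, the superordinate model adds a single line: the second-order factor reflects onto the three dimensions, and its fit is compared against the first-order run.

```python
# Minimal sketch of the reflective second-order model; the comparison
# logic follows Noar [2003]: equivalent fit favors the fewer-path model.
from semopy import Model, calc_stats

desc_2nd = desc + """
trust_specific =~ reliability + functionality + helpfulness
"""

second = Model(desc_2nd)
second.fit(df)
print(calc_stats(second).T)  # compare CFI/TLI/RMSEA with the first-order run
```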
Evaluating the Hypothesized Structural Model
In the second step, we tested the hypotheses in a structural model in EQS 6.1 [Anderson and Gerbing 1988]. The
indices for the hypothesized model indicated good fit [Bentler and Bonett 1980; Hu and Bentler 1999]. Fit statistics
all met, or exceeded, recommended heuristics (i.e., NNFI = .961; CFI = .965; RMSEA = .051; χ² = 514.34; df = 264; χ²/df = 1.95). The model explains a large amount of variance in trust in a specific technology (R² =
0.50), as detailed in Fig. 2. With the exception of the relationship between faith in general technology and situational
normality, all proposed direct relationships were statistically significant at p<.05. Institution-based trust did not
fully mediate the effects of propensity to trust. Faith in general technology (β = 0.28, p < .001) and trusting stance (β = 0.11, p < .05) had significant direct effects on trust in specific technology. This finding suggests that to more fully
understand sources of trust in specific technology, it may be necessary to include propensity to trust as well as
institution-based trust constructs in research models.
Fig. 2: Structural Model of Relations among Trust Constructs
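For concreteness, the hypothesized structural model could be specified as below, again in semopy with hypothetical construct and item names; the two direct propensity-to-trust paths reflect the partial mediation just reported.

```python
# Minimal sketch of the structural model behind Fig. 2; all names are
# hypothetical stand-ins for the study's measures.
from semopy import Model, calc_stats

desc_struct = """
faith_gen_tech  =~ fgt1 + fgt2 + fgt3 + fgt4
trusting_stance =~ ts1 + ts2 + ts3
sit_normality   =~ sn1 + sn2 + sn3
struct_assur    =~ sa1 + sa2 + sa3
reliability     =~ rel1 + rel2 + rel3 + rel4
functionality   =~ func1 + func2 + func3
helpfulness     =~ help1 + help2 + help3
trust_specific  =~ reliability + functionality + helpfulness

# H1a: propensity constructs -> institution-based trust
sit_normality ~ faith_gen_tech + trusting_stance
struct_assur  ~ faith_gen_tech + trusting_stance
# H2a: institution-based trust -> trust in a specific technology
trust_specific ~ sit_normality + struct_assur
# direct propensity paths, estimated because mediation proved partial
trust_specific ~ faith_gen_tech + trusting_stance
"""

sem = Model(desc_struct)
sem.fit(df)
print(calc_stats(sem).T)  # NNFI/TLI, CFI, RMSEA, chi2/df
print(sem.inspect())      # path coefficients and p-values
```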
Evaluating Predictive Validity
To evaluate predictive validity, we re-estimated the model to include intention to explore (H3a) [Nambisan et al.
1999] and deep structure use (H3b) [Burton-Jones and Straub 2006] (see Appendix F for scale items). The fit indices
exceeded standards for good fit (see Appendix G) [Bentler and Bonett 1980; Hu and Bentler 1999]. As detailed in
Fig. 3, trusting beliefs in technology explains a large amount of variance in deep structure use (R² = 0.45) and a moderate amount of variance in intention to explore (R² = 0.22) [Cohen 1988]. Consistent with our hypotheses,
propensity to trust and institution-based trust constructs did not have significant direct effects on post-adoptive IT
use. This suggests that in the post-adoptive context, an individual’s willingness to engage in value-added
technology use is primarily based on their trust in a specific technology’s attributes.
Fig. 3: Structural Model predicting Post-Adoptive Use Intentions
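The predictive-validity step then re-estimates this model with the two outcomes appended; the outcome item names below are, again, hypothetical.

```python
# Minimal sketch of the re-estimated model with post-adoption outcomes
# (H3a, H3b); extends the hypothetical structural specification above.
desc_pred = desc_struct + """
intent_explore =~ ie1 + ie2 + ie3
deep_structure =~ dsu1 + dsu2 + dsu3
intent_explore ~ trust_specific   # H3a
deep_structure ~ trust_specific   # H3b
"""

pred = Model(desc_pred)
pred.fit(df)
print(calc_stats(pred).T)  # fit indices for the predictive model
```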
5. DISCUSSION
While many studies have investigated ties from initial trust to users’ initial decisions about technology, little
attention has been given to how knowledge-based trust shapes value-added post-adoption IT use. Such
research is important, because managers seek to extract value long after an IT is introduced. By developing trust in
technology constructs, as well as demonstrating the internal and external nomological validity of their measures, this
study contributes to the research and practice in three ways. First, it provides an attribute-based framework for
distinguishing between trust in people and trust in technology. Second, it offers literature-based definitions required
for investigating forms of trust in technology. Third, it develops parsimonious measures.
Rooted in trust in people literature, this study advances IS trust research by distinguishing between
knowledge-based trust in technology and initial trust. Where initial technology decisions may reflect trust derived
from assumptions or estimates of cost and benefits, we suggest that an individual’s experiences with a specific
technology build knowledge-based trust that influences post-adoption technology use. As such, it represents an
opportunity for developing theories for how trust in a specific technology guides users’ value-added applications of
existing IT.
Our research sheds light on how initial and knowledge-based trust in a specific technology may differ.
Consistent with trust transference [Doney et al. 1998; Stewart 2003], initial trust suggests individuals extend trust to
an unknown trustee when the individual associates the specific trustee with an institutional mechanism or familiar
context (e.g., structural assurances). For example, seal programs can contribute to individuals forming initial trust
and intention to use a website; but initial people trust can erode because it is assumptional or indirect in nature
[McKnight et al. 1998]. By contrast, our findings imply that when individuals rely on knowledge-based trust, they
draw less on institution-based beliefs, and make decisions based on trusting beliefs about characteristics of the
technology itself. For example, intentions to explore additional features of MS Excel may rest on an individual’s
trusting beliefs about the actual helpfulness of the tutorials embedded in the software. While this study does not
directly compare initial and knowledge-based trust, it suggests a need for future research that employs longitudinal
methods to compare their formation and implications for post-adoption technology use.
Although we have demonstrated that trusting beliefs in technology differ from perceived usefulness and CSE,
future research should explore the relationship between these forms of object-specific technology beliefs. We
suspect that in the post-adoption context, trusting beliefs in technology may complement models like TAM, because
it adds an experiential trust component that is not currently captured in frequently studied beliefs such as perceived
usefulness and perceived ease of use. Further, we believe that trusting beliefs in technology's influence may be more pervasive than that of TAM constructs. Because perceived usefulness reflects a cost-benefit analysis which may
change with the context, its influence is likely more fragile across contexts than knowledge-based trust [Lewicki and
Bunker 1996]. Moreover, loyalty research suggests that knowledge-based trust often engenders commitment toward
a technology [Chow and Holden 1997; Zins 2001]. Such a commitment could make a specific technology appear
more attractive than reasonable alternatives, even where an alternative seems more useful or easy to use. Hence,
other constructs’ influence, such as perceived usefulness and perceived ease of use, may be subsumed by
experiential knowledge-based trust over time. In future research, it would be interesting to examine the relative
influence of trust in technology constructs vis-à-vis other constructs on use continuance or innovation over time.
To advance both trust and technology research, future studies should seek to better understand the relationship between distinct forms of trust. Also, research is necessary that investigates the conditions under which
knowledge-based trust develops and how it relates to trust in other actors, such as IT support staff, or vendors
[Thatcher et al, 2011]. For example, research is necessary that examines the dynamic interplay between users’ trust
in human agents that built a system, human agents that introduce a system, those that support a system, and the
technology itself. By examining how trust in different elements of the context and IT interact, one can form a
broader understanding of how trust in socio-technical systems shapes value-added technology use.
Additionally, while trust in human agents may help foster initial trust to influence users’ initial IT use
decisions, once formed, knowledge-based trust in technology may provide a plausible explanation for why people
persist in using products sold by unpopular vendors. For example, in the post-adoption context, users may continue
to use Microsoft products because they distinguish between Microsoft the IT vendor and applications such as MS
Office. A company may purchase a product from Microsoft because it trusts the vendor. But once the company has
the product in-house, trust in the vendor may have little predictive power because users mainly rely on the product
itself, not the vendor. Thus, one will continue (or discontinue) using the technical product based on trust in the product itself, formed through use experience (e.g., a user who dislikes Microsoft Corporation may still use MS Office due
to its attributes). Hence, the practical problem this paper addresses is that trust in the vendor often does not influence
continued usage. This study provides a set of trust in technology constructs that enable one to predict continued use
when trust in the vendor adds little value to the equation. To give practitioners actionable guidelines, it is important
for additional research to tease out the underlying dynamics of these relationships.
In considering forms of trust, researchers may also consider emotions' influence on value-added post-adoption IT use. Komiak and Benbasat [2006] argue that because decisions to use a technology involve both
reasoning and feeling, cognitive beliefs alone are inadequate to explain trusting decisions. Given the post-adoption
context of this study, we focused on understanding the influence of experiential knowledge on trust in technology.
However, we suspect that integrating emotional aspects with cognitive beliefs offers a further opportunity to extend
understanding of individual IT use. To that end, having established empirically some impacts of trust in technology,
we plan to investigate whether mood primes trusting beliefs about a technology and/or moderates the relationship
between trust in technology and post-adoptive behaviors.
6. LIMITATIONS
One study limitation is that we employ only one type of technology to evaluate the model. We note that in one early
pilot study (done before the one reported here) we used Oracle Developer as the trusted technology and found similar results. While our work is grounded in the interpersonal trust literature, future research should explore the
extent to which the findings presented here are transferable to other technology types. This is important, because
while users might readily distinguish between trust in a traditional IT vendor, such as Microsoft, and its products,
such as MS Excel, they may not make such distinctions with social technologies like Facebook or cloud-based
technologies like Gmail.
Across contexts and technologies, researchers will need to take care when adapting our measures to
different technologies. Our theoretical conceptualization of trust in technology accurately describes how individuals
approach specific technologies. However, when operationalizing trusting beliefs (e.g., reliability, functionality, and
helpfulness), researchers should carefully consider the meaning of these terms relative to a specific form of IT. For
example, a researcher may need to use a different set of items to describe the functionality of a sophisticated
software application such as Adobe Dreamweaver when compared to items for a DVD player. Hence, we urge
researchers to consider the context as well as the technology when adapting our measures.
Additionally, some might argue that using a student sample limits our model's generalizability.
However, research suggests that students do not differ significantly from others in their technology use decisions
[Sen et al. 2006]. Moreover, our subjects were active users of MS Excel, and were thus an appropriate sample.
Nevertheless, researchers aiming to explore the influence of trust in technology may wish to use non-student
samples to increase generalizability.
7. CONCLUSION
Trust is an important concept, and trust in technology needs to be explored further, since our study finds it affects
value-added, post-adoption technology use. To this end, this paper provides a set of trust in technology constructs
and measures that can aid researchers. Because one is more likely to explore and use more features of a technology
if one trusts it, trust in technology may complement existing models examining post-adoption IT use. Also, theory
suggests that the influence of trust constructs may vary over time. This warrants further investigation because it
implies that different managerial interventions may be necessary to promote initial vs. post-adoption use. Finally, to
provide practitioners with actionable guidelines for interventions, it is important to tease out the relationship
between trust in people and trust in technology. For example, does trust in technology mediate the influence of trust
in people who build, advocate the use of, or support a specific technology? Or does trust in technology’s influence
depend on trust in people? Future research should explore these questions.
REFERENCES
ANDERSON, J. C. AND GERBING, D. W. 1988. Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 3, 411-423.
ARROW, K. J. 1974. The limits of organization. Norton, New York.
BALUSEK, S. AND SIRCAR, S. 1998. The Network Computer Concept. Information Systems Management 15, 1, 48-57.
BARBER, B. 1983. The logic and limits of trust. Rutgers University Press, New Brunswick, NJ.
BENTLER, P.M. AND BONETT, D.G. 1980. Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 3, 588-606.
BENAMATI J., FULLER, M. A., SERVA, M.A. AND BAROUDI, J. 2010. Clarifying the integration of trust and TAM in e-commerce
environments: Implications for systems design and management. IEEE Transactions on Engineering Management, 57, 3, 380-393.
BERSCHEID, E. 1993. Emotion. In Kelley, H. H., Berscheid, E., Christensen, A., Harvey, J. H., Huston, T. L., Levinger, G., McClintock, E.,
Peplau, L. A. and Peterson, D. R. (Eds.), Close relationships, 110-168. New York: W. H. Freeman.
BHATTACHERJEE, A. 2002. Individual trust in online firms: Scale development and initial test. Journal of Management Information Systems,
19, 1, 211-241.
BONOMA, T. V. 1976. Conflict, cooperation, and trust in three power systems. Behavioral Science, 21, 6, 499-514.
BURTON-JONES, A. AND STRAUB, D.W. 2006. Reconceptualizing System Usage: An Approach and Empirical Test. Information Systems
Research, 17, 3, 228-246.
CLARKE, R. 1999. Internet privacy concerns confirm the case for intervention. Communications of the ACM, 42, 2, 60-67.
CHOW, S. AND HOLDEN, R. 1997. Toward an understanding of loyalty: the moderating role of trust. Journal of Managerial Issues, 9, 3, 275-298.
CORBITT, B. J., THANASANKIT, T. AND YI, H. 2003. Trust and e-commerce: A study of consumer perceptions. Electronic Commerce
Research and Applications, 2, 203-215.
DAVIS, F. D. 1989. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13, 3, 319-340.
DONEY, P. M., CANNON, J. P. AND MULLEN, M. R. 1998. Understanding the influence of national culture on the development of trust. The
Academy of Management Review, 23, 3, 601-620.
EARLEY, P. C. 1986. Trust, Perceived Importance of praise and criticism, and work performance: an examination of feedback in the United
States and England. Journal of Management, 12, 457-473.
EDWARDS, J. R. 2001. Multidimensional constructs in organizational behavior research: an integrative analytical framework. Organizational
Research Methods, 4, 2, 144-192.
ELANGOVAN, A.R., AND XIE, J.L. 1999. Effects of perceived power of supervisor on subordinate stress and motivation: the moderating role of
subordinate characteristics. Journal of Organizational Behavior, 20, 3, 359-373.
FORNELL, C., AND LARCKER, D. 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of
Marketing Research, 18, 39-50.
FRIEDMAN, B., KAHN, P. H., JR. AND HOWE, D. C. 2000. Trust online, Communications of the ACM, 43, 12, 34-40.
FUKUYAMA, F. 1995. Trust: the social virtues and the creation of prosperity. The Free Press, New York.
GEFEN, D., STRAUB, D.W. AND BOUDREAU, M. 2000. Structural equation modeling and regression: guidelines for research practice.
Communications of AIS, 7, 7, 1-78.
GEFEN, D., KARAHANNA, E. AND STRAUB, D. W. 2003. Trust and TAM in online shopping: an integrated model. MIS Quarterly, 27, 1, 51-90.
GIFFIN, K. 1967. The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological
Bulletin, 68, 2, 104-120.
GOLEMBIEWSKI, R. T. AND MCCONKIE, M. 1975. The centrality of interpersonal trust in group processes. In Cooper, G. L. (Ed.), Theories
of group processes, John Wiley & Sons, London, 131-185.
HAIR, J.F., JR., ANDERSON, R.E., TATHAM, R.L. AND BLACK, W.C. 1998. Multivariate data analysis with readings, 5th ed. Prentice-Hall, Englewood Cliffs, NJ.
HU, L. AND BENTLER, P. M. 1999. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives.
Structural Equation Modeling 6, 1, 1-55.
JARVENPAA, S. L. AND LEIDNER, D. E. 1998. Communication and trust in global virtual teams. Organization Science, 10, 6, 791-815.
KIM, D. 2008. Self-perception based versus transference-based trust determinants in computer-mediated transactions: a cross-cultural comparison
study. Journal of Management Information Systems, 24, 4, 13-45.
KIM, G., SHIN, B. S. AND LEE, H. G. 2009. Understanding dynamics between initial trust and usage intentions of mobile banking. Information
Systems Journal, 19, 3, 283-311.
KIM, K. K. AND PRABHAKAR, B. 2004. Initial trust and the adoption of B2C e-commerce: The case of internet banking. The Data Base for
Advances in Information Systems, 35, 2, 50-65.
KOMIAK, S. Y. X. AND BENBASAT, I. 2006. The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation
Agents, MIS Quarterly, 30, 4, 941-960.
KOUFARIS, M. AND HAMPTON-SOSA, W. 2004. The development of initial trust in an online company by new customers. Information and
Management, 41, 3, 377-397.
LAW, K.S., WONG, C.-S. AND MOBLEY, W.H. 1998. Toward a taxonomy of multidimensional constructs. Academy of Management Review, 23, 4, 741-755.
LEE, M. K. O. AND TURBAN, E. 2001. A trust model for consumer internet shopping. International Journal of Electronic Commerce, 6, 1,
75-91.
LEWICKI, R. J. AND BUNKER, B. B. 1996. Developing and maintaining trust in work relationships. In Kramer, R. and Tyler, T. (Eds.), Trust in organizations: Frontiers of theory and research, Sage Publications, Thousand Oaks, CA, 114-139.
LI, D., BROWNE, G. J. AND WETHERBE, J. C. 2006. Why do internet users stick with a specific web site? A relationship perspective.
International Journal of Electronic Commerce, 10, 4, 105-141.
LIM, K.H., SIA, C.L., LEE, M.K.O. AND BENBASAT, I. 2006. Do I trust you online, and if so, will I buy? An empirical study of two trust-building strategies. Journal of Management Information Systems, 23, 2, 233-266.
LIPPERT, S. K. 2007. Assessing post-adoption utilisation of information technology within a supply chain management context. International
Journal of Technology and Management, 7, 1, 36-59.
LUHMANN, N. 1979. Trust and power, John Wiley, New York.
MAYER, R. C., DAVIS, J. H. AND SCHOORMAN, F. D. 1995. An integrative model of organizational trust. Academy of Management Review,
20, 709-734.
MCKNIGHT, D.H. 2005. Trust in information technology, in Davis, G.B. (Ed.), The Blackwell Encyclopedia of Management, Management
Information Systems, Malden, MA: Blackwell, Vol. 7, 329-331.
MCKNIGHT, D. H., AND CHERVANY, N. L. 1996. The meanings of trust. University of Minnesota MIS Research Center Working Paper
series, WP 96-04, (http://misrc.umn.edu/wpaper/WorkingPapers/9604.pdf)
MCKNIGHT, D. H. AND CHERVANY, N. L. 2001-2002. What trust means in e-commerce customer relationships: an interdisciplinary
conceptual typology. International Journal of Electronic Commerce, 6, 2, 35-59.
MCKNIGHT, D. H., CHOUDHURY, V. AND KACMAR, C. 2002. Developing and validating trust measures for e-commerce: an integrative
typology. Information Systems Research, 13, 3, 334-359.
MCKNIGHT, D. H., CUMMINGS, L. L. AND CHERVANY, N. L. 1998. Initial trust formation in new organizational relationships. Academy of
Management Review, 23, 3, 473-490.
NAMBISAN, S., AGARWAL, R. AND TANNIRU, M. 1999. Organizational mechanisms for enhancing user innovation in information technology. MIS Quarterly, 23, 3, 365-395.
NOAR, S. 2003. The role of structural equation modeling in scale development. Structural Equation Modeling, 10, 4, 622-647.
ORLIKOWSKI, W. J. AND IACONO, C. S. 2001. Desperately seeking the ‘IT’ in IT research: A call to theorizing the IT artifact. Information
Systems Research, 12, 2, 121-134.
PAUL, D.L., AND MCDANIEL, R.R. 2002. A field study of the effect of interpersonal trust on virtual collaborative relationship performance,
MIS Quarterly, 28, 2, 183-227.
PAVLOU, P. A. 2003. Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model.
International Journal of Electronic Commerce, 7, 3, 101-134.
PAVLOU, P. A. AND GEFEN, D. 2004. Building effective online marketplaces with institution-based trust. Information Systems Research, 15, 1,
37-59.
PENNINGTON, R., WILCOX, H. D. AND GROVER, V. 2003. The role of system trust in business-to-consumer transactions. Journal of Management Information Systems, 20, 3, 197-226.
PODSAKOFF, P.M., MACKENZIE, S.B., LEE, J.Y., AND PODSAKOFF, N.P. 2003. Common method biases in behavioral research: a critical
review of the literature and recommended remedies. Journal of Applied Psychology, 88, 5, 879-903.
POLITES, G.L., ROBERTS, N. AND THATCHER, J.B. forthcoming. Conceptualizing models using multidimensional constructs: A review and guidelines for their use. European Journal of Information Systems.
REMPEL, J. K., HOLMES, J. G. AND ZANNA, M. P. 1985. Trust in close relationships. Journal of Personality and Social Psychology, 49, 1,
95-112.
RIKER, W. H. 1971. The Nature of Trust. In Tedeschi, J. T. (Ed.) Perspectives on Social Power. Aldine Publishing Company, Chicago, 63-81.
ROTTER, J. B. 1971. Generalized expectancies for interpersonal trust. American Psychologist, 26, 5, 443-452.
ROUSSEAU, D. M., SITKIN, S. B., BURT, R. S. AND CAMERER, C. 1998. Not so different after all: a cross-discipline view of trust. Academy
of Management Review, 23, 3, 393-404.
SEN, R., KING, R. C. AND SHAW, M. J. 2006. Buyers' choice of online search strategy and its managerial implications. Journal of Management Information Systems, 23, 1, 211-238.
SERVA, M. A., FULLER, M. A. AND BENAMATI, J. 2005. Trustworthiness in B2C e-commerce: Empirical test of alternative models. Data Base for Advances in Information Systems, 36, 3.
STEWART, K. J. 2003. Trust transfer on the World Wide Web. Organization Science, 14, 1, 5-17.
SUH, B. AND HAN, I. 2003. The impact of customer trust and perception of security control on the acceptance of electronic commerce.
International Journal of Electronic Commerce, 7, 3, 135-161.
SUN, H. 2010. Sellers’ trust and continued use of online marketplaces. Journal of the Association for Information Systems, 11, 4, 182-211.
TABACHNICK, B. G. AND FIDELL, L. S. 1996. Using multivariate statistics (3rd ed.). HarperCollins, New York.
THATCHER, J. B. AND PERREWE, P. L. 2002. An empirical examination of individual traits as antecedents to computer anxiety and computer
self-efficacy. MIS Quarterly, 26, 4, 381-396.
THATCHER, J. B., ZIMMER, J.C., GUNDLACH, M.J. AND MCKNIGHT, D.H. 2008. Internal and external dimensions of computer self-efficacy: An empirical examination. IEEE Transactions on Engineering Management, 55, 4, 628-644.
THATCHER, J. B., MCKNIGHT, D. H., BAKER, E. W., AND ARSAL, R. E. 2011. The role of trust in postadoption exploration: An empirical
investigation of knowledge management systems, IEEE Transactions on Engineering Management, 58, 1, 56-70.
VAN DER HEIJDEN, H., VERHAGEN, T. AND CREEMERS, M. 2003. Understanding online purchase intentions: Contributions from
technology and trust perspectives. European Journal of Information Systems, 12, 41-48.
VAN SLYKE, C., BELANGER, F. AND COMUNALE, C. L. 2004. Factors influencing the adoption of web-based shopping: the impact of trust.
The Data Base for Advances in Information Systems, 35, 2, 32-49.
VANCE, A., ELIE-DIT-COSAQUE, C. AND STRAUB, D. 2008. Examining trust in information technology artifacts: the effects of system quality and culture. Journal of Management Information Systems, 24, 4, 73-100.
WANG, W., AND BENBASAT, I. 2005. Trust in and adoption of online recommendation agents. Journal of the Association for Information
Systems, 6, 3, 72-101.
WILLIAMS, F. 1968. Reasoning with statistics, Holt, Rinehart and Winston, New York.
WILLIAMSON, O. E. 1993. Calculativeness, trust, and economic organization. Journal of Law and Economics, 34, 453-502.
ZINS, A. H. 2001. Relative attitudes and commitment in customer loyalty models. International Journal of Service Industry Management, 12, 3,
269-294.
Appendix A.
Representative research on trust in MIS literature

| Object of Trust | Trust Attributes | Roles in Model / Empirical Relationships | Type of Trust | Studies |
| Institution | Structural assurances, situational normality | Beliefs that outcomes are likely to be successful due to the presence of supportive situations and structures | Initial | McKnight et al., 1998 |
| Institution | Effectiveness of 3rd party certifications, security infrastructure | Trust attributes. No empirical test. | Initial | Lee and Turban, 2001 |
| Institution | Protective legal or technological structures | Institution-based trust affects trust in vendor | Initial | McKnight et al., 2002 |
| Institution | Situational normality, structural assurances | Institution-based trust affects trust (in merchant) and PEOU/PU (1) | Initial | Gefen et al., 2003 |
| Institution | Situational normality, structural assurances | System trust affects trust in vendor | Initial | Pennington et al., 2003 |
| Institution | Feedback mechanisms, escrow services, credit card guarantees | Effectiveness of institutional mechanisms affects trust in the community of sellers | Initial | Pavlou and Gefen, 2004 |
| Institution | Structural assurances | Structural assurances influence trust in mobile banking | Initial | Kim et al., 2009 |
| Technology | Technical competence, reliability, medium understanding | Trust attributes. No empirical test. | Initial | Lee and Turban, 2001 |
| Technology | Information quality, good interface design | Perceived site quality aids formation of trust in vendor | Initial | McKnight et al., 2002 |
| Technology | Site quality, technical trustworthiness | Perceived technical trustworthiness and perceived site quality affect trust in web vendor | Initial | Corbitt et al., 2003 |
| Technology | Competence, benevolence, integrity | Trust in e-commerce environment mediates the relationship between perceptions of security controls and behavioral intentions | Initial | Suh and Han, 2003 |
| Technology | Correctness, availability, reliability, security, survivability | Trust in e-channel influences adoption of e-banking | Initial (based on perceived competence) | Kim and Prabhakar, 2004 |
| Technology | Competence, benevolence, and integrity | Trust in recommendation agent influences intention to adopt and PU | Initial | Wang and Benbasat, 2005 |
| Technology | Cognitive: integrity, competence; Emotional: feelings of security and comfort | Cognitive trust influences emotional trust (toward the behavior), which influences intention to adopt online recommendation agent | Initial | Komiak and Benbasat, 2006 |
| Technology | Competence, benevolence, integrity | Trust in a website, based on experience, influences intention to continue using the website | Relationship | Li et al., 2006 |
| Technology | Predictability, reliability, utility | Trust in technology solution affects perceptions of supply chain technology and long-term interaction between supply chain partners | Knowledge | Lippert, 2007 |
| Technology | Accuracy, reliability, safety | Initial trust in mobile banking influences behavioral intentions | Initial | Kim et al., 2009 |
| Technology | Functionality, dependability, helpfulness | Trust in IT influences PU and PEOU | Knowledge | Thatcher et al., 2011 |
| Online Vendor | Ability, integrity, benevolence | Trust in vendor (or agent of the vendor) affects intention to use the technology | Initial | Lee and Turban, 2001; Bhattacherjee, 2002; McKnight et al., 2002; Gefen et al., 2003; Kim, 2008; Vance et al., 2008 |
| Online Vendor | Benevolence, credibility | Trust in vendor determines attitudes and behavioral intentions | Initial | Jarvenpaa et al., 2000; Corbitt et al., 2003; Pennington et al., 2003 |
| Online Vendor | Benevolence, credibility | Trust in vendor, based on past transactions and reputation, determines risk perceptions, beliefs, and behavioral intentions | Knowledge | Pavlou, 2003 |
| Online Vendor | Expectation of particular action | Trust in online stores and PEOU and PU jointly determine online purchasing intentions | Initial | Van der Heijden et al., 2003 |
| Online Vendor | Commitment, avoiding excessive advantage | Trust in internet bank influences adoption of internet banking | Initial | Kim and Prabhakar, 2004 |
| Online Vendor | Willingness to customize, reputation, size | Trust attributes. No empirical test. | Initial | Koufaris and Hampton-Sosa, 2004 |
| Online Vendor | No separate dimensions | Trust in web merchant, as well as innovation characteristics, determines intention to use web-based shopping | Initial | Van Slyke et al., 2004 |
| Online Vendor | Ability, integrity, benevolence of an identifiable population | Trust in a community of sellers determines transaction intentions | Knowledge | Pavlou and Gefen, 2004 |
| Online Vendor | Cognitive: competence, benevolence, integrity, predictability; Emotional: feelings of security and comfort | Trust in the intermediary and trust in the community of buyers influence sellers' intentions to use an e-marketplace again | Knowledge | Sun, 2010 |
| Online Vendor | Trusting beliefs: ability, integrity, benevolence; Trusting attitude: a willingness to rely on a web vendor | The influence of trusting beliefs in a web vendor on intentions is fully mediated by trusting attitude | Initial | Benamati et al., 2010 |
| IT Support Staff | Ability | Trust in IT support staff influences trust in IT and PEOU | Knowledge | Thatcher et al., 2011 |
| Interfirm | Reliability, integrity | Trust in the supplier increases utilization of new technologies | Knowledge | Lippert, 2007 |

(1) PEOU = Perceived Ease of Use; PU = Perceived Usefulness
Appendix B: Trust in Technology—Measures
Trusting Belief-Specific Technology—Reliability
1. Excel is a very reliable piece of software.
2. Excel does not fail me.
3. Excel is extremely dependable.
4. Excel does not malfunction for me.
Trusting Belief-Specific Technology—Functionality
1. Excel has the functionality I need.
2. Excel has the features required for my tasks.
3. Excel has the ability to do what I want it to do.
Trusting Belief-Specific Technology—Helpfulness
1. Excel supplies my need for help through a help function.
2. Excel provides competent guidance (as needed) through a help function.
3. Excel provides whatever help I need. (2)
4. Excel provides very sensible and effective advice, if needed.
Situational Normality—Technology (Adapted from McKnight et al. 2002):
1. I am totally comfortable working with spreadsheet products.
2. I feel very good about how things go when I use spreadsheet products.
3. I always feel confident that the right things will happen when I use spreadsheet products.
4. It appears that things will be fine when I utilize spreadsheet products.
Structural Assurance—Technology (Adapted from McKnight et al. 2002):
1. I feel okay using spreadsheet products because they are backed by vendor protections.
2. Product guarantees make it feel all right to use spreadsheet software.
3. Favorable-to-consumer legal structures help me feel safe working with spreadsheet products.
4. Having the backing of legal statutes and processes makes me feel secure in using spreadsheet products.
Faith in General Technology (Adapted from McKnight et al. 2002):
1. I believe that most technologies are effective at what they are designed to do.
2. A large majority of technologies are excellent.
3. Most technologies have the features needed for their domain.
4. I think most technologies enable me to do what I need to do.
Trusting Stance—General Technology (Adapted from McKnight et al. 2002):
1. My typical approach is to trust new technologies until they prove to me that I shouldn’t trust them.
2. I usually trust a technology until it gives me a reason not to trust it.
3. I generally give a technology the benefit of the doubt when I first use it.
(2) This item was dropped prior to CFA.
APPENDIX C: PCA—LOADINGS AND CROSS-LOADINGS
Rotation Method: Oblique with Kaiser Normalization.

| Item | Sit. Norm | Struct. Assur. | Trust. Beliefs-reliab. | Faith in Gen. Tech. | Trust. Beliefs-help. | Trust. Beliefs-funct. | Trust Stance |
| situationalnormality2 | 0.97 | -0.02 | 0.01 | -0.01 | -0.02 | -0.02 | 0.02 |
| situationalnormality1 | 0.96 | -0.02 | -0.04 | 0.01 | 0.01 | 0.01 | 0.00 |
| situationalnormality3 | 0.94 | 0.00 | 0.01 | 0.02 | 0.01 | 0.00 | -0.05 |
| situationalnormality4 | 0.91 | 0.05 | 0.01 | -0.01 | -0.02 | 0.01 | 0.03 |
| structuralassurance4 | -0.02 | 0.96 | -0.02 | -0.02 | -0.02 | 0.02 | 0.00 |
| structuralassurance3 | 0.00 | 0.94 | 0.00 | 0.05 | -0.03 | 0.02 | -0.05 |
| structuralassurance2 | -0.01 | 0.92 | -0.01 | 0.02 | 0.03 | -0.03 | 0.01 |
| structuralassurance1 | 0.03 | 0.86 | 0.03 | -0.04 | 0.03 | -0.01 | 0.05 |
| reliability4 | -0.06 | 0.02 | 0.95 | -0.08 | -0.04 | -0.05 | 0.05 |
| reliability3 | -0.01 | -0.02 | 0.92 | 0.05 | -0.04 | -0.01 | -0.03 |
| reliability1 | 0.01 | -0.02 | 0.83 | 0.05 | 0.00 | 0.10 | -0.01 |
| reliability2 | 0.08 | 0.02 | 0.77 | -0.01 | 0.10 | -0.01 | 0.00 |
| faithgeneraltech2 | -0.03 | 0.01 | -0.01 | 0.90 | -0.07 | 0.00 | -0.08 |
| faithgeneraltech3 | 0.02 | 0.01 | -0.01 | 0.87 | 0.03 | 0.00 | -0.04 |
| faithgeneraltech1 | 0.04 | -0.02 | -0.10 | 0.78 | 0.00 | -0.01 | 0.15 |
| faithgeneraltech4 | -0.02 | 0.01 | 0.14 | 0.68 | 0.06 | 0.01 | 0.01 |
| helpfulness4 | -0.02 | -0.03 | -0.01 | 0.00 | 0.94 | -0.01 | 0.01 |
| helpfulness1 | 0.00 | 0.03 | -0.04 | -0.05 | 0.92 | 0.07 | 0.01 |
| helpfulness2 | 0.00 | 0.01 | 0.05 | 0.04 | 0.92 | -0.05 | -0.03 |
| functionality2 | -0.04 | -0.01 | -0.03 | 0.04 | -0.01 | 0.95 | 0.05 |
| functionality1 | 0.03 | 0.01 | -0.02 | -0.01 | 0.03 | 0.91 | -0.02 |
| functionality3 | 0.02 | 0.01 | 0.07 | -0.03 | -0.02 | 0.90 | -0.03 |
| trustingstance2 | -0.07 | 0.05 | 0.03 | 0.02 | -0.04 | 0.00 | 0.89 |
| trustingstance1 | 0.00 | -0.01 | -0.02 | 0.05 | 0.05 | -0.03 | 0.88 |
| trustingstance3 | 0.07 | -0.04 | 0.02 | -0.06 | -0.02 | 0.03 | 0.87 |
| Rotated Eigenvalues (a) | 5.18 | 5.62 | 6.12 | 4.35 | 5.23 | 5.66 | 3.52 |
| % Variance Explained (a) | 36.16 | 12.40 | 8.65 | 7.91 | 6.26 | 5.30 | 4.81 |

Extraction Method: Principal Components Analysis with Promax
Rotation converged in 6 iterations
(a) When components are correlated, sums of squared loadings cannot be added to obtain a total variance
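The pattern matrix above comes from a principal components extraction with an oblique promax rotation. For readers who wish to run the same style of analysis on their own item-level data, a minimal Python sketch using the factor_analyzer package follows; the file name trust_items.csv and the column layout are hypothetical, and this is an illustration of the technique rather than the script used in this study.

```python
# Sketch: principal components extraction with promax (oblique) rotation,
# mirroring the kind of output summarized in the table above.
# Assumes a hypothetical CSV with one column per questionnaire item and
# one row per respondent; not the study's actual data or script.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("trust_items.csv")  # 25 item columns, e.g. "reliability1"

fa = FactorAnalyzer(n_factors=7, method="principal", rotation="promax")
fa.fit(items)

# Pattern loadings (items x components), analogous to the matrix above
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))

# Sum-of-squared loadings and proportion of variance per rotated component
ssl, prop_var, cum_var = fa.get_factor_variance()
print(prop_var.round(3))
```

Because promax is an oblique rotation, the components are allowed to correlate, which is why the table's note warns that sums of squared loadings cannot be added to obtain a total variance.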
APPENDIX D: PCA—LOADINGS AND CROSS-LOADINGS with PU and CSE
Rotation Method: Oblique with Kaiser Normalization.

| Item | PU | Sit. Norm | Struct. Assur. | Trust. Beliefs-reliab. | Faith in Gen. Tech. | CSE-internal | Trust. Beliefs-help. | Trust Stance | CSE-external | Trust. Beliefs-funct. |
| usefulness2 | 0.95 | -0.05 | 0.00 | 0.03 | -0.05 | 0.00 | -0.01 | 0.01 | 0.02 | -0.02 |
| usefulness3 | 0.91 | 0.00 | 0.02 | -0.03 | -0.04 | 0.05 | -0.01 | -0.01 | -0.03 | 0.03 |
| usefulness1 | 0.89 | -0.03 | -0.02 | 0.02 | 0.03 | -0.02 | 0.01 | 0.01 | -0.01 | -0.04 |
| usefulness4 | 0.88 | 0.05 | 0.02 | 0.01 | 0.02 | -0.03 | -0.01 | 0.01 | 0.00 | -0.05 |
| situationalnormality2 | 0.01 | 0.98 | -0.02 | 0.01 | -0.01 | 0.00 | -0.02 | 0.02 | 0.00 | -0.03 |
| situationalnormality1 | 0.00 | 0.96 | -0.02 | -0.04 | 0.01 | 0.01 | 0.02 | 0.00 | 0.01 | 0.01 |
| situationalnormality3 | -0.03 | 0.93 | 0.00 | 0.01 | 0.02 | 0.02 | 0.01 | -0.04 | 0.01 | 0.02 |
| situationalnormality4 | 0.03 | 0.91 | 0.05 | 0.02 | -0.02 | -0.03 | -0.02 | 0.03 | -0.01 | -0.01 |
| structuralassurance4 | 0.06 | -0.01 | 0.96 | -0.03 | -0.03 | -0.02 | -0.03 | 0.00 | 0.01 | -0.01 |
| structuralassurance3 | 0.02 | 0.00 | 0.94 | -0.01 | 0.04 | 0.00 | -0.03 | -0.05 | 0.01 | 0.02 |
| structuralassurance2 | -0.05 | -0.01 | 0.92 | 0.00 | 0.02 | -0.02 | 0.04 | 0.01 | 0.00 | -0.01 |
| structuralassurance1 | -0.03 | 0.03 | 0.86 | 0.03 | -0.04 | 0.01 | 0.04 | 0.05 | -0.03 | 0.01 |
| reliability4 | -0.08 | -0.05 | 0.02 | 0.97 | -0.07 | -0.06 | -0.03 | 0.04 | -0.04 | -0.03 |
| reliability3 | 0.07 | 0.00 | -0.02 | 0.89 | 0.04 | 0.04 | -0.05 | -0.02 | 0.01 | -0.02 |
| reliability1 | 0.07 | 0.01 | -0.02 | 0.81 | 0.04 | 0.03 | -0.01 | -0.01 | -0.01 | 0.08 |
| reliability2 | -0.04 | 0.09 | 0.01 | 0.77 | -0.01 | -0.01 | 0.11 | 0.00 | 0.03 | 0.02 |
| faithgeneraltech2 | -0.05 | -0.05 | 0.00 | 0.00 | 0.90 | 0.03 | -0.05 | -0.07 | -0.05 | 0.03 |
| faithgeneraltech3 | 0.02 | 0.02 | 0.01 | -0.01 | 0.87 | -0.02 | 0.02 | -0.04 | 0.01 | 0.00 |
| faithgeneraltech1 | -0.01 | 0.04 | -0.02 | -0.11 | 0.78 | -0.04 | 0.00 | 0.15 | 0.06 | 0.02 |
| faithgeneraltech4 | 0.06 | -0.01 | 0.02 | 0.14 | 0.68 | -0.03 | 0.05 | 0.01 | -0.09 | -0.05 |
| CSE2 | 0.04 | -0.01 | 0.01 | -0.05 | -0.11 | 0.96 | 0.02 | 0.05 | -0.13 | 0.03 |
| CSE1 | -0.01 | 0.05 | -0.03 | -0.05 | 0.05 | 0.93 | -0.01 | -0.02 | -0.08 | 0.02 |
| CSE3 | -0.03 | -0.06 | -0.01 | 0.11 | 0.03 | 0.77 | 0.03 | 0.00 | 0.18 | -0.03 |
| helpfulness4 | 0.01 | -0.01 | -0.03 | -0.02 | 0.00 | -0.01 | 0.94 | 0.01 | 0.03 | -0.01 |
| helpfulness2 | -0.03 | 0.00 | 0.03 | -0.03 | -0.04 | 0.03 | 0.93 | 0.01 | -0.06 | 0.07 |
| helpfulness1 | 0.05 | 0.01 | 0.02 | 0.03 | 0.04 | 0.01 | 0.90 | -0.03 | 0.04 | -0.07 |
| trustingstance2 | 0.01 | -0.06 | 0.05 | 0.03 | 0.02 | -0.04 | -0.04 | 0.89 | 0.05 | 0.00 |
| trustingstance1 | 0.02 | 0.00 | -0.01 | -0.03 | 0.05 | 0.03 | 0.05 | 0.88 | 0.00 | -0.03 |
| trustingstance3 | -0.02 | 0.07 | -0.04 | 0.03 | -0.05 | 0.03 | -0.01 | 0.87 | -0.04 | 0.03 |
| CSE6 | 0.03 | -0.02 | -0.04 | 0.02 | -0.07 | -0.11 | 0.02 | 0.02 | 0.96 | 0.01 |
| CSE5 | -0.01 | 0.02 | -0.01 | -0.05 | 0.00 | -0.06 | 0.02 | 0.00 | 0.96 | 0.05 |
| CSE4 | -0.02 | 0.02 | 0.08 | 0.01 | 0.07 | 0.37 | -0.06 | -0.03 | 0.60 | -0.08 |
| functionality2 | 0.02 | -0.04 | -0.01 | -0.03 | 0.05 | 0.00 | -0.01 | 0.05 | -0.01 | 0.93 |
| functionality1 | 0.00 | 0.02 | 0.01 | -0.02 | 0.00 | 0.01 | 0.04 | -0.01 | 0.02 | 0.91 |
| functionality3 | 0.04 | 0.02 | 0.01 | 0.08 | -0.03 | 0.01 | -0.03 | -0.03 | 0.02 | 0.87 |
| Rotated Eigenvalues | 7.23 | 5.49 | 6.00 | 7.09 | 4.78 | 2.91 | 6.10 | 3.74 | 2.65 | 6.90 |
| % Variance Explained (a) | 30.11 | 9.81 | 8.39 | 7.21 | 5.97 | 4.39 | 4.24 | 4.06 | 3.65 | 2.95 |

Extraction Method: Principal Components Analysis with Promax
Rotation converged in 6 iterations
(a) When components are correlated, sums of squared loadings cannot be added to obtain a total variance
APPENDIX E: Latent correlation matrices for 1st and 2nd order confirmatory factor analysis
Latent Correlation Matrix: 1st Order CFA

| Construct | C.A. | AVE | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 1. Trusting Stance | 0.86 | 0.68 | 0.83 | | | | | | | | | |
| 2. Faith General Tech. | 0.83 | 0.56 | 0.43 | 0.75 | | | | | | | | |
| 3. Situational Normality | 0.94 | 0.84 | 0.15 | 0.14 | 0.92 | | | | | | | |
| 4. Structural Assurance | 0.94 | 0.81 | 0.29 | 0.42 | 0.30 | 0.90 | | | | | | |
| 5. Functionality | 0.91 | 0.78 | 0.29 | 0.37 | 0.44 | 0.42 | 0.88 | | | | | |
| 6. Reliability | 0.90 | 0.70 | 0.33 | 0.41 | 0.42 | 0.44 | 0.63 | 0.84 | | | | |
| 7. Helpfulness | 0.93 | 0.77 | 0.13 | 0.31 | 0.40 | 0.43 | 0.50 | 0.51 | 0.88 | | | |
| 8. Perceived Usefulness | 0.92 | 0.76 | 0.27 | 0.36 | 0.24 | 0.33 | 0.62 | 0.56 | 0.50 | 0.87 | | |
| 9. Internal CSE | 0.86 | 0.68 | -0.04 | -0.01 | 0.13 | 0.07 | 0.00 | 0.08 | 0.05 | -0.06 | 0.83 | |
| 10. External CSE | 0.84 | 0.67 | -0.07 | -0.05 | -0.02 | -0.03 | -0.11 | -0.02 | -0.04 | -0.06 | 0.31 | 0.82 |
C.A. = Cronbach's alpha; square root of AVEs given on the diagonal; all correlations significant at p < 0.05, unless indicated by grey shading
Means and Std. Deviations for Constructs

| Construct | Mean | Std. Dev. |
| 1. Trusting Stance | 4.91 | 0.19 |
| 2. Faith General Tech. | 5.26 | 0.17 |
| 3. Situational Normality | 3.98 | 0.12 |
| 4. Structural Assurance | 4.37 | 0.07 |
| 5. Functionality | 5.10 | 0.10 |
| 6. Reliability | 5.03 | 0.24 |
| 7. Helpfulness | 4.43 | 0.04 |
| 8. Perceived Usefulness | 5.51 | 0.20 |
| 9. Internal CSE | 3.95 | 0.39 |
| 10. External CSE | 5.50 | 0.24 |
Latent Correlation Matrix: 2nd Order CFA

| Construct | C.A. | AVE | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 1. Trusting Stance | 0.86 | 0.68 | 0.83 | | | | | | | |
| 2. Faith General Tech. | 0.83 | 0.56 | 0.43 | 0.75 | | | | | | |
| 3. Situational Normality | 0.94 | 0.84 | 0.15 | 0.14 | 0.92 | | | | | |
| 4. Structural Assurance | 0.94 | 0.81 | 0.29 | 0.42 | 0.30 | 0.90 | | | | |
| 5. Trust in a Specific Technology | 0.89 | 0.58 | 0.35 | 0.49 | 0.53 | 0.57 | 0.77 | | | |
| 6. Perceived Usefulness | 0.92 | 0.76 | 0.27 | 0.36 | 0.24 | 0.33 | 0.75 | 0.87 | | |
| 7. Internal CSE | 0.86 | 0.68 | -0.04 | -0.01 | 0.13 | 0.07 | 0.05 | -0.06 | 0.83 | |
| 8. External CSE | 0.84 | 0.67 | -0.07 | -0.05 | -0.02 | -0.03 | -0.08 | -0.06 | 0.31 | 0.82 |
C.A. = Cronbach's alpha; square root of AVEs given on the diagonal; all correlations significant at p < 0.05, unless indicated by grey shading
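The C.A. and AVE columns above follow standard formulas: Cronbach's alpha is computed from the item-score variances, and AVE is the mean of a construct's squared standardized loadings. Discriminant validity is then assessed by the Fornell and Larcker [1981] criterion: each construct's square root of AVE (the diagonal) should exceed its correlations with the other constructs. The following is a minimal Python sketch of both calculations on simulated, hypothetical inputs, not the study's data.

```python
# Sketch: Cronbach's alpha and average variance extracted (AVE),
# the statistics reported in the C.A. and AVE columns above.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha for a respondents-by-items score matrix of one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def ave(std_loadings: np.ndarray) -> float:
    """AVE: mean of squared standardized loadings for one construct."""
    return float(np.mean(std_loadings ** 2))

# Simulated three-item construct (hypothetical, for illustration only)
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
scores = np.column_stack([latent + rng.normal(scale=0.6, size=500) for _ in range(3)])
print(round(cronbach_alpha(scores), 2))

# Hypothetical standardized loadings (not the study's estimates)
print(round(ave(np.array([0.90, 0.85, 0.80])), 2))  # 0.72; sqrt(0.72) ~ 0.85 would sit on the diagonal
```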
APPENDIX F: NON-TRUST Measures
Usefulness (Adapted from Davis, 1989):
1. Using Excel would improve my ability to present data graphically.
2. Using Excel for my data analysis would help me evaluate information.
3. Using Excel would make it easier to perform calculations on my data.
4. I would find Excel useful for analyzing data.
Internal Computer Self-Efficacy (Adapted from Thatcher et al., 2008):
1. There was no one around to tell me what to do.
2. I had never used a package like it before.
3. I had just the built-in help facility for reference.
External Computer Self-Efficacy (Adapted from Thatcher et al., 2008):
1. I could call someone to help if I got stuck.
2. Someone showed me how to do it first.
3. Someone else helped me get started.
Intention to Explore—Specific Technology (Excel) (Adapted from Nambisan et al., 1999)
1. I intend to experiment with new Excel features for potential ways of analyzing data.
2. I intend to investigate new Excel functions for enhancing my ability to perform calculations on data.
3. I intend to spend considerable time in exploring new Excel features to help me perform calculations on data.
4. I intend to invest substantial effort in exploring new Excel functions.
Deep Structure Usage (Adapted from Burton-Jones and Straub, 2006)
When I use Excel, I use features that help me …
1. … analyze the data.
2. … derive insightful conclusions from the data.
3. … perform calculations on my data.
4. … compare and contrast aspects of the data.
5. … test different assumptions in the data.
APPENDIX G: latent correlation matrix and fit indices
Nomological Validity: Latent Correlation Matrix for 2nd Order CFA

| Construct | C.A. | AVE | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| 1. Trusting Stance | 0.86 | 0.68 | 0.83 | | | | | | |
| 2. Faith General Tech. | 0.83 | 0.56 | 0.43 | 0.75 | | | | | |
| 3. Situational Normality | 0.94 | 0.84 | 0.15 | 0.14 | 0.92 | | | | |
| 4. Structural Assurance | 0.94 | 0.81 | 0.29 | 0.42 | 0.30 | 0.90 | | | |
| 5. Trust in Technology | 0.89 | 0.78 | 0.29 | 0.37 | 0.44 | 0.42 | 0.77 | | |
| 6. Deep Structure Use | 0.93 | 0.72 | 0.14 | 0.24 | 0.44 | 0.33 | 0.65 | 0.85 | |
| 7. Intention to Explore | 0.94 | 0.80 | 0.08 | 0.20 | 0.25 | 0.22 | 0.44 | 0.35 | 0.89 |
C.A. = Cronbach's alpha; square root of AVEs given on the diagonal; all correlations significant at p < 0.05, unless indicated by grey shading
Nomological Validity: Summary of Fit Indices

| Model | Chi-square | df | Chi-square/df | CFI | NNFI | RMSEA | RMSEA 90% CI |
| 1st Order CFA | 731.468 | 398 | 1.84 | 0.963 | 0.956 | 0.048 | 0.043, 0.054 |
| 2nd Order CFA | 770.332 | 410 | 1.89 | 0.960 | 0.954 | 0.050 | 0.044, 0.055 |
| Structural Model | 945.866 | 445 | 2.13 | 0.948 | 0.942 | 0.056 | 0.051, 0.061 |
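For readers interpreting the fit columns, these indices follow the standard definitions [Bentler and Bonett 1980; Hu and Bentler 1999]. Because the baseline-model chi-square and the sample size are not reported in this appendix, only the chi-square/df column can be recomputed directly from the table (e.g., 731.468 / 398 ≈ 1.84 for the 1st Order CFA). As a reference, the usual formulas are sketched below, where m denotes the hypothesized model, 0 the baseline (null) model, and N the sample size:

\[
\mathrm{CFI} = 1 - \frac{\max(\chi^2_m - df_m,\, 0)}{\max(\chi^2_m - df_m,\ \chi^2_0 - df_0,\, 0)}, \qquad
\mathrm{NNFI} = \frac{\chi^2_0/df_0 \;-\; \chi^2_m/df_m}{\chi^2_0/df_0 \;-\; 1}, \qquad
\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2_m - df_m}{df_m\,(N-1)},\, 0\right)}
\]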