What Does it Mean to Trust Facebook?
Examining Technology and Interpersonal Trust Beliefs
Nancy K. Lankton
Department of Accounting and Information Systems
Michigan State University
lankton@bus.msu.edu
D. Harrison McKnight
Department of Accounting and Information Systems
Michigan State University
mcknight@bus.msu.edu
ABSTRACT
Researchers have recently studied technology trust in terms of the technological artifact
itself. Two different kinds of trusting beliefs could apply to a website artifact. First, the trusting
beliefs may relate to the interpersonal characteristics—competence, integrity, and benevolence.
Second, they may relate to corresponding technology characteristics—functionality, reliability,
and helpfulness. Since social networking websites like Facebook may demonstrate either
interpersonal or technology trust characteristics, researchers may need to carefully choose the
beliefs to model. Thus it is important to understand not only the conceptual meaning of these
beliefs, but also whether human and technology trust beliefs are distinct. Using data collected
from 362 university-student Facebook users, we test two second-order factor structures that
represent alternative ways to model the three interpersonal and three technology trust beliefs. We
find that the best-fitting measurement model depicts the three conceptually-related pairs of trust
beliefs (competence-functionality, integrity-reliability, and benevolence-helpfulness) as three
distinct second-order factors. This model outperformed the model splitting trusting beliefs into
separate interpersonal and technology second-order factors. The results show people distinguish
among three types of conceptually-related trust attributes, and that they trust Facebook as both a
technology and a quasi-person. These second-order trust factors can be used in future research to
better understand social networking trust and usage continuance intentions.
Keywords: Interpersonal trust, technology trust, social networking, websites, measurement, second-order factors
Acknowledgements: An earlier version of this paper was presented at AMCIS 2008. We thank
the reviewers and editors of DATABASE for their helpful comments on this paper. We also
appreciate Fred Rodammer for his help in collecting data for this study.
INTRODUCTION
Facebook has grown rapidly as hundreds of millions of users have adopted it to
communicate and socialize (Bausch & Han, 2006). Trust may play a role in this meteoric rise.
For example, some researchers suggest that social networking users have a generalized trust
toward the group of people that visit their sites and read their postings (Kennedy & Sakaguchi,
2009). Other researchers find that people also trust social networking websites (Dwyer, Hiltz &
Passerini, 2007; Fogel & Nehmad, 2009; Sledgianowski & Kulviwat, 2009). However, what
exactly does it mean to say one trusts Facebook? Does one trust Facebook as a technology (i.e., a
website artifact) or as a quasi-person or organization?
Trust is often a factor in the use or acceptance of consumer product websites (Gefen,
Karahanna & Straub, 2003). When studying trust in product websites, researchers often examine
interpersonal trust, i.e., site user trust in the e-vendor (e.g., Bhattacherjee, 2002; Gefen et al.,
2003; Kim, 2008). In this context, interpersonal trust means one is willing to depend on the e-vendor because one believes the e-vendor has such favorable attributes as ability (competence),
integrity, and benevolence (Mayer, Davis & Schoorman, 1995). Research has used these
interpersonal trust beliefs to represent how users perceive the attributes of the e-vendor. This is
pure interpersonal trust, because it involves two humans, a user and an e-vendor.
Recently, some empirical information systems research has explored trust in software
recommendation agents (e.g., Komiak & Benbasat, 2006; Wang & Benbasat, 2005). These
agents are technological artifacts, not humans. This type of trust, called trust in technology,
differs from interpersonal trust because it represents a human-to-technology trust relationship
rather than a human-to-human trust relationship. Trust in technology means one is willing to
depend on the other because one believes the technology has desirable attributes (McKnight,
2005). Although some have said trust can only exist between humans (Friedman, Kahn & Howe,
2000), many researchers now acknowledge that humans can and do trust technology, despite
several differences between human-to-human and human-to-technology exchanges (see Lee &
See, 2004 and Wang & Benbasat, 2005 for discussions of trust in technology). To date, trust in
technology is an under-explored information systems research domain.
Due to the limited amount of technology trust research, it is difficult to answer the
question, “What are the appropriate attributes of trust in technology?” Research on trust in
software agents has employed interpersonal trust beliefs (i.e., competence, integrity, and
benevolence) to represent trust in technology because software agents have some human-like
characteristics, such as giving advice and interacting with the user on-screen (Wang & Benbasat,
2005). However, some technical artifacts possess fewer interpersonal characteristics than do
software agents. For example, many websites neither give advice nor interact with users.
Therefore, while interpersonal trust applies to software agents, it may only partially apply to
websites. For example, people interface with other people on Facebook, but they neither obtain
advice directly from Facebook itself nor interact with Facebook as a person or quasi-person.
While trust in social networking websites research has generally examined interpersonal trust
attributes (Dwyer et al., 2007; Fogel & Nehmad, 2009; Sledgianowski & Kulviwat, 2009),
people may trust Facebook in other ways. For example, McKnight (2005) explains that people
may trust a technology because it provides specific functionality, operates reliably, and is helpful
to its users. Thus people may be willing to depend on Facebook (or any technology) because it
has these technology-related attributes that make it trustworthy (McKnight, 2005).
We propose three technology-related trust beliefs that parallel the three most commonly
used interpersonal trust beliefs. We suggest that the technology trust belief functionality is
analogous to the interpersonal trust belief competence, in that they both refer to users’ beliefs
about what the other can do for them. Similarly, we introduce reliability as a technology trust
belief similar to the interpersonal trust belief integrity because they both refer to users’ beliefs
that the other will do what we expect they will do. We suggest helpfulness as a technology trust
belief that parallels the interpersonal trust belief benevolence in that they both relate to beliefs
that the other provides responsive aid.
This paper tests empirically whether interpersonal trust beliefs are separate and distinct
from technology trust beliefs. The paper also examines how well the pairs of conceptual
attributes above (e.g., benevolence and helpfulness) hold together as distinct attribute pairs. We
gather data from students who use Facebook, a social networking website. Many social
networking websites have grown in popularity among university students. These sites allow their
users to create profiles and personal networks. It is possible, even though people do not interact
with Facebook as a “person,” that they may still attribute human characteristics to it, as in prior
research (Dwyer et al., 2007; Fogel & Nehmad, 2009; Sledgianowski & Kulviwat, 2009), and as
Reeves and Nass (1996) have found with various technologies. However, we believe social
networking websites represent technologies about which users may perceive both human-like
and technology-like trust characteristics, and thus form both interpersonal and technology
trusting beliefs. Thus our research contributes by examining both types of trusting beliefs as they
relate to Facebook.
In examining interpersonal and technology trusting beliefs, we develop hypotheses
relating to their factor structure, and test two alternative second-order factor structures by
comparing measurement model results. We also contribute by assessing the second-order factors’
nomological validity or whether the factors behave as they should within a well-established
theoretical framework (McKnight, Choudhury & Kacmar, 2002a; Straub, Boudreau & Gefen,
2004). Using trust theory, we analyze the trust second-order factors first as consequents of
reputation, privacy concern, and ease of use, and then as predictors of trusting intention and
continuance intention. These relationships have been established with interpersonal trusting
beliefs in other contexts (e.g., Lowry et al., 2008; Gefen et al., 2003).
However, to our knowledge prior trust-social networking research has not yet examined such
relationships, nor have these relationships been used to validate alternative interpersonal and
technology trusting belief factor structures.
THEORY AND HYPOTHESES DEVELOPMENT
Technology Trust Beliefs
Researchers in various fields have investigated technology trust. For example, human
computer interface researchers have examined trust in automation by testing the extent to which
human operators will trust automated control of systems such as semi-automatic pasteurization
plants with optional manual control (e.g., Muir & Moray, 1996; see Lee & See, 2004 for a
review). In the social sciences, researchers have examined trust in the technological artifact of
online environments (Komiak & Benbasat, 2006; Lee & Turban, 2001; Wang & Benbasat,
2005), and in various business information systems (Lippert, 2001, 2007; Lippert & Swiercz,
2005).
While trust in technology research is just beginning, scholars across these contexts appear
to consistently find that trust in technology exists and is composed of multiple beliefs. Some trust
beliefs relate to the human-like characteristics of technology. For example, Wang & Benbasat
(2005) apply the three most common interpersonal trust beliefs—competence, integrity, and
benevolence—to their study of Internet recommendation agents. However, other researchers use
trust beliefs that relate more to the technology-like characteristics of technology including its
functionality and reliability (Lippert, 2001; Muir & Moray, 1996). Choosing which trust beliefs
to use may depend on the extent to which the technology possesses human-like characteristics.
For example, the software agents Wang & Benbasat (2005) studied have more human-like
characteristics than Muir and Moray’s (1996) automated systems. Recommendation agents
“interact” with users and provide them advice on particular products. By contrast, software
systems like Microsoft Access provide little advice and interact little with the user, especially in
the conversational manner that people use. Thus, technology trust beliefs may be more
appropriate for Access than are interpersonal trust beliefs.
Social networking websites represent a technology in which the distinction between
human and technology characteristics is less clear. These technologies may demonstrate some
human-like trusting characteristics such that users may develop competence and integrity beliefs.
For example, Dwyer et al. (2007) find that users generally agree with the statement “I trust that
Facebook will not use my personal information for any other purpose.” This statement reflects
Facebook’s integrity in terms of safeguarding private information, an attribute we associate with
people. Other researchers find that statements like “I feel that this website is honest” reflect trust
in social networking websites (Sledgianowski & Kulviwat, 2009). Again, honesty is an attribute
we usually apply to people. Social networking sites may also demonstrate technology-like
trusting characteristics that elicit beliefs such as “Facebook is very reliable and consistent to
use.” Therefore, researchers may apply both interpersonal and technology trust beliefs to
understand users’ trust in Facebook. Thus it is important to determine the extent to which
Facebook users relate better to technology trust or interpersonal trust. To our knowledge, no
research to date has done this.
In this research, we propose three trust-in-technology beliefs that are related, yet distinct
from the three most commonly used interpersonal trust beliefs, which are competence, integrity,
and benevolence (Gefen et al., 2003). We test whether the six beliefs are distinct from each other
and investigate alternative factor structures for them. The following paragraphs explain the three
proposed technology trusting beliefs (see Table 1).
[Insert Table 1 Here]
Functionality Belief
Functionality means the degree to which an individual believes the technology will have
the functions or features needed to accomplish one’s task(s) (McKnight, 2005) (see Table 1).
Functionality originates conceptually from the interpersonal trust competence belief that
represents an individual’s belief that a trustee has the ability, skills, and expertise to perform
effectively (Mayer et al., 1995). While individuals demonstrate competence by performing a task
well or by giving good advice, technology demonstrates ‘competence’ by performing a function
well or by providing system features the user needs in order to perform a task. Thus trust in the
competence of technology generally refers to the technology’s ‘functional’ capability to perform
a task (McKnight, 2005). Similar trusting beliefs have been used in technology contexts
including trust in automation (Muir & Moray, 1996).
Reliability Belief
Reliability is defined as the degree to which an individual believes the technology will
continually operate properly, or will operate in a consistent, flawless manner (McKnight, 2005)
(see Table 1). This technology trust belief has its conceptual foundation in the integrity belief of
interpersonal trust that represents the trustor’s perceptions that the trustee adheres to a set of
principles that the trustor finds acceptable (Mayer et al., 1995, p. 719). A person may
demonstrate reliability by keeping commitments and telling the truth. Technologies cannot
demonstrate honesty or a moral conscience by keeping promises or commitments. However,
every technology comes with the implicit promise that it will work reliably and consistently.
Therefore, a technology demonstrates integrity by being reliable or by consistently doing what it
implicitly promises to do every time the technology is used. Showing its human roots, reliability
has been used by interpersonal trust researchers as an interpersonal trust belief (Rempel, Holmes
& Zanna, 1985). Reliability has also been used before in technology trust studies (Lippert, 2001;
Muir & Moray, 1996) (see Table 1).
Helpfulness Belief
Helpfulness is defined as the degree to which an individual believes the technology will
provide adequate and responsive help, usually through a help function (see Table 1). Helpfulness
is based on the benevolence belief from interpersonal trust and trust in online environments
(Mayer et al., 1995; Gefen et al., 2003). The benevolent trustee cares and acts in the trustor’s
interest (Wang & Benbasat, 2005; Mayer et al., 1995). We assume technology is not helpful in
terms of volition or moral agency (i.e., it cannot consciously care about its user). In fact, that
would constitute unwarranted personification of the technology (McKnight, 2005). Instead, we
presume that technology demonstrates its helpfulness through help functions that aid goal
attainment. Individuals who perceive that a technology can provide the help needed will perceive
fewer risks and uncertainties associated with technology use.
In summary, we propose three technology trust beliefs—functionality, reliability, and helpfulness—that are based on three interpersonal trust beliefs—competence, integrity, and
benevolence, respectively. Our hypotheses are based on two alternative ways of modeling these
beliefs using second-order factors. A second-order or multi-dimensional factor is a theoretically
meaningful, overall abstraction of interrelated dimensions or first-order factors (Law, Wong &
Mobley, 1998). A common latent or reflective second-order factor exists if the dimensions are
manifestations of the second-order factor or the construct leads to the dimensions (Law et al.,
1998; Law & Wong, 1999; MacKenzie, Podsakoff, Shen & Podsakoff, 2006). An aggregate or
formative second-order factor exists if the dimensions form the second-order construct (Law et
al., 1998; Podsakoff et al., 2006). In our two alternative second-order factor models, we model
the first-order dimensions as reflective (not formative) of the second-order factors because we
expect the trust dimensions to co-vary with and even influence each other (MacKenzie et al., 2005; McKnight et al., 2002a; Petter, Straub & Rai, 2007). Also, we believe the first-order dimensions jointly reflect the overall trust concept and may be influenced by it (Diamantopoulos, Riefler & Roth, 2008; Petter et al., 2007). Because we also model the first-order factors as
reflective constructs (i.e., the measurement items for each first-order trust belief reflect that trust
belief), we are examining reflective first-order and reflective second-order factors. This is
consistent with other trust research that portrays trust dimensions as reflective first-order factors
that reflect a second-order trust concept (McKnight et al., 2002a; Wang & Benbasat, 2005).
Second-order factors are advantageous because they can explain the co-variation among
the first-order factors in a more parsimonious manner (Law et al., 1998; Segars & Grover, 1998).
However, researchers debate whether the general theories related to second-order factors have
more utility than the more specific theories related to first-order factors, and whether second-order factors have appropriate reliability and validity (see Edwards, 2001 for a detailed
discussion of this debate). We propose that trust is a second-order factor based on trust theory
(e.g., McKnight et al., 2002a). Further, we empirically analyze the appropriateness of alternative
second-order factors based on prior research recommendations (Tanriverdi, 2006).
Our first hypothesis predicts that the three technology trust beliefs will reflect a second-order technology trust factor and the three interpersonal trust beliefs will reflect a separate
second-order interpersonal trust factor (see Figure 1a). These second-order factors represent
overall interpersonal and technology trust concepts. This hypothesis assumes that users probably
view the technology trust beliefs as relating to the technology itself. By contrast, users probably
view the interpersonal beliefs as relating to some kind of person-like characteristics of the
technology (Wang & Benbasat, 2005). If this is true, it suggests that respondents are able to
distinguish between interpersonal trust in Facebook and technology trust in Facebook. We
believe Facebook respondents should be able to distinguish between these categories because they are
able to perceive Facebook as either a technology (i.e., a website) or a quasi-person.
H1: The three technology trust beliefs will reflect a second-order factor that is
separate from the second-order factor reflected by the three interpersonal trust beliefs.
[Insert Figure 1 Here]
Our second hypothesis predicts that the pairs of conceptually related trust beliefs will
reflect three second-order factors. We argued above that functionality is the technology analog of
the interpersonal competence trust belief. That is, functionality is a specific perception that, for
technology trust, is similar in nature to competence for interpersonal trust. If functionality is a
technology trust instantiation of the interpersonal competence belief, then these beliefs will be
significantly and highly correlated. The same is true of the integrity-reliability pair and the
benevolence-helpfulness pair because similar logic was used. Because we believe these three
pairs are highly intracorrelated, we predict that another viable way to model these beliefs is to
have the competence-functionality pair reflect one second-order trust factor, the integrity-reliability pair reflect another second-order trust factor, and the benevolence-helpfulness pair
reflect a third second-order factor (see Figure 1b).
Literature evidence for this conceptual pairing comes from articles that try to synthesize
the most important trusting beliefs from among the many trusting beliefs used. Mayer et al.
(1995) distilled from the literature three beliefs—ability (similar to the competence-functionality
pair), benevolence (like the benevolence-helpfulness pair), and integrity (like the integrity-reliability pair). Their review showed that most of the oft-used types of trusting beliefs
related to these three concepts. Similarly, McKnight et al.’s (2002a) Table 1 showed that most
trusting beliefs could be clustered by meaning similarity into competence, integrity, and
benevolence belief categories.
H2: Competence-functionality, integrity-reliability, and benevolence-helpfulness
will reflect three second-order factors that are distinct from each other (see Figure 1b).
H1 and H2 may both be supported. Both ways of modeling these trust constructs may
demonstrate good fit. But it is likely that one of them has better fit than the other. That is, the six
trusting beliefs may be better modeled either with a technology-interpersonal split (as in Model
1) or with a conceptual split (as in Model 2). In effect, H1 and H2 offer alternative plausible
ways of modeling how people perceive the six interpersonal and technology trusting beliefs.
People tend to link like things and to separate unlike things (Levi-Strauss, 1968). H1 suggests
trusting beliefs be modeled to separate interpersonal and technology categories regardless of
conceptual type. H2 suggests they be modeled to distinguish among the three conceptual types
regardless of an interpersonal-technological distinction. We argued above for both ways of
modeling these constructs. We now argue that Model 2 will be the better model.
The more one gets to know another party, the better one is able to distinguish among their
several characteristics (Lewicki et al., 1998). For example, one may trust one's mother to fix
dinner competently, but may not trust her to play a piano piece perfectly. On the other hand, we
may not be able to distinguish such specific attributes in a stranger. Instead, we categorize the
stranger into such broad categories as “competent” or “incompetent.” We come to know another
by intensive interactive experience with them. The same is true with a technology; one comes to
know it well through intense experience with it. We have found that the typical college-aged
Facebook user has become very experienced with Facebook. In this study, subjects used
Facebook every day on average, and they had used it for an average of 1.6 years. For this reason,
they should be able to distinguish well between the competence, integrity, and benevolence
attributes of Facebook. Early e-commerce studies have not always found this to be true (e.g.,
Bhattacherjee, 2002; Gefen et al., 2003). But Facebook users tend to have more experience with
the website trustee than the respondents to early studies had with their e-commerce trustee.
Further, we have defined the technology trust attributes to form matching pairs with the
interpersonal trust attributes. That is, the functionality technology belief is very similar to the
competence belief used in interpersonal trust. Reliability is a specific type of integrity, per
McKnight et al. (2002a), who listed reliability as a component of their integrity cluster.
Helpfulness is tied to benevolence, since it is a techno-form of the interpersonal benevolence
concept. Because Model 2 reflects these close conceptual ties and because most Facebook users
are experienced with it, we believe Model 2 will have a better fit than Model 1.
Another reason is that Model 1 makes a human-technology distinction that has often not
been found in respondents’ minds in practice. For example, Wang and Benbasat (2005) found
that subjects had no problem attributing interpersonal characteristics to a recommendation agent
technology. Reeves and Nass (1996) generalized this finding across a number of technologies,
finding that people comfortably attribute human-like attributes to technical artifacts. Hence, we
think the Model 1 distinction between interpersonal trust and technology trust will be weaker
than the Model 2 distinction between the three major conceptual types of trust.
H3: Model 2 (distinguishing the three conceptual trusting belief types) will have a
better fit than will Model 1 (distinguishing interpersonal from technology trusting beliefs).
If H3 is true, then people relate less to Facebook’s human versus technology distinction
than to Facebook’s competence, integrity, and benevolence distinction. This implies people are
comfortable treating Facebook both as a technology and a quasi-person.
METHODOLOGY
We performed a survey in Fall 2006 to test the hypotheses. The survey used social
networking websites as the target technology. The study participants were junior and senior
business college students in a required introductory information systems course at a Midwestern
U.S. public university. College students are an appropriate sample for investigating Facebook
trusting beliefs because a sizeable percentage of Facebook users are college-aged: in 2006, 40% of unique Facebook users were aged 18-24, as were 29% in 2007 (Lipsman, 2007).
Procedure
Of the 511 students enrolled in the course, 427 students (84%) completed the paper-based
survey on a voluntary basis during class time. The survey measured the three technology trusting
beliefs, the three interpersonal trusting beliefs, reputation, privacy concern, ease of use, trusting
intention, and continuance intention. The survey instructions asked subjects to indicate one
social networking site in which they were currently a member or one site in which they might
become a member. The survey then instructed subjects to answer all remaining questions
referring to that social networking site, which the questions referred to as “MySNW.com.” Of
the 427 responses, 362 both indicated Facebook as that social networking website and stated that
they had previously used the site. These respondents were 54% male and on average 20 years
old. We used this subsample to analyze the factor structure of Facebook users’ trusting beliefs.
Testing the Hypothesized Second-Order Factor Models
We tested the hypothesized second-order factor models in three ways. First, we assessed
the convergent and discriminant validity of the first-order factors by performing a principal
components analysis in SPSS, and analyzing the measurement model via a confirmatory factor
analysis in EQS. Second, we ran a measurement model, also using EQS, for both hypothesized
second-order factor structures. We assessed the appropriateness of the second-order factor
structures following recommendations by Tanriverdi (2006). We also compared the
measurement models by examining goodness-of-fit statistics and performing chi-squared
difference tests.
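To make the comparison concrete, the two competing second-order structures can be written in lavaan-style syntax and estimated with the open-source semopy package for Python. This is a minimal sketch only, not the EQS code used in the study, and the item names (comp1-comp3, etc.) are hypothetical stand-ins for the Appendix scales.

    import semopy  # survey responses would arrive as a pandas DataFrame

    # six reflective first-order trusting beliefs (item names hypothetical)
    FIRST_ORDER = """
    Competence    =~ comp1 + comp2 + comp3
    Integrity     =~ integ1 + integ2 + integ3
    Benevolence   =~ benev1 + benev2 + benev3
    Functionality =~ func1 + func2 + func3
    Reliability   =~ rely1 + rely2 + rely3
    Helpfulness   =~ help1 + help2 + help3
    """

    # Model 1 (H1): interpersonal vs. technology second-order factors
    MODEL_1 = FIRST_ORDER + """
    InterpersonalTrust =~ Competence + Integrity + Benevolence
    TechnologyTrust    =~ Functionality + Reliability + Helpfulness
    """

    # Model 2 (H2): three conceptually paired second-order factors
    MODEL_2 = FIRST_ORDER + """
    CompetenceFunctionality =~ Competence + Functionality
    IntegrityReliability    =~ Integrity + Reliability
    BenevolenceHelpfulness  =~ Benevolence + Helpfulness
    """

    def fit_stats(description, data):
        """Estimate a model; return chi2, df, CFI, TLI (NNFI), RMSEA, AIC."""
        model = semopy.Model(description)
        model.fit(data)
        return semopy.calc_stats(model)

Comparing fit_stats(MODEL_1, data) with fit_stats(MODEL_2, data) parallels the EQS comparison reported in the Results section.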
Third, we assessed the nomological validity of the hypothesized second-order factor
structures by running structural equation models, using EQS. The structural model relationships
we examine are based on trust theory (e.g., McKnight et al., 1998, 2002a). Specifically, we
investigate reputation, privacy concern, and ease of use as antecedents to the second-order trust
factors and trusting intention and usage continuance intention as direct and indirect consequents,
respectively (see Figure 2).
[Insert Figure 2 Here]
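Under the same assumptions as the earlier sketch (semopy rather than EQS, and hypothetical item names for the antecedent and intention scales), the Figure 2 network for Model 1 simply appends regression equations to the measurement string; Model 2 would substitute its three paired second-order factors as the predictors of trusting intention.

    # builds on MODEL_1 from the earlier sketch; rep1..ci3 are hypothetical items
    STRUCTURAL_1 = MODEL_1 + """
    Reputation           =~ rep1 + rep2 + rep3
    PrivacyConcern       =~ priv1 + priv2 + priv3
    EaseOfUse            =~ eou1 + eou2 + eou3
    TrustingIntention    =~ ti1 + ti2 + ti3
    ContinuanceIntention =~ ci1 + ci2 + ci3
    # antecedents predict the second-order trust factors ...
    InterpersonalTrust ~ Reputation + PrivacyConcern + EaseOfUse
    TechnologyTrust    ~ Reputation + PrivacyConcern + EaseOfUse
    # ... which drive trusting intention and, in turn, continuance intention
    TrustingIntention    ~ InterpersonalTrust + TechnologyTrust
    ContinuanceIntention ~ TrustingIntention
    """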
The first antecedent, reputation, means that one assigns attributes to another person based
on second-hand information about the person (McKnight et al., 1998). For example, an
individual may believe that another individual has a good reputation because their friends or co-workers have said good things about that person. If one has a good reputation, another individual can develop trusting beliefs about that person even without first-hand knowledge (McKnight et
al., 2002b). The second antecedent, privacy concern, means users believe the website will protect
their personal information. Privacy concern can influence trusting beliefs because it makes the
environment feel trustworthy. One important condition, especially in the Internet environment, is
protection from the loss of privacy (McKnight et al., 2002a). Ease of use means that one
perceives that using the website is free from effort. Gefen et al. (2003) posit that experiential
factors about one’s use of a website can increase trusting beliefs. If one perceives the site is free
from effort and easy to use, one will more likely ascribe positive attributes to the trustee (Gefen
et al., 2003). Empirical research has confirmed the effects of these three antecedents on trusting
beliefs (e.g., Al Abri, McGill & Dixon, 2009; Klein, 2006; Li, Rong & Thatcher, 2009; Gefen et
al. 2003).
We also examine how strongly the hypothesized trusting belief factor structures relate to
trusting intention, which means a willingness to depend on the other person (McKnight et al.,
2002a). Trusting beliefs relate positively to trusting intention because individuals with higher
trusting beliefs will perceive the technology to have attributes that allow one to depend on it
despite possible risks (McKnight et al., 2002a). Prior research has established this relationship in
online contexts (e.g., Lowry et al., 2008; McKnight et al., 2002a, b). To complete our
nomological network, we examine the association of trusting intention with usage continuance
intention or the intention to continue using the technology (i.e., Facebook.com) beyond an initial
usage period. McKnight et al. (2002b) describe how being willing to depend on the trustee is a
volitional preparedness to become vulnerable and is typically demonstrated by engaging in
trusting behaviors. Because it is difficult to measure behaviors, many studies instead examine the
relationship between trusting intention and other behavioral intentions. For example, e-commerce researchers have found that trusting intention influences intentions to make on-site
purchases, share personal information with the site, and re-use web services (Jarvenpaa,
Tractinsky & Vitale, 2000; Turel, Yuan & Connelly, 2008). Thus we predict that individuals
with higher trusting intention will intend to continue using Facebook in the future.
Measurement Scales
The scales are shown in the Appendix. For the interpersonal trust beliefs, we adapted
prior scales from McKnight et al. (2002a) and formatted them with headers like those of
McKinney, Yoon & Zahedi (2002). We adapted the trusting intention items from McKnight et al.
(2002a) and the ease of use items from Venkatesh and Davis (1996). For continuance intention
we used items from Venkatesh et al. (2003) and made changes to reflect continuance of use. We
also added one item that refers to continuing to use Facebook in the near future. We adapted the
privacy concern items from the Smith et al. (1996) unauthorized secondary use and improper
access privacy subscales. We also added an item pertaining to individual control over privacy.
We measured functionality, helpfulness, reliability, and reputation using scales that we
developed based on a pilot test using 233 students from the same course in a previous semester.
We developed the pilot items to emphasize the constructs’ core meanings. For example, the
helpfulness items relate to guidance and help, which is its core meaning. The reputation items
relate to hearing favorable comments from others about using Facebook, which is its core
meaning. The pilot items had Cronbach’s alphas ranging from 0.88 to 0.96. We adopted all items
directly from the pilot study to the current study except for two reliability items that we changed
to refer more to the software’s consistency and accuracy. Because the pilot items referred to a
different technology, we also re-worded all items to refer to “MySNW.com.”
DATA ANALYSIS AND RESULTS
First-Order Factor Measurement Model
We first analyzed the fit and psychometric properties for the measurement model with the
six trust beliefs, reputation, privacy concern, ease of use, trusting intention, and continuance
intention to ensure that the first-order trusting beliefs are distinct. For goodness-of-fit, a non-significant χ2 statistic can indicate that the data fit the model well. However, as sample size increases the χ2 test tends to be significant, and as sample size decreases it tends to be non-significant (Schumaker & Lomax, 1996). Because our sample size is relatively large,
we present both the χ2 value (and its significance level) and the χ2/df test. For the χ2/df test a
value of 2 or less reflects good fit (Ullman, 2001), and 3 or less is acceptable (Kline, 1998). We
also examine the non-normed fit index (NNFI), the comparative fit index (CFI), the root mean
square error of approximation (RMSEA), and the Akaike information criterion (AIC). An NNFI
and CFI of at least 0.95 is suggested for good fit (Hu & Bentler, 1999). Others suggest that this
value is too stringent (Marsh, Hau & Wen, 2004), and many IS researchers consider 0.90 or
above to be adequate fit (e.g., Gefen & Ridings, 2003; Rutner, Hardgrave & McKnight, 2008;
Tanriverdi, 2006). Hu and Bentler (1999) also suggest that RMSEA should be 0.06 or lower to
represent good fit, while others believe that values less than 0.05 indicate good fit, and values as
high as 0.08 represent adequate fit (Browne & Cudeck, 1993). There are no suggested minimum
or maximum values for the AIC, because this goodness-of-fit statistic is mainly used to compare
models. Lower AIC values (sometimes the values can even be negative) indicate better fitting
and more parsimonious models (Byrne, 2006).
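These guidelines reduce to simple comparisons; the helper below (an illustrative sketch, not part of the original analysis) encodes the adequate-fit thresholds just cited.

    def assess_fit(chi2, df, nnfi, cfi, rmsea):
        """Check model fit statistics against the guidelines cited above."""
        return {
            "chi2/df": round(chi2 / df, 2),
            "chi2/df acceptable (<= 3)": chi2 / df <= 3.0,
            "NNFI adequate (>= .90)": nnfi >= 0.90,
            "CFI adequate (>= .90)": cfi >= 0.90,
            "RMSEA adequate (<= .08)": rmsea <= 0.08,
        }

    # the first-order model reported below meets every criterion:
    # assess_fit(1552.18, 805, nnfi=0.942, cfi=0.948, rmsea=0.051)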
The measurement model with the six first-order trust beliefs (no second-order factor structures), reputation, privacy concern, ease of use, trusting intention, and
continuance intention has adequate fit despite the significance of the χ2 statistic (χ2 = 1552.18, p
< .001). The NNFI was .942, the CFI was .948, the RMSEA was .051, the χ2/df was 1.93 (χ2 =
1552.18, df = 805), and the AIC was -57.82. These fit statistics are all close to or at suggested
levels.
To assess convergent validity for this model, we assessed the item factor loadings, the
internal consistency reliability (ICR), Cronbach’s alpha (CA), and the average variance extracted
(AVE). We used a principal components factor analysis in SPSS with a direct oblimin rotation
because the trusting beliefs should be correlated, as others have found. This analysis shows that
all items load on their own factor at more than the 0.70 standard (Fornell & Larcker, 1981)
except the first reliability item, which loads at 0.62, and the first competence and benevolence items, which both load at 0.68 (see Table 2). The ICR for each construct was greater than 0.80 and
the AVE for each construct was greater than 0.50 (see Table 3), which are the recommended
minimums (Fornell & Larcker, 1981). The CAs are also above the required 0.70 minimum (Hair,
Anderson, Tatham & Black, 1998). These tests indicate adequate convergent validity.
[Insert Tables 2 and 3 Here]
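For readers replicating these convergent validity checks, ICR and AVE follow directly from the standardized loadings via the usual Fornell and Larcker (1981) formulas; the loadings in this sketch are illustrative, not the study's values.

    import numpy as np

    def icr(loadings):
        """Internal consistency (composite) reliability from standardized loadings."""
        error_var = 1.0 - loadings ** 2
        return float(loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum()))

    def ave(loadings):
        """Average variance extracted."""
        return float((loadings ** 2).mean())

    lam = np.array([0.70, 0.82, 0.85])       # illustrative loadings only
    print(icr(lam) > 0.80, ave(lam) > 0.50)  # recommended minimums: True True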
We assessed discriminant validity using three methods. First, we analyzed the Lagrange
Multiplier (LM) statistics in EQS. Second, we examined the principal components cross loadings
in SPSS. Third, we compared the square root of the AVEs to each construct’s correlations with
other constructs. The LM test showed several significant but minor model misspecifications. The
incremental LM χ2 values were relatively small, ranging from 3.90 to 25.28, and the
corresponding standardized parameter change values (i.e., the parameters’ estimated values if
freely estimated in a subsequent test of the model) were also small, ranging from .04 to .33. This
test demonstrates that cross-loadings are not a problem in our data (Byrne, 2006), supporting
discriminant validity. The principal components analysis also supports discriminant validity as
the SPSS cross-loadings are well below the .30 suggested maximum (Hair et al., 1998) (see
Table 2). Finally, for each factor, the square root of the AVE is greater than the correlations in
that construct’s row or column (Chin, 1998) (see Table 3). The AVE itself is also greater than the
correlations in that construct’s row or column for all factors. This latter test constitutes stronger
evidence than the test comparing correlations with AVE square roots (Fornell & Larcker, 1981).
All the above tests support discriminant validity.
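The third discriminant test scripts naturally as a matrix comparison; in this sketch the AVEs and correlations are toy values, not those of Table 3.

    import numpy as np

    def fornell_larcker_ok(aves, corr):
        """sqrt(AVE) of each construct must exceed its correlations with all others."""
        off_diag = np.abs(corr - np.diag(np.diag(corr)))
        return bool(np.all(np.sqrt(aves) > off_diag.max(axis=1)))

    aves = np.array([0.63, 0.58, 0.71])
    corr = np.array([[1.00, 0.45, 0.38],
                     [0.45, 1.00, 0.52],
                     [0.38, 0.52, 1.00]])
    print(fornell_larcker_ok(aves, corr))    # True for these toy values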
We performed one additional test of the discriminant nature of the six trusting beliefs.
Mean difference tests between the conceptually related trust beliefs show that mean reliability is
significantly different from mean integrity (t = 3.077, p < .01), and mean functionality is
significantly different from mean competence (t = 11.073, p < .001). This test found no
difference between helpfulness and benevolence (t = .313). Thus, not only are the first two pairs
of constructs distinct, but they are also perceived at different mean levels.
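Because the same respondents rated both members of each pair, these are paired t-tests; a minimal sketch with scipy, with hypothetical column names.

    from scipy import stats

    def paired_mean_diff(df, belief_a, belief_b):
        """Paired t-test between two belief scale means for the same respondents."""
        return stats.ttest_rel(df[belief_a], df[belief_b])

    # e.g., paired_mean_diff(survey_df, "reliability", "integrity") should echo
    # the reported t = 3.077; helpfulness vs. benevolence is the null case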
We next tested for multicollinearity and common method variance at the first-order factor
level. We assessed multicollinearity by examining variance inflation factors and condition
indexes. Variance inflation factors range from 1.31 to 2.15, which are well below suggested
cutoffs of 10.00 (Hair et al., 1998) and 4.00 (Fisher & Mason, 1981). Also, condition indexes are
under the suggested maximum of 30 (Belsley, Kuh & Welsch, 1980) suggesting that
multicollinearity is not a problem in this data. We assessed common method variance by adding
a factor with all measures as indicators to the theorized model (Widaman, 1985). This model
shows that the non-normed fit index improves only minimally (.012), and the original factor
loadings are still significant (Elangovan & Xie, 1999). Therefore, we conclude that common
method variance is not a problem in this data.
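Both collinearity diagnostics are routine to compute; a generic sketch with statsmodels and numpy, where predictors is a respondents-by-beliefs data frame (not the study's own script).

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def vifs(predictors):
        """Variance inflation factors; values above the 4-10 cutoffs signal trouble."""
        X = sm.add_constant(predictors).values
        return [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]

    def condition_indexes(predictors):
        """Condition indexes from the column-scaled design matrix; flag values > 30."""
        X = sm.add_constant(predictors).values
        X = X / np.linalg.norm(X, axis=0)
        singular_values = np.linalg.svd(X, compute_uv=False)
        return singular_values.max() / singular_values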
Second-Order Factor Measurement Models
We next analyzed and compared the hypothesized second-order factor measurement
models. These models are shown in Figure 1a-b. We evaluated the appropriateness of using the
second-order structures using tests recommended by Tanriverdi (2006). The first test was to
ensure the first-order factors for each second-order construct are significantly correlated and of
moderate to high magnitude. We find the correlations among the six trusting beliefs (r = 0.28 to
0.69) are all statistically significant at p < .05 and of moderate to high magnitude (see Table 3).
The second test was to ensure the factor loadings of the first-order factors on the second-order
factors are significant. We find that the loadings range from 0.52 to 0.90 and are all significant at
p<.001 (see Table 4). The third test was to ensure the second-order measurement models have
similar fit with the first-order measurement model, remembering that the goodness-of-fit of a
higher-order model can never be better than that of its first-order model (Marsh & Hocevar,
1985). Table 5 presents the fit indices for the second-order measurement models we tested. With
the exception of the significant χ2 statistic, all the fit statistics are within or close to suggested
guidelines. Also, these fit statistics are similar to those of the first-order measurement model. We
also calculated the target (T) coefficient, the χ2 of the first-order factor model divided by the χ2 of the second-order factor model. It indicates how much variance in the first-order factors the second-order factors explain and is an alternative way to compare the fit of a
second-order model with that of the first-order model (Marsh & Hocevar, 1985). We find that the
target coefficient value for Model 1 is .88 and for Model 2 is .95 (see Table 5). These values
suggest that the second-order factors explain a sufficient amount of variance in the first-order
factors. In all, these tests demonstrate the appropriateness of the second-order factor models, and
show support for H1 and H2.
[Insert Tables 4 and 5 Here]
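Both comparison statistics reduce to simple arithmetic on the reported χ2 values; a sketch with scipy:

    from scipy.stats import chi2 as chi2_dist

    def target_coefficient(chi2_first_order, chi2_second_order):
        """Marsh & Hocevar's T: chi2 of the first-order model / chi2 of the second-order model."""
        return chi2_first_order / chi2_second_order

    def chi2_difference_p(chi2_a, df_a, chi2_b, df_b):
        """p-value for the chi-squared difference test between nested models."""
        return float(chi2_dist.sf(abs(chi2_a - chi2_b), abs(df_a - df_b)))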
Given that the second-order factor models are appropriate, we then compare their
measurement models. We find that the H2 model (Model 2) is the better fitting model with a
lower χ2/df statistic, a higher NNFI and CFI, and a lower RMSEA than the H1 model (Model 1)
(see Table 5). The chi-square difference test and the AIC statistic also show that Model 2 is
better fitting than Model 1 (see Table 5). Therefore H3 is supported.
Structural Equation Models: Nomological Validity
We then tested how both second-order factor models behave in a nomological network
(see Figure 2). If the second-order constructs predict and are predicted by the same kinds of
variables used in other studies, this lends additional support for Models 1 and 2. To do so, we ran
three structural models. The first structural model used only the first-order factors, and had
reputation, privacy concern, and ease of use as antecedents to the six trusting beliefs and trusting
intention and continuance intentions as direct and indirect consequents to these beliefs,
respectively. The other two structural models used second-order factors and thus were used to
test the nomological validity of Models 1 and 2. The results are presented in Table 6. In the first-order factor model, the antecedents all have significant relationships with the trusting beliefs
except for the reputation-helpfulness, ease of use-integrity, and ease of use-benevolence
relationships. Also, all first-order trusting beliefs significantly influence trusting intention except
integrity. For second-order factor models to be appropriate, not all the first-order factor
relationships have to be significant (Tanriverdi, 2006). In fact, one benefit of using second-order factors is that they can predict better than the individual first-order factors.
[Insert Table 6 Here]
We find that for the second-order structural models, the antecedents significantly
influence the second-order trust factors. The only exception is the non-significant effect of ease
of use on the benevolence-helpfulness second-order factor in Model 2. In both models, the
second-order trust factor(s) significantly predict trusting intention with the exception of the
integrity-reliability factor in Model 2. Also, in both models, trusting intention significantly
predicts usage continuance intentions. These results show that in general both hypothesized
second-order factor structures have nomological validity.
As a supplementary analysis, we included the direct paths from the trusting beliefs to
continuance intention in Models 1 and 2 in addition to the indirect paths through trusting
intention (see Table 7). We then tested for mediation of trusting intention on the trusting
beliefs—continuance intention relationship following the Sobel method (Sobel, 1982, 1986),
which calculates the significance of the indirect path. For Model 1, we find that the indirect paths
from both interpersonal trust (β = .09, p < .01) and technology trust (β = .17, p < .001) are
significant (see Table 7). Because in Model 1 technology trust has a significant direct effect on
continuance intention (β = .28, p < .001) and interpersonal trust does not, the mediation of
trusting intention is partial for technology trust and full for interpersonal trust. For Model 2, we
find that the indirect path to continuance intention through trusting intention is significant for
both benevolence-helpfulness (β = .09, p < .01) and competence-functionality (β = .14, p < .001).
Because in this model only the competence-functionality trusting belief has a significant direct
effect on continuance intention (β = .36, p < .001), trusting intention has a partial mediation
effect for competence-functionality and a full mediation effect for benevolence-helpfulness.
Adding the direct paths from the trusting beliefs second-order factors to continuance intentions
does not significantly change the other relationships reported in Table 6.
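The Sobel method reduces to a z statistic on the product of the two unstandardized path estimates a and b, given their standard errors; a minimal sketch:

    from math import sqrt
    from scipy.stats import norm

    def sobel_test(a, se_a, b, se_b):
        """Sobel z for the indirect effect a*b, with a two-tailed p-value."""
        z = (a * b) / sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
        return z, 2 * norm.sf(abs(z))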
DISCUSSION AND LIMITATIONS
Because trust can be an important factor in social networking use, this study explored
Facebook users’ trust beliefs to determine what it means to say one trusts Facebook. No research
to date has contrasted the interpersonal versus technological attributes comprising Facebook
trust. Our study’s main objective was to explore two alternative second-order factor structures
composed of both interpersonal and technology trust beliefs. Other research examining
technology trust has examined either interpersonal trust attributes or technology trust attributes,
but has not compared them in one study. Our Model 1 depicts interpersonal trusting beliefs and
technology trusting beliefs as separate second-order constructs (H1), while Model 2 depicts each
trusting belief conceptual pair as a second-order construct (H2). We first tested and compared the
second-order factor measurement models. Then we assessed how the second-order factors
behave in a nomological network including reputation, privacy concern, ease of use, trusting
intention, and usage continuance intention. Our findings contribute to research on trust, social
networking, and information systems continuance in general. We now discuss these findings and
provide both research and practice implications.
Measurement Model Comparison Implications
By testing and comparing the two alternative second-order factor measurement models,
we find that both models have adequate (or close to adequate) fit. However, the measurement
model with conceptually-related trusting beliefs (competence-functionality, integrity-reliability,
and benevolence-helpfulness) as separate second-order factors (Model 2) is the better fitting
measurement model. We found that each of the six trusting belief types is discriminant from the
others. So to this extent respondents distinguish the technology-related trust characteristics of
Facebook from its human-like trust characteristics. However, they distinguish Facebook trusting
characteristics based on the characteristics’ conceptual nature (Model 2) more than they do the
human-versus-technology categories (Model 1). For example, our respondents think the
website’s reliability models better with integrity than with helpfulness. This confirms the theory
that technology reliability really is an integrity-related construct. Similarly, we find helpfulness
models well with benevolence, and functionality with competence. Also, the fact that
respondents distinguish between human- and technology trust to a lesser degree suggests that
they are relatively comfortable attributing both human and technology attributes to Facebook. In
fact, our findings suggest that users blend human demonstrations of trust with technology
demonstrations of trust. This could be because users think of the Facebook website both as a
technology and a quasi-person, even though it is a technical artifact.
Future research will benefit from our findings by reflecting individuals’ Facebook trust as
conceptually-related trust belief pairs, rather than as trust beliefs that distinguish between the
technology-like and human-like characteristics. Each second-order construct not only reflects
two conceptually-related trust beliefs, but also the interrelationships between them. Future
research should verify whether our results translate to other social networking websites. For
example, Dwyer et al. (2007) find that there are some differences in trust between Facebook and
MySpace users. Also, because social networking website interfaces, privacy policies, and
functionality change over time, and sometimes in response to user feedback, researchers should
explore whether these second-order trust beliefs and their structures change over time.
Structural Model Nomological Validity Implications
The two structural models we examine test the nomological validity of the alternative
second-order structures. We find that all paths depicted in Figure 2 are generally significant. This
implies that the Models 1 and 2 second-order factors behave as predicted by trust theory, further
confirming their validity. Research could explore how Models 1 and 2 constructs behave in other
trust-related models. For example, research could include trust beliefs in the extended privacy-calculus model (Dinev & Hart, 2006) and the TAM-Trust model (Gefen et al., 2003).
An important research implication is that the antecedents predicted the trusting beliefs
differently, revealing something about the antecedents' nature. Table 6, Model 1 column shows that reputation predicts interpersonal trust (β = .29) slightly better than technology trust (β = .23). Similarly, privacy concern predicts interpersonal trust (β = .45) better than it does technology trust (β = .25). This likely indicates that privacy concern and reputation are interpersonal in nature. By contrast, ease of use predicts technology trust (β = .49) better than it does interpersonal trust (β = .28). This makes sense because ease of use is about the technology, not the person.
Table 6, Model 2 column shows reputation relates about equally to competence-functionality (β = .29) and integrity-reliability (β = .28), but relates slightly less to benevolence-helpfulness (β = .21). This suggests that individuals are more influenced by second-hand information related to Facebook's ability and honesty than by whether it acts in their best interest. Privacy concern clearly relates better to benevolence-helpfulness (β = .55) and integrity-reliability (β = .43) than to competence-functionality (β = .16). This finding suggests that privacy concern increases trust mostly because it gives users increased confidence that the site will act in their best interest and be ethical. By contrast, our findings show that individuals relate ease of use more to competence-functionality (β = .58) than to integrity-reliability (β = .19) or benevolence-helpfulness (β = .08). This makes privacy concern and ease of use complementary
predictors. Overall our findings can help future researchers clarify which trust antecedents are
most effective predictors of the specific trusting beliefs and their second-order factors. Further,
these findings might be able to explain non-significant findings regarding these antecedents (e.g.,
Li et al., 2008).
This research also contributes by presenting a mechanism (trusting intention) by which
trusting beliefs in social networking websites increase continuance intention. Specifically, the
findings show that individuals’ trusting beliefs make them willing to depend or make themselves
vulnerable to the social networking website, which in turn increases future use. We did find that
the integrity-reliability conceptual pair has a non-significant effect on trusting intention. This
suggests that it was perhaps hard for subjects to connect Facebook's honesty, truthfulness, and consistency with their willingness to become vulnerable to it. Our findings imply that
users may be more influenced by certain trust-related conceptual pairs than others when taking
risks with technologies. Future research should explore this finding further.
Also, the finding that trusting intention only partially mediates the effects of technology
trust is important. In the past, most research has used interpersonal trust beliefs, which are fully
mediated in their effects on continuance intention. We find that technology trust was a more
powerful predictor than interpersonal trust, and had a direct effect on Facebook continuance
intentions, suggesting that researchers should consider its use in such studies. Similarly, the
competence-functionality construct had a direct effect on continuance intention, while the other
two conceptual second-order factors did not predict as well. These mediation findings imply that
future research should include trusting intention when examining social networking trust beliefs
and continuance intentions. Future research could examine other possible belief-intention
mediators such as attitudes.
Another contribution our study makes is the development of three technology trust
beliefs—functionality, reliability, and helpfulness—that are derived from the three most common
interpersonal trust beliefs—competence, integrity, and benevolence, respectively. While other
researchers have developed technology trust beliefs (e.g., Lippert, 2001; Muir & Moray 1996),
our study makes the conceptual linkage between trust beliefs based on how humans demonstrate
the trust characteristics versus how technology demonstrates the trust characteristics. This
process is important for ensuring that we do not ascribe human traits to a technology
inappropriately. Future research should explore whether there are more technology trust beliefs
than the three we examined. For example, interpersonal trust includes other trust beliefs, such as
predictability (Rempel et al., 1985) and carefulness (Gabarro, 1978). Likewise, researchers may
find that certain technologies demonstrate additional trust characteristics that users consider
important when deciding to use a technology.
Several limitations to the study exist. First, this study uses only one data set. Just because
a model fits one sample or one technology does not imply that it fits all (Doll, Xia & Torkzadeh,
1994). There may be other technologies in which Model 2 is not the best fitting or most
parsimonious model. For example, research involving technology that tends to have strong
human-like characteristics (e.g., recommendation agents) may find Model 1 is the best model.
Second, because we only examined usage intentions, our research does not specifically test the
extent to which technology trust increases use of social networking websites. While technology
trust has been shown to increase use of other technologies, this area needs more research. Third,
we examine subjects from just one university. While these subjects appear to represent typical
Facebook users in terms of age at the time of the study, students at other colleges and
universities, and older individuals may have different perceptions relating to the Facebook
website. Also, more recent statistics show that the largest percentage of Facebook users are people aged 35-54, and the highest-growth age group is people aged 55 plus (Corbett, 2010). Future
research should examine more diverse samples to enhance generalizability.
Implications for Practice
Our results have practical implications that are important for managers and developers
who are trying to help social networking users deal with technology risk. While many factors
exist that organizations should consider when developing software, we show that they should
also try to make the software more trustworthy. In considering ways to do this, practitioners
should keep in mind that Facebook users perceive the website as a technology and as a quasi-person. Hence, on the technology side, practitioners should consciously work on giving the website use experience the desired functionality and making it more reliable and more helpful to
users. Website simplicity and following usability guidelines like those of Nielsen and Loranger
(2006) will help. On the interpersonal side, practitioners should maintain a good reputation for
integrity, competence and benevolence towards users. This can be done both by making the
website use experience positive and by nurturing a positive image through press releases that
emphasize these three qualities. Also, by respecting personal privacy issues and by putting user
needs over profit motive, website firms can maintain a positive image in the public eye.
Users also distinguish among the three conceptually similar trusting characteristics more
than they distinguish the human-like trusting characteristics from the technology-like trusting
characteristics. That is, each conceptual pair (e.g., competence-functionality) forms a tight duo.
This implies that practitioners should focus on the conceptual types of trust regardless of whether
they reflect the technology's human or technology characteristics. For example, the benevolence-helpfulness duo is about the principle of putting user needs first. This principle should be kept in
mind in every change to the way business is done (organizationally) and in every change to the
web site operation itself (technically). Every helpful website change may influence user beliefs
about both helpfulness and benevolence. Every functional website improvement may influence
both user functionality belief and competence belief.
Practitioners should also be aware that certain trusting beliefs have more influence on
trusting intentions than others. We find competence, functionality, and benevolence to be the
most highly correlated with both trusting intention and continuance intention. When placed in
second-order models, technology trust predicts trusting intention better than interpersonal trust
does, and competence-functionality predicts trusting intention better than benevolence-helpfulness,
which in turn predicts better than integrity-reliability. This implies that competence and
functionality are the two most important trusting beliefs for Facebook users. Social network
providers should emphasize these attributes in their design decisions by providing excellence
and functional richness in how the website operates. If the social networking website provides
the functionality that users want, users will be more willing to depend on it to network socially
online. Next in predictive power was benevolence, suggesting social network providers should
show they have the interests of the user in mind. For example, social networking companies
should show they wish to address user needs, and they should emphasize the security aspects of
their websites that can keep users safe from unwanted solicitation and identity theft.
While other factors may also influence intentions to continue using a social networking
website, our study finds that the more a user becomes willing to depend on a social networking
website, the more likely that user is to continue using it (β = 0.50, p < .001). Because some trust
beliefs (technology trust and competence-functionality) have direct effects on continuance
intention that are only partially mediated by trusting intention, these beliefs are also important
for ensuring that users continue to use the website.
Managers and developers should also recognize the factors that are most likely to
increase trust in social networking websites. For example, we find that privacy and ease of use
have moderate to large influences on trust beliefs. However, these perceptions drive trust in
different ways. Therefore, not only should practitioners focus on ensuring confidentiality of
personal information, they should also ensure the website is usable. Competition among social
networking websites can make these factors even more critical to maintaining memberships and
sustaining growth. For example, users may be more willing to switch to another social
networking website because they perceive lower risk with the new site due to its greater ease of
use and its better privacy policies.
Another practical implication is that because our study shows a variety of trusting
beliefs matter to Facebook users' trusting and continuance intentions, web developers and
managers should monitor and track user trusting beliefs. They could collect this information
through online surveys, focus groups, or related blogs; the questions shown in the Appendix can
be used. Managers could also monitor whether the trusting beliefs and their influences change
over time and whether they differ across demographics such as age or gender.
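To make such monitoring concrete, the sketch below shows one way responses to the Appendix items could be scored and summarized. It is only an illustration: the pandas DataFrame, the item names, and the two-item blocks are hypothetical stand-ins for the full instrument, not tooling from this study.

```python
# A minimal sketch of tracking user trusting beliefs from survey responses to
# the Appendix items. The DataFrame `responses`, the item names, and the
# construct-to-item mapping are illustrative assumptions.
import pandas as pd

# Hypothetical responses: one row per user, items on 7-point Likert scales.
responses = pd.DataFrame({
    "functionality_1": [6, 5, 7], "functionality_2": [6, 6, 7],
    "reliability_1":   [4, 5, 5], "reliability_2":   [4, 4, 6],
    "helpfulness_1":   [3, 4, 4], "helpfulness_2":   [3, 5, 4],
})

construct_items = {
    "functionality": ["functionality_1", "functionality_2"],
    "reliability":   ["reliability_1", "reliability_2"],
    "helpfulness":   ["helpfulness_1", "helpfulness_2"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

for construct, item_names in construct_items.items():
    block = responses[item_names]
    print(construct, "mean =", round(block.mean().mean(), 2),
          "alpha =", round(cronbach_alpha(block), 2))
```

Rerunning such a script on each survey wave would show whether belief levels (and their reliabilities) shift over time or across demographic groups.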
CONCLUSION
What does it mean to trust Facebook? This study contributes by showing that Facebook
users trust the website as both a technology and a quasi-person. They appear to relate to
Facebook both as a technology (in terms of functionality, reliability, and helpfulness beliefs) and
as a "person" (in terms of competence, integrity, and benevolence beliefs). Model 2 performs
better than Model 1; thus, people distinguish more clearly among Facebook's conceptual trust
attributes (Model 2) than between Facebook's personal versus technological nature (Model 1).
This result suggests people conceive of Facebook as both a technology and a quasi-person. We
also find that certain trusting beliefs, like competence and functionality, predict trusting
intention towards Facebook better than other beliefs, and that technology trust beliefs predict
Facebook use continuance intention better than interpersonal beliefs do. This research provides
many opportunities for future researchers to extend these findings and further our understanding
of what it means to trust a technology artifact like Facebook.
REFERENCES
Al Abri, D., McGill, T., and Dixon, M. (2009). Examining the Impact of E-Privacy Risk
Concerns on Citizens’ Intentions to Use E-Government Services: An Oman Perspective.
Journal of Information Privacy & Security, 5(2), 3-26.
Bausch, S., and Han, L. (2006). Podcasting Gains an Important Foothold among U.S. Adult
Online Population. Nielsen/NetRatings. Retrieved 9/2/2007, from
http://www.nielsen-netratings.com/pr/pr_060712.pdf.
Belsley, D. A., Kuh, E., and Welsch, R. E. (1980). Regression Diagnostics: Identifying
Influential Data and Sources of Collinearity, New York: John Wiley & Sons.
Bhattacherjee, A. (2002). Individual Trust in Online Firms: Scale Development and Initial Trust,
Journal of Management Information Systems, 19(1), 213-243.
Browne, M. W., and Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen
and J. S. Long (Eds.). Testing Structural Equation Models, 136-161. Newbury Park, CA:
Sage.
Byrne, B. M. (2006). Structural Equation Modeling with EQS, Mahwah, NJ: Lawrence Erlbaum
Associates, Inc.
Chin, W. (1998). The partial least squares approach to structural equation modeling. In G. A.
Marcoulides (Ed.). Modern Methods for Business Research, 295-336. Mahwah, NJ:
Lawrence Erlbaum Associates.
Corbett, P. (2010). Facebook demographics and statistics report 2010 – 145% growth in 1 year.
Retrieved 05/20/10, from http://www.istrategylabs.com/2010/01/facebook-demographics-and-statistics-report-2010-145-growth-in-1-year/.
Diamantopoulos, A., Riefler, P., and Roth, K. P. (2008). Advancing Formative Measurement
Models. Journal of Business Research, 61, 1203-1218.
Dinev, T., and Hart, P. (2006). An Extended Privacy Calculus Model for E-Commerce
Transactions. Information Systems Research, 17(1), 61-80.
Doll, W. J., Xia, W., and Torkzadeh, G. (1994). A Confirmatory Analysis of the End-User
Computing Satisfaction Instrument. MIS Quarterly, 18(4), 453-461.
Dwyer, C., Hiltz, S. R., and Passerini, K. (2007). Trust and Privacy Concern within Social
Networking Sites: A Comparison of Facebook and MySpace. Proceedings of the Thirteenth
Americas Conference on Information Systems, Keystone Colorado, August 9 -12.
Edwards, J. R. (2001). Multidimensional Constructs in Organizational Behavior Research: An
Integrative Analytical Framework. Organizational Research Methods, 4(2), 144-182.
Elangovan, A. R., and Xie, J. L. (1999). Effects of Perceived Power of Supervisor on
Subordinate Stress and Motivation: The Moderating Role of Subordinate Characteristics.
Journal of Organizational Behavior, 20, 359-373.
Fisher, J., and Mason, R. (1981). The analysis of multicollinear data in criminology. In J. Fox
(Ed.). Quantitative Criminology. New York: Academic Press.
Fogel, J., and Nehmad, E. (2009). Internet Social Network Communities: Risk Taking, Trust,
and Privacy Concerns. Computers in Human Behavior, 25, 153-160.
Fornell, C., and Larcker, D. F. (1981). Evaluating Structural Equations with Unobservable
Variables and Measurement Error. Journal of Marketing Research, 18, 39-50.
Friedman, B., Kahn, Jr., P. H., and Howe, D. C. (2000). Trust Online. Communications of the
ACM, 43(12), 34-40.
Gabarro, J. J. (1978). The development of trust, influence, and expectations. In A. G. Athos and J.
J. Gabarro (Eds.). Interpersonal Behavior: Communication and Understanding in
Relationships, 290-303. Englewood Cliffs, NJ: Prentice-Hall.
Gefen, D., Karahanna, E., and Straub, D. W. (2003). Trust and TAM in Online Shopping: An
Integrated Model. MIS Quarterly, 27(1), 51-90.
Gefen, D., and Ridings, C. M. (2003). IT Acceptance: Managing Use—IT Group Boundaries.
Database for Advances in Information Systems, 34(3), 25-30.
Hair, J. F., Anderson, R. E., Tatham, R. L., and Black W. C. (1998). Multivariate Data Analysis,
Upper Saddle River: Prentice Hall.
Hu, L., and Bentler, P. M. (1999). Cutoff Criteria for Fit Indexes in Covariance Structure
Analysis: Conventional Criteria versus New Alternatives. Structural Equation Modeling,
6(1), 1-55.
Jarvenpaa, S. L., Tractinsky, N., and Vitale, M. (2000). Consumer Trust in an Internet Store.
Information Technology and Management, 1(1-2), 45-71.
Kennedy, M., and Sakaguchi, T. (2009). Trust in social networking: Definitions from a global,
cultural viewpoint. In C. Romm-Livermore and K. Setzekorn (Eds.) Social Networking
Communities and E-Dating Services: Concepts and Implications, 225-238. Hershey, PA:
Information Science Reference.
Kim, D. (2008). Self-Perception Based versus Transference-Based Trust Determinants in
Computer-Mediated Transactions: A Cross-Cultural Comparison Study. Journal of
Management Information Systems, 24(4), 13-45.
Klein, R. (1996). Internet-Based Patient-Physician Electronic Communication Applications:
Patient Acceptance and Trust. e-Service Journal, 5(2), 27-51.
Kline, R. B. (1998). Principles and Practice of Structural Equation Modeling, New York: The
Guildford Press.
Komiak, S. Y. X., and Benbasat, I. (2006). The Effects of Personalization and Familiarity on
Trust and Adoption of Recommendation Agents. MIS Quarterly, 30(4), 941-960.
Law, K. S., Wong, C. S., and Mobley, W. H. (1998). Towards a Taxonomy of Multidimensional
Constructs. The Academy of Management Review, 23(4), 741-755.
Law, K. S., and Wong, C. S. (1999). Multidimensional Constructs in Structural Equation
Analysis: An Illustration Using the Job Perception and Job Satisfaction Constructs. Journal
of Management, 25(2), 143-160.
Lee, J. D., and See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance.
Human Factors, 46(1), 50-81.
Lee, M. K. O., and Turban, E. (2001). A Trust Model for Consumer Internet Shopping.
International Journal of Electronic Commerce, 6(1), 75-91.
Lévi-Strauss, C. (1968). The Savage Mind. Chicago: University of Chicago Press.
Lewicki, R. J., McAllister, D. J., and Bies, R. J. (1998) Trust and distrust: New relationships and
realities. Academy of Management Review, 23(3), 438-458.
Li, X., Rong, G., and Thatcher, J. (2009). Swift Trust in Web Vendors: The Role of Appearance
and Functionality. Journal of Organizational and End User Computing, 21(1), 88-108.
Lippert, S. K. (2001). An Exploratory Study into the Relevance of Trust in the Context of
Information Systems Technology. PhD Dissertation, The George Washington University.
Lippert, S. K. (2007). Investigating Post-adoption Utilization: An Examination into the Role of
Interorganizational and Technology Trust. IEEE Transactions on Engineering Management,
54(3), 468-483.
Lippert, S. K., and Swiercz, P. (2005). Human Resources Information Systems (HRIS) and
Technology Trust. Journal of Information Science, 31(5), 340-353.
Lipsman, A. (2007). Facebook Sees Flood of New Traffic from Teenagers and Adults.
comScore. Retrieved 8/14/2007, from
http://www.comscore.com/press/release.asp?press=1519.
Lowry, P. B., Vance, A., Moody, G., Beckman, B., and Read, A. (2008). Explaining and
Predicting the Impact of Branding Alliances and Web Site Quality on Initial Consumer Trust
of e-Commerce Web Sites. Journal of Management Information Systems, 24(4), 199-224.
MacKenzie, S. B., Podsakoff, P. M., and Jarvis, C. B. (2005). The Problem of Measurement
Model Misspecification in Behavioral and Organizational Research and Some Recommended
Solutions. Journal of Applied Psychology, 90(4), 710-730.
Marsh, H. W., Hau, K. T., and Wen, Z. (2004). In Search of Golden Rules: Comment on
Hypothesis Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in
Overgeneralizing Hu and Bentler’s (1999) Findings. Structural Equation Modeling, 11(3),
320-341.
Marsh, H. W., and Hocevar, D. (1985). Application of Confirmatory Factor Analysis to the
Study of Self-Concept: First and Higher Order Factor Models and Their Invariance across
Groups. Psychological Bulletin, 97, 562-582.
Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An Integrative Model of
Organizational Trust. The Academy of Management Review, 20(3), 709-734.
McKinney, V., Yoon, K. and Zahedi, F. M. (2002). The Measurement of Web Customer
Satisfaction: An Expectation and Disconfirmation Approach. Information Systems Research,
13(3), 296-315.
McKnight, D. H. (2005). Trust in Information Technology. In G.B. Davis (Ed.). The Blackwell
Encyclopedia of Management, Management Information Systems, 7, 329-331. Malden, MA:
Blackwell.
McKnight, D. H., Choudhury, V., and Kacmar, C. J. (2002a). Developing and Validating Trust
Measures for e-Commerce: An Integrative Typology. Information Systems Research, 13(3),
334-359.
McKnight, D. H., Choudhury, V. and Kacmar, C. J. (2002b). The Impact of Initial Consumer
Trust on Intentions to Transact with a Web Site: A Trust Building Model. Journal of
Strategic Information Systems, 11(3), 297-323.
McKnight, D. H., Cummings, L. L., and Chervany, N. L. (1998). Initial Trust Formation in New
Organizational Relationships. Academy of Management Review, 23(3), 473-490.
Muir, B. M., and Moray, N. (1996). Trust in Automation. Part II. Experimental Studies of Trust
and Human Intervention in a Process Control Simulation. Ergonomics, 39(3), 429-460.
Nielsen, J. and Loranger, H. (2006). Prioritizing Web Usability. Berkeley, CA: New Riders.
Petter, S., Straub, D. W., and Rai, A. (2007). Specifying Formative Constructs in Information
Systems Research. MIS Quarterly, 31(4), 623-656.
Podsakoff, N., Shen, W., and Podsakoff, P. M. (2006). The role of formative measurement
models in strategic management research: Review, critique and implications for future
research. In D. Ketchen and D. Bergh (Eds.). Research Methodology in Strategic
Management, 197-252. Greenwich, CT: JAI Press.
Reeves, B., and Nass, C. (1996). The Media Equation: How People Treat Computers, Television,
and New Media like Real People and Places. Cambridge: Cambridge University Press.
Rempel, J. K., Holmes, J. G., and Zanna, M. P. (1985). Trust in Close Relationships. Journal of
Personality and Social Psychology, 49, 95-112.
Rutner, P. S., Hardgrave, B. C., and McKnight, D. H. (2008). Emotional Dissonance and the
Information Technology Professional. MIS Quarterly, 32(3), 635-652.
Schumacker, R. E., and Lomax, R. G. (1996). A Beginner’s Guide to Structural Equation
Modeling, Mahwah, NJ: Lawrence Erlbaum Associates.
Segars, A. H., and Grover, V. (1998). Strategic Information Systems Planning Success: An
Investigation of the Construct and its Measurement. MIS Quarterly, 22(2), 139-163.
Sledgianowski, D., and Kulviwat, S. (2009). Using Social Network Sites: The Effects of
Playfulness, Critical Mass and Trust in an Hedonic Context. The Journal of Computer
Information Systems, 48(4), 74-83.
Smith, H. J., Milberg, S. J., and Burke, S. J. (1996). Information Privacy: Measuring Individuals’
Concerns About Organizational Practices. MIS Quarterly, 20(2), 167-196.
Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equation
models. In S. Leinhart (Ed.) Sociological Methodology, 290-312. San Francisco: Jossey-Bass.
Sobel, M. E. (1986). Some new results on indirect effects and their standard errors in covariance
structure models. In N. Tuma (Ed.) Sociological Methodology, 159-186, Washington D.C.:
American Sociological Association.
Straub, D., Boudreau, M. C., and Gefen, D. (2004). Valuation Guidelines for IS Positivist
Research. Communications of the Association for Information Systems, 13, 380-427.
Tanriverdi, H. (2006). Performance Effects of Information Technology Synergies in
Multibusiness Firms. MIS Quarterly, 30(1), 57-77.
Turel, O., Yuan, Y., and Connelly, C. E. (2008). In Justice we Trust: Predicting User Acceptance
of e-Customer Services. Journal of Management Information Systems, 24(4), 123-151.
Ullman, J. B. (2001). Structural equation modeling. In B. G. Tabachnick and L. S. Fidell (Eds.).
Understanding Multivariate Statistics, 4th edition, 653-771. Needham Heights, MA: Allyn
and Bacon.
Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Venkatesh, V., and Davis, F. D. (1996). A Model of the Antecedents of Perceived Ease of Use:
Development and Test. Decision Sciences, 27(3), 451-481.
Wang, W., and Benbasat, I. (2005). Trust in and Adoption of Online Recommendation Agents.
Journal of the Association for Information Systems, 6(3), 72-101.
Widaman, K. (1985). Hierarchically Nested Covariance Structure Models for Multitrait-Multimethod
Data. Applied Psychological Measurement, 9(1), 1-23.
Figure 1. Hypothesized Second-Order Factor Models: (a) Model 1 (H1); (b) Model 2 (H2)
Figure 2. Structural Model: Nomological Validity Test
Table 1. Conceptual Origins of Technology Trust Beliefs

Functionality: the degree to which one anticipates the technology will have the functions or
features needed to accomplish one's task(s). Conceptual origins:
- Trust in online environments and online agents: Competence, i.e., the trustee has the ability,
  skills, and expertise to perform effectively in specific domains (Wang and Benbasat, 2005, p. 76).
- Trust in automation: Competence, i.e., the extent the technology performs its functions
  properly (Muir and Moray, 1996, p. 434).
- Interpersonal trust: Ability (Competence), i.e., the group of skills, competencies, and
  characteristics that enable a party to have influence within some specific domain (Mayer et
  al., 1995, p. 717).

Reliability: the degree to which an individual anticipates the technology will continually operate
properly, or will operate in a consistent, flawless manner. Conceptual origins:
- Trust in information systems: Reliability, i.e., the technology is fully functioning and not
  experiencing system downtime when completing job-related tasks (Lippert, 2001).
- Trust in online environments and online agents: Integrity, i.e., an individual believes that the
  trustee adheres to a set of principles (Wang and Benbasat, 2005, p. 76).
- Trust in automation: Reliability, i.e., the extent the technology responds similarly to similar
  circumstances at different points in time (Muir and Moray, 1996, p. 434).
- Interpersonal trust: Integrity, i.e., the trustor's perception that the trustee adheres to a set of
  principles that the trustor finds acceptable (Mayer et al., 1995, p. 719).

Helpfulness: the degree to which an individual anticipates the technology will provide adequate
and responsive help. Conceptual origins:
- Trust in online environments and online agents: Benevolence, i.e., the trustee cares about her
  and acts in her interests (Wang and Benbasat, 2005, p. 76).
- Interpersonal trust: Benevolence, i.e., the extent to which the trustee is believed to want to do
  good to the trustor, aside from an egocentric motive (Mayer et al., 1995, p. 718).
Table 2. Factor Loadings*

Item                      1    2    3    4    5    6    7    8    9    10   11
Reliability 1            .62  .08  .09  .15  .11  .09  .14  .04  .12  .07  .03
Reliability 2            .82  .02  .01  .07  .05  .05  .04  .02  .13  .01  .01
Reliability 3            .83  .04  .02  .01  .05  .07  .02  .02  .07  .08  .07
Reliability 4            .89  .03  .05  .05  .07  .02  .02  .03  .08  .04  .03
Reliability 5            .86  .01  .05  .02  .09  .09  .01  .01  .09  .02  .02
Functionality 1          .12  .74  .01  .03  .00  .07  .12  .07  .04  .02  .13
Functionality 2          .08  .80  .01  .09  .01  .03  .06  .03  .03  .10  .01
Functionality 3          .05  .79  .06  .01  .10  .01  .11  .11  .00  .03  .02
Functionality 4          .06  .75  .10  .03  .15  .01  .01  .00  .05  .01  .00
Helpfulness 1            .00  .02  .85  .01  .03  .01  .03  .08  .01  .06  .02
Helpfulness 2            .02  .02  .90  .03  .01  .02  .02  .03  .01  .04  .01
Helpfulness 3            .01  .05  .91  .02  .01  .01  .02  .03  .01  .01  .01
Helpfulness 4            .01  .02  .85  .08  .08  .05  .00  .07  .04  .05  .00
Integrity 1              .00  .06  .06  .89  .05  .01  .03  .04  .02  .01  .01
Integrity 2              .01  .02  .02  .94  .00  .03  .04  .01  .04  .02  .01
Integrity 3              .02  .00  .01  .86  .04  .07  .02  .01  .03  .00  .02
Integrity 4              .05  .06  .01  .81  .09  .10  .01  .01  .05  .03  .01
Competence 1             .02  .18  .05  .09  .68  .03  .10  .00  .12  .02  .01
Competence 2             .02  .01  .00  .10  .83  .01  .06  .01  .09  .03  .05
Competence 3             .03  .04  .01  .01  .83  .02  .07  .02  .06  .04  .03
Competence 4             .07  .09  .02  .02  .72  .14  .01  .01  .03  .08  .03
Benevolence 1            .04  .01  .05  .17  .11  .68  .04  .05  .10  .02  .11
Benevolence 2            .01  .00  .14  .00  .01  .85  .02  .01  .02  .04  .03
Benevolence 3            .01  .01  .02  .06  .00  .86  .00  .07  .02  .02  .00
Reputation 1             .01  .08  .00  .02  .09  .01  .83  .01  .08  .03  .04
Reputation 2             .02  .06  .02  .00  .05  .04  .85  .03  .03  .03  .02
Reputation 3             .03  .04  .07  .03  .09  .02  .86  .06  .06  .01  .00
Reputation 4             .08  .07  .02  .01  .06  .03  .85  .04  .02  .05  .02
Privacy concern 1        .01  .11  .05  .01  .01  .12  .10  .72  .01  .05  .01
Privacy concern 2        .02  .12  .07  .02  .12  .08  .12  .72  .05  .00  .06
Privacy concern 3        .01  .06  .02  .04  .06  .01  .08  .84  .10  .00  .09
Privacy concern 4        .01  .01  .02  .06  .05  .08  .03  .83  .09  .01  .01
Ease of Use 1            .02  .02  .02  .09  .01  .12  .04  .01  .83  .08  .05
Ease of Use 2            .03  .04  .06  .02  .17  .02  .02  .07  .81  .03  .09
Ease of Use 3            .07  .10  .06  .08  .08  .17  .05  .02  .71  .02  .01
Trusting Intention 1     .01  .10  .05  .03  .11  .03  .00  .00  .03  .84  .06
Trusting Intention 2     .01  .01  .01  .06  .06  .02  .02  .00  .02  .89  .01
Trusting Intention 3     .01  .03  .04  .05  .04  .01  .00  .02  .01  .97  .03
Trusting Intention 4     .04  .01  .03  .02  .04  .02  .02  .04  .02  .89  .03
Continuance Intention 1  .01  .02  .02  .01  .03  .01  .01  .00  .00  .00  .98
Continuance Intention 2  .00  .01  .02  .02  .02  .01  .01  .01  .04  .02  .95
Continuance Intention 3  .02  .00  .02  .00  .01  .00  .00  .01  .00  .02  .97
Continuance Intention 4  .03  .01  .03  .04  .00  .01  .01  .01  .01  .00  .97
* SPSS Principal Components Analysis with Direct Oblimin rotation
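The footnote indicates these loadings were produced in SPSS. For readers working in Python, a rough analogue using the third-party factor_analyzer package is sketched below; the input file name and the assumption that the items sit one per column are hypothetical, and the results will only approximate SPSS's output.

```python
# A rough Python analogue of the SPSS procedure noted under Table 2 (principal
# components extraction with Direct Oblimin rotation). The CSV file of
# item-level Likert responses is a hypothetical stand-in.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("facebook_trust_items.csv")  # hypothetical: one column per item

fa = FactorAnalyzer(n_factors=11, method="principal", rotation="oblimin")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))  # pattern loadings comparable in spirit to Table 2
```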
Table 3. Means, SDs, ICRs, CAs, AVEs, and Correlations among Latent Constructs

Construct                  Mean   SD    ICR  CA   AVE   1    2    3    4    5    6    7    8    9    10   11
1. Reliability             4.62  1.11  .91  .91  .67   .82
2. Functionality           5.17  0.98  .90  .90  .69   .53  .83
3. Helpfulness             3.86  1.08  .93  .93  .76   .37  .41  .87
4. Integrity               4.42  1.22  .95  .95  .82   .51  .46  .38  .90
5. Competence              5.68  1.01  .93  .93  .78   .36  .65  .28  .39  .88
6. Benevolence             3.83  1.24  .88  .88  .71   .44  .40  .41  .69  .35  .84
7. Reputation              5.31  0.99  .90  .90  .70   .33  .45  .29  .41  .54  .35  .84
8. Privacy                 4.21  1.46  .84  .84  .56   .27  .39  .27  .46  .35  .52  .39  .75
9. Ease of Use             5.68  0.89  .86  .85  .68   .34  .53  .30  .30  .65  .20  .46  .33  .82
10. Trusting Intention     4.71  1.16  .94  .94  .81   .42  .53  .38  .43  .50  .46  .40  .37  .48  .90
11. Continuance Intention  5.47  1.31  .98  .99  .94   .18  .40  .18  .21  .36  .24  .31  .23  .46  .50  .97
SD = standard deviation, ICR = internal consistency reliability, CA = Cronbach’s alpha, AVE = average variance extracted
*Diagonal elements are the square roots of the AVE; off-diagonal elements are correlations between latent constructs.
**All correlations are significant at p < .05.

Table 4. First-Order Factor Loadings on Second-Order Factors*

                      Model 1 (H1)                          Model 2 (H2)
First-Order Factor    Loading  Second-Order Factor          Loading  Second-Order Factor
Reliability           .70      Technology Trust             .64      Integrity-Reliability
Functionality         .73      Technology Trust             .90      Competence-Functionality
Helpfulness           .56      Technology Trust             .52      Benevolence-Helpfulness
Integrity             .82      Interpersonal Trust          .79      Integrity-Reliability
Competence            .55      Interpersonal Trust          .72      Competence-Functionality
Benevolence           .77      Interpersonal Trust          .78      Benevolence-Helpfulness
* All factor loadings are significant at p < .001.
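Table 3's diagonal supports the Fornell-Larcker discriminant validity criterion: each construct's square root of AVE exceeds its correlations with the other constructs. The sketch below illustrates the check for a few constructs; the values are copied from Table 3, while the check itself is the standard criterion (Fornell and Larcker, 1981), not code from the study.

```python
# A small sketch of the Fornell-Larcker discriminant validity check implied by
# Table 3: sqrt(AVE) for each construct should exceed its correlations with
# the other constructs. AVEs and correlations below are taken from Table 3.
import math

ave = {"Reliability": .67, "Functionality": .69, "Helpfulness": .76}
correlations = {
    ("Reliability", "Functionality"): .53,
    ("Reliability", "Helpfulness"): .37,
    ("Functionality", "Helpfulness"): .41,
}

for (a, b), r in correlations.items():
    ok = math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
    print(f"{a} vs {b}: r = {r:.2f}, sqrt(AVE) = "
          f"{math.sqrt(ave[a]):.2f}/{math.sqrt(ave[b]):.2f} -> "
          f"{'supports' if ok else 'violates'} discriminant validity")
```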
Table 5. Model Fit Statistics

Model                              χ2                 df   χ2/df  NNFI  CFI   RMSEA  AIC     Target Coefficient
First-Order Factor Model           650.42, p < .001   237  2.74   .935  .945  .070   176.42  na
Second-Order Factor Model 1 (H1)   740.77, p < .001   245  3.02   .925  .934  .075   250.77  .88
Second-Order Factor Model 2 (H2)   683.83, p < .001   243  2.81   .933  .941  .071   197.83  .95
χ2 difference test, Model 1 vs. Model 2: p < .001.
Note: These are the fit statistics for measurement models with the trusting beliefs only. Their relative nature and the result of the χ2 difference test
are similar to those for the measurement models with all the study variables (i.e., trusting beliefs, reputation, privacy concern, ease of use, trusting
intention, and continuance intention).
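The AIC and target coefficient values in Table 5 can be reproduced from the χ2 and df columns. The sketch below does that arithmetic under the common definitions AIC = χ2 - 2df and target coefficient = χ2(first-order)/χ2(second-order) (Marsh and Hocevar, 1985), which the reported numbers are consistent with; scipy is used only for the difference-test p-value.

```python
# A worked check of the model-comparison statistics reported in Table 5.
from scipy.stats import chi2 as chi2_dist

models = {
    "first_order": (650.42, 237),
    "model_1":     (740.77, 245),
    "model_2":     (683.83, 243),
}

for name, (chi2, df) in models.items():
    print(f"{name}: AIC = {chi2 - 2 * df:.2f}")           # 176.42, 250.77, 197.83

# Target coefficients relative to the first-order model.
base_chi2 = models["first_order"][0]
print(f"T(model 1) = {base_chi2 / models['model_1'][0]:.2f}")  # 0.88
print(f"T(model 2) = {base_chi2 / models['model_2'][0]:.2f}")  # 0.95

# Chi-square difference test between Models 1 and 2.
d_chi2 = models["model_1"][0] - models["model_2"][0]   # 56.94
d_df = models["model_1"][1] - models["model_2"][1]     # 2
print(f"delta chi2 = {d_chi2:.2f} on {d_df} df, "
      f"p = {chi2_dist.sf(d_chi2, d_df):.2e}")          # p < .001
```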
Table 6. Nomological Validity Test

Standardized Path Coefficients
Path                                  First-Order   Model 1 (H1)   Model 2 (H2)
Rep → Reliability                     .14*
Rep → Functionality                   .16**
Rep → Helpfulness                     .11
Rep → Integrity                       .20***
Rep → Competence                      .26***
Rep → Benevolence                     .15*
Rep → Technology Trust                              .23***
Rep → Interpersonal Trust                           .29***
Rep → Integrity-Reliability                                        .28***
Rep → Competence-Functionality                                     .29***
Rep → Benevolence-Helpfulness                                      .21**
Priv → Reliability                    .23***
Priv → Functionality                  .24***
Priv → Helpfulness                    .23***
Priv → Integrity                      .44***
Priv → Competence                     .11*
Priv → Benevolence                    .55***
Priv → Technology Trust                             .25***
Priv → Interpersonal Trust                          .45***
Priv → Integrity-Reliability                                       .43***
Priv → Competence-Functionality                                    .16**
Priv → Benevolence-Helpfulness                                     .55***
EOU → Reliability                     .25***
EOU → Functionality                   .44***
EOU → Helpfulness                     .21**
EOU → Integrity                       .11
EOU → Competence                      .68***
EOU → Benevolence                     .01
EOU → Technology Trust                              .47***
EOU → Interpersonal Trust                           .22***
EOU → Integrity-Reliability                                        .19**
EOU → Competence-Functionality                                     .58***
EOU → Benevolence-Helpfulness                                      .08
Reliability → TI                      .10*
Functionality → TI                    .22***
Helpfulness → TI                      .12*
Integrity → TI                        .04
Competence → TI                       .23***
Benevolence → TI                      .19***
Technology Trust → TI                               .49***
Interpersonal Trust → TI                            .28***
Integrity-Reliability → TI                                         .10
Competence-Functionality → TI                                      .43***
Benevolence-Helpfulness → TI                                       .28***
TI → Continuance Intention            .49***        .50***         .50***

Model Goodness-of-Fit Statistics and Variance Explained
NNFI                                  .923               .924               .928
CFI                                   .929               .929               .933
RMSEA                                 .059               .058               .056
χ2/df                                 1864.70/832 = 2.24  1863.00/842 = 2.21  1801.00/838 = 2.15
AIC                                   200.70             179.00             125.00
R2 Reliability                        23.4%
R2 Functionality                      45.1%
R2 Helpfulness                        18.2%
R2 Integrity                          36.8%
R2 Competence                         53.9%
R2 Benevolence                        39.1%
R2 Technology Trust                                      56.2%
R2 Interpersonal Trust                                   56.5%
R2 Integrity-Reliability                                                    48.7%
R2 Competence-Functionality                                                 69.3%
R2 Benevolence-Helpfulness                                                  49.2%
R2 Trusting Intention                 46.0%              38.2%              45.0%
R2 Continuance Intention              25.0%              24.4%              25.2%
Rep = reputation, Priv = privacy concern, EOU = ease of use, TI = trusting intention
* = p < .05, ** = p < .01, *** = p < .001
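For researchers who want to re-estimate these structures, the sketch below specifies Model 2's second-order factor structure in lavaan-style syntax using the Python semopy package. This is an illustration under stated assumptions: the original analysis used a covariance-based SEM package, and semopy, the data file, and the item column names here are all hypothetical.

```python
# A sketch of the best-fitting second-order structure (Model 2): three
# second-order factors, each pairing a technology belief with its
# conceptually related interpersonal belief. Column names are hypothetical.
import pandas as pd
from semopy import Model

model_desc = """
Functionality =~ func1 + func2 + func3 + func4
Competence =~ comp1 + comp2 + comp3 + comp4
Reliability =~ rel1 + rel2 + rel3 + rel4 + rel5
Integrity =~ int1 + int2 + int3 + int4
Helpfulness =~ help1 + help2 + help3 + help4
Benevolence =~ ben1 + ben2 + ben3
CompFunc =~ Competence + Functionality
IntRel =~ Integrity + Reliability
BenHelp =~ Benevolence + Helpfulness
"""

data = pd.read_csv("facebook_trust_survey.csv")  # hypothetical item-level data
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # loadings comparable in spirit to Table 4
```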
Table 7. Supplementary Analysis: Effects of Trusting Beliefs on Continuance Intentions

                              Direct   Indirect (through     Total    Mediation (%)
                                       Trusting Intention)
Model 1
  Technology Trust            .28***   .17***                .45***   Partial (27%)
  Interpersonal Trust         -.04     .09**                 .05      Full
Model 2
  Integrity-Reliability       -.14     .04                   -.10     None
  Competence-Functionality    .36***   .14***                .50***   Partial (22%)
  Benevolence-Helpfulness     .02      .09**                 .11      Full
* = p < .05, ** = p < .01, *** = p < .001
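The decomposition behind Table 7 is standard: a belief's indirect effect on continuance intention is the product of the belief → trusting intention path and the trusting intention → continuance intention path, and the total effect is direct plus indirect. The sketch below illustrates this, with significance assessed via a Sobel test (Sobel, 1982); the belief → TI path and both standard errors are invented for the example, chosen only so the output resembles Model 1's technology trust row.

```python
# A brief sketch of the direct/indirect/total effect decomposition used in
# Table 7. The standard errors (and the a-path value) are hypothetical
# placeholders, not values reported in the study.
import math

def decompose(direct: float, a: float, b: float, se_a: float, se_b: float):
    """Return (indirect, total, Sobel z) for paths a (belief->TI) and b (TI->CI)."""
    indirect = a * b
    total = direct + indirect
    sobel_z = indirect / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return indirect, total, sobel_z

# Illustrative numbers in the spirit of Model 1's technology trust row.
indirect, total, z = decompose(direct=0.28, a=0.34, b=0.50, se_a=0.06, se_b=0.05)
print(f"indirect = {indirect:.2f}, total = {total:.2f}, Sobel z = {z:.2f}")
```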
APPENDIX
Technology Trusting Beliefs
Functionality
I believe MySNW.com is functional. It:
1. has the functionality I need.
2. has the features required for my online social activities.
3. has the ability to do what I want it to do.
4. has the overall capabilities I need.
Reliability
I believe MySNW.com is reliable. It:
1. is a very reliable website.
2. does not fail me.
3. is extremely dependable.
4. does not malfunction for me.
5. provides error-free results.
Helpfulness
I believe MySNW.com is helpful. It:
1. supplies my need for help through a help function.
2. provides competent guidance (as needed) through a help function.
3. provides whatever help I need.
4. provides very sensible and effective advice, if needed.
Interpersonal Trusting Beliefs
Competence
I believe MySNW.com is competent. It:
1. is competent and effective in providing online social networking.
2. performs its role of facilitating online social networking very well.
3. is a capable and proficient online social networking provider.
4. is very knowledgeable about online social networking.
Integrity
I believe MySNW.com has Integrity. It:
1. is truthful in its dealings with me.
2. is honest.
3. keeps its commitments.
4. is sincere and genuine.
Benevolence
I believe MySNW.com is benevolent. It:
1. acts in my best interest.
2. does its best to help me if I need help.
3. is interested in my well-being, not just its own.
Reputation
1. Others have mentioned good things about using MySNW.com.
2. I have heard others speak favorably about using MySNW.com.
3. Other people have told me they are satisfied with using MySNW.com.
4. I have heard that most others are pleased with using MySNW.com.
Privacy concern
1. MySNW.com strives to keep my personal information confidential.
2. MySNW.com will never allow unauthorized access to my MySNW.com page.
3. No one except those I designate will ever be allowed to see what I am doing on my
MySNW.com page.
4. MySNW.com does a good job of letting me control my privacy.
Ease of Use
1. Interacting with MySNW.com does not require a lot of my mental effort.
2. I find MySNW.com easy to use.
3. I find it easy to get MySNW.com to do what I want it to do.
Trusting Intention (Measured on a 7-point Likert scale from (1) Not true at all to (7) Very true)
1. When I network socially online, I feel I can depend on MySNW.com.
2. I can always rely on MySNW.com for online social networking.
3. MySNW.com is a website on which I feel I can fully rely when networking on the web.
4. I feel I can count on MySNW.com when networking online.
Usage Continuance Intention (Measured on a 7-point Likert scale from (1) Not true at all to (7) Very true)
1. In the near future, I intend to continue using MySNW.com.
2. I intend to continue using MySNW.com.
3. I predict that I would continue using MySNW.com.
4. I plan to continue using MySNW.com.
Note: Unless otherwise indicated, all items were measured on a 7-point Likert scale from (1) Strongly
disagree to (7) Strongly agree.