BI Norwegian Business School – Thesis

– Expertise: What does education give you? –

Lars-Kristian Kjølberg (0808977)
Knut Erlend Hjorth-Johansen (0913794)
On education and task complexity and their
moderating effect on expertise
Study Programme:
Organizational Psychology and Leadership
Date of submission:
03.12.2012
Name of supervisor:
Thorvald Hærem
This thesis is a part of the MSc programme at BI Norwegian Business School. The school takes no
responsibility for the methods used, results found and conclusions drawn.
Master Thesis GRA 19003
03.12.2012
Table of Contents

ABSTRACT
ACKNOWLEDGEMENT
1. INTRODUCTION
2. THEORETICAL MODEL
3. THEORETICAL FOUNDATION
3.1. DEGREE OF EXPERTISE
3.2. EDUCATION
3.3. TASK COMPLEXITY
3.4. TASK PERFORMANCE
3.5. RISK PROPENSITY
3.6. OVERCONFIDENCE
3.7. PERCEIVED UNCERTAINTY
4. METHOD
4.1. PARTICIPANT CHARACTERISTICS
4.2. SAMPLING PROCEDURES
4.3. MEASURES
4.3.1. Expertise
4.3.2. Education
4.3.3. Risk propensity
4.3.4. Task complexity
4.3.5. Perceived uncertainty
4.3.6. Overconfidence
4.3.7. Task performance
5. RESULTS
5.1. MISSING DATA
5.2. ASSUMPTIONS OF MULTIPLE REGRESSION
5.3. DESCRIPTIVE STATISTICS
5.4. POST-HOC
6. DISCUSSION
6.1. Possible explanation: Education
6.2. Possible explanation: Task complexity
6.3. Post hoc
7. PRACTICAL IMPLICATIONS
8. LIMITATIONS
9. CONCLUSION
10. REFERENCES
APPENDIX 1
APPENDIX 2
APPENDIX 3
APPENDIX 4
APPENDIX 5
Abstract
This thesis makes use of a quasi-experimental design in order to investigate how
education affects experts' risk propensity, perceived uncertainty, overconfidence,
and task performance. The moderating effect of task complexity was considered
for the relationships between expertise and perceived uncertainty, overconfidence,
and task performance. In order to demonstrate these effects, data were collected
from 55 Java programmers from global companies located in Norway and
Vietnam. All participation was voluntary. An Internet-based survey was
developed, and respondents were free to choose when and where to complete it. The
results suggest that a higher degree of expertise results in higher performance and
less overconfidence on high-complexity tasks. Furthermore, the results suggest
that degree of experience is more important than education in perceiving
uncertainty.
Acknowledgement
First and foremost, our greatest thanks go to our supervisor Thorvald Hærem, who
has guided us through this endeavor. We also express our gratitude to Gunnar
Bergersen and Jo Hannay, who have given us valuable insight and help in gathering
data. Finally, we want to thank our friends and family for their support and
encouragement.
………………………..
…………………………
Lars-Kristian Kjølberg
Knut Erlend Hjorth-Johansen
1. Introduction
The aim of this thesis is to contribute within the field of expertise. Individuals
with varying degrees of expertise will be examined and compared on the basis of
their knowledge acquisition and their paths toward their degree of expertise
(Summers, Williamson, & Read, 2004). Individuals with a high degree of expertise
differ from those with a low degree of expertise in their superior performance.
Individuals who are considered experts have specialized knowledge of
the domain and will outperform both novices, who possess only
commonsense everyday knowledge or the prerequisite knowledge assumed by the
domain, and subexperts, who are above the novice level and have
generic but inadequate specialized knowledge of the domain (Ericsson &
Smith, 1991).
Early discussions of expertise were concerned with the idea of nature vs.
nurture. Nature was understood as the "talent" an individual naturally possessed,
while nurture involved training and being coached toward expert-level
performance in a given domain. The assumption that the prerequisites
for performing at an expert level are genetically transferable have been met with
skepticism; socialization, learning, and environmental mechanisms
have proved to have a much greater effect on developing expertise (Ericsson & Lehman,
1996). An extensive amount of research has attempted to
capture knowledge and knowledge development among experts within several
different domains; conclusively, insight into deliberate practice and training is the
main contribution and common denominator of this research (Ericsson, 2005).
In contrast to the vast amount of research considering the differences between
novices and experts, we examine individuals with varying degrees of expertise
only. We thereby aim to distinguish between the different paths toward superior
performance within the domain in which the expert usually operates
(Summers et al., 2004; Ericsson & Lehman 1996; Ericsson &
Smith, 1991). Although research on expertise has been conducted in numerous
domains, such as chess, physics, and sports (Ericsson & Lehman, 1996),
the results largely point to the same conclusion: expert performance is acknowledged
as domain specific (Ericsson, 2005; Haerem & Rau, 2007; Sonnentag, Niessen &
Volmer, 2006). Based on the assumption that individuals with expertise should be
able to display their superiority consistently within their domain, it is reasonable
to expect it to be scientifically analyzable in controlled settings (Ericsson &
Lehman, 1996).
We base our research in the software industry and use individuals with
expertise within Java programming as research subjects. As research on expertise
has been conducted within many areas, our selection of domain is due to the
potential for measurement adequacy; an important practical implication when
examining the achievements of expert performance (Sonnentag et al., 2006).
Individuals with a high degree of expertise are inclined to engage in forward
reasoning strategies in problem solving, such as software programming, where the
solution can be predicted by stable rules, namely the programming language
(Hærem, 2002). Hærem (2002) states that "in this domain the difference between
novices and experts is that experts tend to develop the breadth of the problem
solution first, while novices tend to develop the depth" (p. 52). The advantages of
the breadth-first strategy are revealed in high-complexity programming tasks,
where the solution often depends on the breadth of alternatives
at previous steps (Anderson, Farrell, & Sauers, 1984). A highly complex task
places more demand on the task doer than a task of low complexity through
increases in information load, information diversity, and the rate of
information change (Campbell, 1988; Wood, 1986). This serves as a potential for
determining human performance (Wood, 1986).
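As a loose illustration of the two strategies described above (our sketch, not part of the cited studies), the following Python code traverses a toy "solution tree" in both orders; the tree and its node names are hypothetical:

```python
from collections import deque

def breadth_first(tree, root):
    """Explore all alternatives at one level before going deeper,
    as experts tend to do when developing the breadth of a solution first."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))  # enqueue every sibling alternative
    return order

def depth_first(tree, root):
    """Follow one solution path to its end before backtracking,
    as novices tend to do when developing the depth first."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree.get(node, [])))  # dive into the first branch
    return order

# A toy tree: each key maps a partial solution to its refinements.
tree = {"plan": ["parse", "compute"], "parse": ["lex"], "compute": ["sum"]}
print(breadth_first(tree, "plan"))  # ['plan', 'parse', 'compute', 'lex', 'sum']
print(depth_first(tree, "plan"))    # ['plan', 'parse', 'lex', 'compute', 'sum']
```

The breadth-first order keeps all alternatives from earlier steps available, which is exactly the advantage the text attributes to the expert strategy on high-complexity tasks.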
This paper aims to contribute to the field of expertise by providing insight
into how educational background affects risk propensity, and how both education and
task complexity affect perceived uncertainty, overconfidence, and task
performance among individuals with varying degrees of expertise. Our research
question is therefore as follows:
“How does education and task complexity affect expertise in relation to
performance, risk propensity, perceived uncertainty and overconfidence?”
2. Theoretical Model
Figure 1: Theoretical model
The basis of the model, the independent variable, is individuals who hold varying
degrees of expertise. These individuals are seen in relation to four variables of
interest: risk propensity, perceived uncertainty, overconfidence, and task
performance. These relationships are moderated by education. In other words: will
there be differences between individuals who have a relevant educational
background in addition to experience, and individuals who hold experience
only, whose knowledge is developed through practice? In addition, the
relationships between degree of expertise and perceived uncertainty,
overconfidence, and task performance will be moderated by task complexity.
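To make the moderation idea concrete, the sketch below (our illustration with synthetic data, not the thesis's actual analysis) estimates a regression with an interaction term, which is the standard way to test a moderating effect such as education on the expertise–performance relationship; all variable names and coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

expertise = rng.normal(size=n)          # predictor (e.g., years of experience)
education = rng.integers(0, 2, size=n)  # moderator (e.g., relevant education: 0/1)

# Synthetic outcome: the effect of expertise depends on education,
# i.e. a nonzero interaction coefficient (here 0.8) encodes moderation.
performance = (1.0 + 0.5 * expertise + 0.3 * education
               + 0.8 * expertise * education
               + rng.normal(scale=0.5, size=n))

# Design matrix with an interaction column; ordinary least squares fit.
X = np.column_stack([np.ones(n), expertise, education, expertise * education])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)

b0, b1, b2, b3 = coef
print(f"interaction (moderation) estimate: {b3:.2f}")  # close to the true 0.8
```

A non-negligible estimate of the interaction coefficient `b3` indicates that the slope of expertise on performance differs between education groups, which is precisely what the model's moderation arrows claim.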
In the following the theoretical foundation and hypotheses of each variable will be
presented.
3. Theoretical foundation
3.1. Degree of expertise
Expertise is hereby defined as degree of technical superiority on a specific set of
representative tasks for a domain (Bergersen, Dybå, Hannay, Karahasanović, &
Sjøberg, 2011; Ericsson & Lehman 1996; Hærem & Rau, 2007).
Theory on expertise is wide ranging. Pioneering work by de Groot in
1946 examined the expert level of chess players (Ericsson, 2005); subsequently,
numerous different domains such as music, sports, and IT programming have
been studied, motivated by the aim of making the training of less skilled
individuals more efficient. Extracting the knowledge development of an expert
has been a concern, with the aim of letting students learn the expert's knowledge
directly instead of rediscovering it by themselves. The idea of duplicating expertise is
rather optimistic. For individuals at a lower level of knowledge acquisition,
insight into deliberate practice and training among experts is perhaps the most
rewarding contribution; becoming an expert oneself merely by studying how
experts obtain knowledge is simply not realistic (Ericsson, 2005).
Previous research in the field of expertise has focused largely on how
experts and novices differ in task performance, or on comparisons of experts with
different degrees of experience. For example, Kendell (1973) found that length of
experience among experts in psychiatric diagnosis did not relate to the validity of
the diagnoses given. Summers, Williamson and Read (2004) state that research on
expertise has largely considered the different paths toward the level of
competence relevant to a given domain. Their study compared professional credit
managers who had learned through experience rather than education with credit
managers who had no experience but training in the relevant concepts. Results
showed that education can be a more efficient foundation for developing expertise
than experience alone, which might be in accordance with the similarities between
education and deliberate practice (Ericsson, 2008).
In order to better understand underlying cognitive mechanisms among
experts, Sanjram and Khan (2011) examined prospective memory, the "[cognitive
capability] to remember to carry out delayed intention in fulfilling various task
demands" (Burgess, Veitch, de Lacy Costello, & Shallice, 2000, as cited in
Sanjram & Khan, 2011, p. 428). This was done with the purpose of distinguishing
qualities among experts and novices in the domain of programming. Performance
and strategy were investigated among monochrons (individuals who prefer to do
one thing at a time) and polychrons (individuals who prefer to do many things at
the same time) within multitasking operations. Conclusively, cognitively complex
people tend to be more monochronic, while individuals with simpler cognition
tend to be more polychronic. The authors found that expertise effectively
facilitates the maintenance of the different resources for performing multiple
activities (Sanjram & Khan, 2011).
Operationalization of programming expertise is often done without
adequate validation, as the conceptualization is often based on a manager's
evaluation of the programmer, labeling the level of seniority (Bergersen et
al., 2011). For example, Bergersen et al. (2011) operationalized programmers'
expertise level in terms of seniority; Arisholm, Gallis, Dybå and Sjøberg (2007)
used the same operationalization in addition to a pretest programming task,
attempting to measure their subjects' programming skills, in order to assess the
internal validity of the experiment. By deploying level of seniority as a measure of
expertise, the expertise of the individual programmer is not necessarily
captured (Bergersen et al., 2011). Sanjram and Khan (2011) operationalized
programmer expertise as years of experience, which is common for
quasi-experimental research within programming and software development
(Sonnentag et al., 2006). The underlying assumption is that expertise develops as
a function of time spent within the domain (Sanjram & Khan, 2011). Further, the
level of formal and academic education within the specific domain indicated the
level of expertise. Previous research by Schmidt, Hunter and Outerbridge (1986)
showed that experience and performance increased linearly within the first five
years; later, the relationship seems to flatten out. Length of experience does
not necessarily relate to a high performance level within programming and software
design (Sonnentag et al., 2006). This supports the assumption by Sanjram and
Khan (2011) that expertise develops as a function of time spent within the
domain. In their study the experts' experience was, in fact, their progression in
relevant education (Sanjram & Khan, 2011). Contrary to a merging of education
and experience, we aim to discriminate between Java programmers with highly
relevant education and those with less relevant education, by considering
education as a moderator.
Building on the definition of expert performance as "consistently superior
performance on a specific set of representative tasks for a domain" (Ericsson &
Lehman, 1996, p. 277), expertise is hereby seen as technical superiority within
Java programming (Bergersen et al., 2011; Haerem & Rau 2007). The individuals
included in the data collection have varying degrees of expertise within
Java programming. We follow Sanjram and Khan's (2011) operationalization of
expertise as years of domain-specific experience, which is suitable for quasi-experiments
(Sonnentag et al., 2006). The length of experience considered
is distinguished from education, which is operationalized as a moderating effect
in the relationship between expertise and the different outcomes. In line with the
assumption that expertise develops as a function of time spent within the domain,
there is no cut-off point in length of experience determining whether individuals
qualify as experts or not. Individuals are considered to have a varying degree
of expertise through their varying ability to perform domain-related tasks; the longer
experience they have in the domain, the higher their degree of expertise
(Sanjram & Khan, 2011). This approach opens the possibility of seeing whether
different factors affect performance, in addition to the other aspects: risk taking,
perceived uncertainty, and overconfidence.
3.2. Education
Education refers to the academic credentials or degrees an individual has
obtained, according to Ng and Feldman (2009), who found that level of
education is positively related to task performance. This contradicts Chase and
Simon's (1973) assumption that experts' task-specific knowledge must have been
acquired through experience. That assumption does not acknowledge that education
serves as a platform for robust learning (Friedlander et al., 2010). Academia offers
possibilities for structured learning, a situation that differs from most self-taught
learning. Friedlander et al. (2010) point to several aspects that foster robust
learning, which in turn develops a better memory capability connected to the
knowledge in question. The following aspects are drawn from that research with the
aim of revealing how education differs from knowledge developed through
experience.
Friedlander et al. (2010) stress the importance of the learning
environment, as it affects functional and structural changes in the interconnected
cellular networks between neurons (synapses) at a variety of sites throughout the
central nervous system. "Memory is a dynamic process where the information
represented is subject to our personal experiences, the context of the learning
environment, subsequent events, levels of attention, stress, and other factors."
(Friedlander et al., 2010, p. 415). The learning situation provides the possibility of
active learning, where teacher and student interact. "There is considerable
neurobiological evidence that functional changes in neural circuitry that are
associated with learning occur best when the learner is actively engaged"
(Friedlander et al., 2010, p. 417).
Repetition is central to the education context. Repetition of certain
knowledge will produce neuronal pathways that contribute to learning; it triggers
an enormous amount of molecular signaling that becomes more persistent
compared to knowledge that is repeated less. The plasticity of the brain
and the mechanisms described above apply both to young, developing brains
and to those with more maturity. The latter occurs in the dentate gyrus of the
hippocampus, but the functional implication of this remains to be determined. Moreover,
the brain's intrinsic reward system plays a major role in the reinforcement of learned
behaviors. Connecting one's learning to previously stored impressions, and
visualizing the learning content, helps the process of storing learning in
memory (Friedlander et al., 2010).
Drawing on this, we assume that education is closely related to deliberate
practice, where feedback and repetition are central (Ericsson, 2008). Further,
individuals who have reached the level of expertise through both education and
practice will probably have obtained a higher level of expertise than individuals
who have reached it through practice alone, which in turn may provide better
task performance.
Deliberate practice can be defined as training with feedback (Ericsson,
2008). Ericsson and Lehman (1996) state that an individual who has been guided
with deliberate practice will attain a higher level of knowledge acquisition than
those without a carefully structured training and practice regimen. Barnett and
Koslowski (2002) argue along the same lines: one needs to understand which
experiences may lead to expert performance; it is not sufficient to simply look at
the amount of experience. Building on this, Ng and Feldman (2009) found that
education is positively related to core task performance and that education becomes
increasingly important as the complexity of those tasks increases. On tasks of less
complexity, however, the authors found that education level was less significant
for performance.
Job experience has been found to have a greater impact on job-related knowledge
than on performance at work (Schmidt, Hunter & Outerbridge, 1986). Research
results suggest that job experience enhances skills, techniques, methods, and
psychomotor habits, which in turn improve performance capabilities
independent of the increase in job knowledge. Further, job knowledge and
performance increase linearly with experience up to five years of experience; after
this point the relation seems to flatten out (Schmidt, Hunter & Outerbridge,
1986). A study by Bergersen and Gustafsson (2011) investigated the relationship
between programming skill and its main antecedents using Cattell's investment
theory. The relevance of this study is partially due to the highly competitive and
globalized software industry, which is focused on delivering high-quality
software at low cost. The authors predicted that programming knowledge
is the main causal antecedent of programming skill. Tests of cognitive abilities
are frequently utilized in order to recruit and retain highly productive software
developers (Bergersen & Gustafsson, 2011). General mental ability (g) is a central
predictor of performance used in recruitment contexts (Bergersen &
Gustafsson, 2011; Schmidt & Hunter, 1998). Next, in accordance with Cattell's
investment theory, fluid g concerns all new learning and is therefore ubiquitous and
closely related to general mental ability, and in turn to working memory, which
relates to consciousness (Sweller, van Merrienboer, & Paas, 1998). Crystallized g,
on the other hand, concerns acquired knowledge. The authors found that the
influence of fluid g and experience on skill and job performance was mediated
through knowledge, in accordance with Schmidt, Hunter and Outerbridge
(1986), above. Working memory capacity and experience contribute to
programming skill. This relationship is mediated by programming knowledge,
which accounts for a large degree of variance in programming skill. Further,
programming experience and knowledge are more often obtained through education
than on the job (Bergersen & Gustafsson, 2011). Sanjram and Khan (2011)
differentiated among their participants by their level of education within the
relevant domain. Novices with a basic course in programming were compared to
advanced students of computer science and engineering. Contrary to the view that
education and experience can be merged (Chase & Simon 1973), Sonnentag et al.
(2006) state that years of experience are not necessarily related to high
performance within advanced software design and programming.
A larger fraction of the effect of education is cognitive, which contradicts
previous research claiming that only a small portion of the returns from
education improves human capital, namely cognitive abilities (Baron &
Werfhorst, 2011). When looking at general cognitive ability alone, the estimate is
that the cognitive component varies between 32 and 63 percent, depending on the
country analyzed (Baron & Werfhorst, 2011). This was also supported in 1989,
when research showed that training and experience may be positively related to
the ability to structure problems (Garb, 1989).
Based on this theoretical foundation, we believe that the relevance of education
should affect the different aspects of interest among individuals with varying degrees
of expertise; we therefore conceptualize education as a moderator, in accordance
with the model presented.
3.3. Task complexity
Our conception of task complexity relies upon contribution from several
researchers. The concept of task and the idea behind task complexity will be
reviewed briefly in order to present the conception.
According to Wood (1986), a task contains three essential components: products,
acts, and information cues. Products are defined as measurable results of acts,
while acts are simplistically referred to as patterns of behavior with a certain
purpose. Finally, a task contains information cues that the task performer can use
to identify the required actions or judgments in the process of performing the task
(Wood, 1986). Bonner (1994) subsequently elaborated on this definition; the new
conception had a more simplistic appeal: task inputs, processes, and outputs
(Haerem & Rau, 2007).
With the fundamentals of a general task in mind, the complexity of a task will
now be explained. Task complexity has the potential to help determine
human performance; it is an important aspect when performance is to be
rank-ordered, as it places varying demands on knowledge, skills, and resources in an
ascending order (Wood, 1986). Summed up: as complexity increases, so does the
demand on the task doer (Wood, 1986). Campbell (1988) characterizes an
increase in complexity as an increase in the information load, the diversity of the
information, and the rate of information change. This involves the potential use of
multiple cognitive paths to arrive at the end state, the possibility of multiple
desired outcomes, conflicting interdependence among cognitive paths, and the
presence of uncertainty (Campbell, 1988). Haerem and Rau (2007) developed a
set of tasks to investigate the difference of knowledge representation and search
strategies between experts, intermediates and novices. The authors made an
important distinction between surface structure tasks, deep structure tasks and
mixed structure tasks. However, for the purposes of our research, we rephrase the
different levels of complexity: surface structure tasks will be referred to as
low-complexity tasks, and deep structure tasks as high-complexity tasks.
Further, we concentrate on identifying the differences between low-complexity
and high-complexity tasks only. In order to make this type of distinction,
the term critical complexity requires an explanation. Critical complexity is defined
as the complexity embodied in the task resolution path that minimizes the amount
of information processing, which in turn creates the difference between
low-complexity and high-complexity tasks (Haerem & Rau, 2007). In low-complexity
tasks the critical complexity resides in the input and/or the output; to complete
these kinds of tasks, a search and analysis of the input and output is necessary.
To solve highly complex tasks efficiently, one must focus on the task process
rather than on the inputs and outputs (Haerem & Rau, 2007).
Because of the fundamental difference between tasks of high and low
complexity presented above, we believe that task complexity should affect
the different aspects of interest that are directly connected to the tasks. We
therefore conceptualize task complexity as a moderator, in accordance with the
model presented.
3.4. Task performance
Ng and Feldman (2009) found a positive relationship between education and core
task performance, which refers to the basic required duties of a particular job. A
core task can be a specific task an expert would conduct within the expertise
domain. The expert holds the declarative and procedural knowledge that is
required for the task to be completed successfully (Ng & Feldman, 2009).
Research by Haerem and Rau (2007) discovered that different degrees of expertise
could lead to different perceptions of task complexity, and in turn, to different
performance on tasks of different complexity. The two fundamental dimensions of
perceived task complexity are task variability (the number of exceptional cases
encountered in the work) and task analyzability (the nature of the search process
that is undertaken when exceptions occur) (Perrow, 1967). A higher degree of
expertise will foster lower perceived task variability and higher perceived task
analyzability (Haerem & Rau, 2007). Moreover, a higher degree of expertise
will foster higher performance on complex tasks (Haerem & Rau, 2007).
Ng and Feldman (2009) found support for this assumption: the relationship
between education and performance is moderated by job complexity, and higher
education gave higher performance on complex tasks (Ng & Feldman, 2009).
Based on this we assume that experts with education and practice will
have achieved a higher level of expertise than experts with practice only, within
the same timeframe, on the grounds of the use of deliberate practice. Our research
subjects have attained their expertise through different paths; nonetheless, they
have all achieved a certain level of technical superiority on representative tasks
for their domain (Bergersen et al. 2011; Ericsson & Lehman 1996; Hærem & Rau,
2007). Based on Ericsson's (2008) hypothesis that "there is an underlying factor of
attained expertise in a domain, where the majority of the task can be ordered on a
continuum of difficulty" (p. 989), we believe that because individuals with a
higher degree of experience and highly relevant education have achieved a
higher expertise level, they should perform better on tasks than individuals with
increasing experience and less relevant education. On highly complex tasks
we assume that individuals with more experience will perform better than
individuals with less experience, mainly due to the possibility for expertise
to improve with time spent in the domain (Chase & Simon, 1973). On tasks of
low complexity, where the complexity lies in the input and output, we do not
expect an increasing level of expertise to lead to better performance.
H1: The relationship between the degree of expertise and task performance will be
moderated by task complexity.
H2: The relationship between the degree of expertise and task performance will be
moderated by education.
H3: With an increasing level of expertise, a high relevance of education will lead
to higher task performance compared to a low relevance of education.
H4: A high level of expertise will give higher performance on tasks of high
complexity compared to a low degree of expertise.
3.5. Risk propensity
We distinguish between three aspects of risk: risk as a phenomenon, risk taking,
and risk propensity. Risk as a phenomenon is variation in the distribution of possible
outcomes, their likelihoods and their subjective value (March & Shapira, 1987).
Risk taking is the actual behavior of an individual who has to make a choice
between alternatives of differing risk (Lejuez et al., 2002). We operationalize risk
as risk propensity: an individual's willingness to take risk (MacCrimmon &
Wehrung, 1990). As a cognitive psychological phenomenon, risk propensity is
seen as distinguishable into two categories, also called systems: the
experiential system and the analytic system (Slovic et al., 2004, 2005). This
relates to the well-known approach to human cognition described by Kahneman
(2003), who categorizes cognition into two systems: system one is intuitive and
"fast", while system two is reflective and "slow". The two systems are
intertwined and work simultaneously, and some tasks or situations draw more on
one system than other situations do. Likewise, risk processing is seen as either (1)
intuition-like, fast and automatic but vulnerable to manipulation and information
overload (the experiential system), or (2) assessing, calculative and
dependent on conscious attention (the analytic system) (Slovic et al., 2004;
Glöckner & Witteman, 2010). Regardless of whether it is the analytic or the
experiential system that is in use in a specific situation, it is argued that the
perception of the judgment criteria is affected by feelings about the situation
(Slovic et al., 2004; Druckman & McDermott, 2008; Keller, Siegrist & Gutscher,
2006). This means that when an individual perceives and judges a choice that
might involve a potential negative outcome or provide an opportunity to obtain a
positive outcome (Lejuez et al., 2002), feelings help determine the actual choice
(Slovic et al., 2004, 2005; Druckman & McDermott, 2008). These feelings are
highly individual; they differ from person to person and are also described as
heuristics (Slovic et al., 2004).
Our purpose is to examine how differing degrees of expertise and relevance
of education affect individuals' risk propensity. Length of education has been
found to be a common denominator for the tendency to avoid risk, whereas trust
in one's own competences works in the opposite direction (risk is here seen as
analytic: calculative, assessing and deliberate) (Haerem, Kuvaas, Bakken, &
Karlsen, 2010). Nonetheless, it is not stated that longer education leads to less
trust in one's own competences; we surmise that these aspects are not necessarily
related. Education has clear similarities to deliberate practice, which is a
robust and acknowledged path to a high level of performance (Ericsson, 2008). As
previously mentioned, deliberate practice includes training with feedback and
insight into the theories and knowledge that underlie a certain topic
(Ericsson, 2008; Barnett & Koslowski, 2002). As education also includes these
aspects, one can assume that individuals who have acquired competence in a
deliberate practice context will develop insight into their own knowledge,
exploring both what they know and what they do not know, and thereby be less
"convinced" (or less naïve) that their competence is sufficient to select
the riskier alternative over a safer bet.
Previous research connecting risk and expertise shows that experience
with a task can improve risk judgment associated with completing tasks
(Christensen-Szalanski, Beck, Christensen-Szalanski, & Koepsell, 1983). Studies
have also shown that both experts and lay people tend to overestimate risk; the
estimation of risk is exaggerated and inaccurate, although experts tend to
overestimate less (Christensen-Szalanski et al., 1983). As we have previously
proposed that education leads to a higher level of expertise, we believe that a
pattern similar to that between lay people and experts exists between experts
with and without education.
H5: With an increasing degree of expertise, a high relevance of education will
decrease risk propensity compared to a low relevance of education.
3.6. Overconfidence
According to Moore and Healy (2008), research on overconfidence has been
conducted in inconsistent ways in previous studies. They argue that researchers have
operationalized overconfidence differently, without distinguishing clearly
between overestimation, overplacement and overprecision. In short,
overestimation is an exaggerated estimation of one's own actual performance,
ability, etc.; overplacement is subjectively comparing oneself with others
and judging one's score relative to theirs; overprecision is excessive certainty
in the accuracy of one's beliefs (Moore & Healy, 2008).
We define overconfidence in terms of overprecision where the accuracy of
one’s estimation is influenced by uncertainty of the task, which in turn will
“produce a subjective probability distribution that is narrower than reality
suggests it ought to be” (Moore & Healy, 2008, p. 505). Examining
overconfidence involves confidence intervals, which are estimations provided by
respondents. As an illustration, estimating the price of a house at 1,0 – 2,0 million
provides a wider confidence interval than an estimation of the house price at 1,3 –
1,8 million. Even though both estimates have the same midpoint, where the true
price of the house is 1,5 million, the latter and narrower estimate contains more
useful information regarding its accuracy. "Wider intervals will
generally increase hit rate, all else equal. If experts have higher hit rate than
novices, it may be because they know more about the limits of their knowledge"
(McKenzie, Liersch, & Yaniv, 2008, p. 180). Experts have been found to have
midpoints closer to the true value and to provide narrower intervals with fewer
errors, which in turn is more informative than wider intervals that merely increase
the hit rate (Keren, 1987; McKenzie et al., 2008). Yaniv and Foster (1995, 1997,
referred to in McKenzie et al., 2008) interpret this preference for informative
estimates, as opposed to high hit rates, as an inherent human desire. Jørgensen,
Teigen and Moløkken (2003) term these intervals prediction intervals. In their
study, participants provided estimates of the effort required for certain tasks.
Research on overconfidence has mostly been conducted in order to compare
experts and novices, with mixed results (McKenzie, Liersch, & Yaniv, 2008).
In conclusion: "it seems safe to say that experts are overconfident, but it is unclear
how they compare with novices" (McKenzie et al., 2008, p. 180). The essence is
that knowledge acquisition leads to overconfidence (Plous, 1993).
Nonetheless, we extend this by assuming that individuals at expertise level who
have a high degree of relevant education possess greater meta-knowledge about
their own knowledge and capabilities than self-taught individuals at the same
expertise level. This is due to the similarities between education and deliberate
practice, where feedback is central and should decrease overconfidence (Plous,
1993). Further, the lack of feedback within software development and
programming makes it difficult to learn from experience (Jørgensen et al., 2003),
and the same applies to the programming tasks deployed for the data
collection, where no feedback is given during task completion. Therefore,
our assumption is that experts without education will display more
overconfidence than experts with a relevant educational background.
An individual who sets wider intervals will be considered less
overconfident than one who sets narrower intervals, judged against
his or her hit rate, estimation accuracy and actual performance relative to the
time and effort used (McKenzie et al., 2008; Jørgensen et al., 2004). "If the hit rate is
lower than the confidence level, we observe overconfidence" (Jørgensen et al.,
2004, p. 81).
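To make this hit-rate criterion concrete, the short Python sketch below counts how many prediction intervals contain the actual outcome and compares the resulting hit rate with the stated confidence level. All interval bounds, outcomes and the 90 % level are invented for illustration and are not data from the study.

```python
# Hypothetical illustration of overconfidence as a hit rate below the
# stated confidence level. All numbers are invented.

def hit_rate(intervals, actuals):
    """Fraction of (lower, upper) intervals that contain the actual value."""
    hits = sum(lo <= a <= hi for (lo, hi), a in zip(intervals, actuals))
    return hits / len(actuals)

# Five prediction intervals given at a 90 % confidence level, and the outcomes.
intervals = [(2, 6), (1, 3), (5, 9), (4, 6), (3, 5)]
actuals = [5, 4, 7, 5, 8]

rate = hit_rate(intervals, actuals)   # 3 of 5 intervals contain the outcome
overconfident = rate < 0.90           # hit rate below the confidence level
```

Here the respondent claimed 90 % confidence but hit in only 3 of 5 cases, so by the criterion quoted above the respondent would be classified as overconfident.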
Nonetheless, one should be wary of interpreting low estimation results as
poor estimation skills (Jørgensen et al., 2004). Complex technology and the
development of innovative software solutions have built-in uncertainty, and
problem specifications have to be decided during the design process. This
complexity can produce deviations between the expert's estimate of time
consumption and the actual use of time (Sonnentag et al., 2006). Easy tasks that
do not require much time usually evoke overestimation of the time necessary to
complete them and underestimation of performance (Moore & Healy, 2008).
H6: With an increasing degree of expertise, a high relevance of education will
decrease overconfidence compared to a low relevance of education.
H7: A high level of expertise will reduce overconfidence on tasks of high
complexity, compared to a low level of expertise.
3.7. Perceived uncertainty
Perceived uncertainty is here understood as the perception of a task, more
specifically the perception of a certain task's complexity (Hærem & Rau, 2007).
It can be seen in contrast to risk propensity, which is an inherent
and general inclination to take risk (Lejuez et al., 2002). Perceived uncertainty
relates directly to the task and includes the perception of the task's complexity
along two dimensions: perceived analyzability and perceived variability (Hærem
& Rau, 2007).
Perrow (1967) defined perceived task analyzability as the nature of the
search process that is undertaken when exceptions occur, such as unfamiliar
stimuli encountered during a task. The search process is dependent on whether the
task is previously learned or programmed. If the task is highly programmed, the
search is logical, systematic and analytical, while if the task is not previously
learned and thus un-programmed, the search is based on chance and guesswork
(Haerem & Rau, 2007; Perrow, 1967). On the other hand, perceived variability is
defined as the number of exceptional cases encountered in the work (Haerem &
Rau, 2007). Haerem and Rau (2007) found that the higher the degree of expertise,
the lower the degree of perceived task variability and the higher the perceived task
analyzability, which in relation to perceived uncertainty would indicate that the
higher the degree of expertise, the lower the perceived uncertainty.
Following the reasoning that deliberate practice contributes to expertise,
defined as superior performance (Ericsson & Lehman, 1996), and that it resembles
education in the magnitude of domain-related feedback (Ericsson, 2008),
perceived uncertainty will possibly be affected by this element. Education may
provide insight into the theories and knowledge underlying a certain topic
(Ericsson, 2008; Barnett & Koslowski, 2002), which in turn may lead to insight
into one's own knowledge in such a way that one's limitations are also understood.
Breadth in knowledge acquisition may also support an assessment of which
theory should be deployed (Friedlander et al., 2011) in a certain task-solving
situation. Because education can also be considered experience (Sanjram &
Kahn, 2011), and the amount of experience predicts the level of expertise (Chase
& Simon, 1973), we assume that individuals at a low level of expertise with
relevant education will have less perceived uncertainty. This is assumed because
relevant education at a lower degree of expertise may provide acquaintance with
theories and knowledge that contributes to a feeling of certainty when meeting
relevant tasks; the limited experience with these theories may simply lead to
overly conclusive judgments (Plous, 1993; Slovic et al., 2004). Individuals at low
levels of expertise without relevant education might not have developed these
heuristics (Slovic et al., 2004). Deeper insight into underlying theories is not yet
present at this stage, because it develops as a function of time spent in the domain
(Friedland et al., 2001; Sonnentag et al., 2006). When individuals have relevant
education and longer experience, this deeper insight should be present, and we
assume that perceived uncertainty will then increase when they face domain-related tasks.
As these dimensions serve as a foundation for operationalizing perceived
uncertainty we hypothesize as follows:
H8: At a low degree of expertise, high relevance of education leads to less
perceived uncertainty than less relevant education, while at a high degree of
expertise, high relevance of education leads to more perceived uncertainty than
low relevance of education.
H9: As the degree of expertise increases, perceived uncertainty on tasks of high
complexity will decrease, compared to a low level of expertise.
4. Method
4.1. Participant characteristics
Participants were selected according to technical superiority on Java
programming tasks. Given its properties, Java programming was chosen as the
domain in which participants should hold varying degrees of expertise. This
criterion does not set clear requirements for who is qualified to participate.
With this in mind, we aimed to reach individuals with experience within Java
programming. Arenas assumed to attract individuals with high Java
programming competence were mapped, and mainly three kinds
of Java-related arenas were contacted for this purpose: Internet forums, the
public sector and the private sector. The broad range of individuals targeted and
invited to participate was considered a strength; the survey that was
developed constrained the possibility for non-experts, or individuals outside the
target competency, to slip in. In other words, conducting the survey required a
certain level of competence. The strategy of selecting individuals with varying
degrees of expertise, rather than only those able to complete the tasks, is in
accordance with, yet nuances, the approach of Sonnentag et al. (2006). In their
case, an expertise-level task requires a certain level of competence to be
completed successfully; in our case, a varying degree of completeness was
allowed. While Sonnentag et al. (2006) define individuals at expertise level as
those who hold the abilities to complete the task, we conceptualize these
individuals as those who are able to conduct the tasks with a varying degree of
completeness. This opens the possibility to reveal how individual differences can
lead to varying task performance, where the degree of completeness equals
performance.
As the participants were considered to hold varying degrees of expertise
based on their ability to conduct the survey, no requirements were set for their
past experience. As part of the survey, participants self-reported their education
and their experience with software development and Java programming. This
method was chosen with the intention of recruiting participants with varying
educational and experiential backgrounds. The invitation procedure was carried
out without constraints regarding geographical area, previous performance level,
age or gender. Overall, we invited individuals from forums, organizations and
seminars connected to Norway, in addition to one company located in Vietnam.
18 participants were based in Vietnam and were paid 15 euros per hour; 15
participants were based in Norway. There was no control of gender or age
throughout the data collection.
4.2. Sampling procedures
Throughout the spring of 2012, plans for collecting data were established. The
data collection tool that we used was developed in collaboration with both
Technebies and our supervisor. Using Qualtrics and a downloadable application,
a survey consisting of a self-reporting part and three programming tasks was
established. Participation in the survey only required a computer with an Internet
connection, an Internet browser, and a Java development environment installed.
The survey could be conducted anywhere; there were no restrictions on where or
when to complete it, and it took about 1,5 – 2 hours. The advantage of this
task-solving setting is that differences in expertise arise in a natural way rather
than being manipulated in the laboratory (Keren, 1987). All data was collected
between June and November 2012. All participation was to be done individually,
and it was possible to take breaks between each of the three tasks.
In order to recruit participants, we invited individuals by contacting
organizations in the public and private sectors related to Java programming in
Norway and Vietnam. Further, we held a presentation of the research for
attendants at a firm offering advanced Java programming courses in Norway,
we were given permission to send invitation letters to several companies within
the private and public sectors in Norway, and we presented the research to
attendants at a Java seminar in Oslo. Some Internet forums with a Norwegian
platform were also approached. Individuals who wanted to participate sent us an
email or wrote their email address on a list, and a link to the survey was sent to
these addresses. In total, seventy links were sent out to the email addresses
collected through the different sources. All participants were guaranteed
anonymity. We intended to collect data from 50 respondents; the sampling
procedure resulted in a total of 55 respondents, of whom 52 contributed to the
measure of risk propensity and 33 answered most of the survey. This is explained
in detail under Results.
4.3. Measures
4.3.1 Expertise
To measure the expertise construct we applied a formative approach, as we believe
the measured variables cause the construct, i.e. the construct is not latent (Hair,
2010). According to Diamantopoulos and Winklhofer (2001), content specification,
indicator specification and indicator collinearity are critical to successful index
construction. The content of the construct is supposed to represent combined
general software development experience and Java-programming specialization.
Each subject answered a 13-item questionnaire regarding their software
development experience, both general and Java-programming specific, and their
estimation skills (Appendix 1). Three items regarding estimation skill were
dropped because they did not fit the content specification.
The 10 remaining items were put through a principal component analysis,
which revealed a three-factor structure where two factors seemed to reflect the
content specification well, thus upholding indicator specification. The extracted
factors were named Length of experience, consisting of length of total programming
experience, length of total Java programming experience and number of project
roles held, and Specialization of experience, consisting of current consecutive
length of Java programming, percentage of the work day used on coding and
self-reported Java expertise. Both factors were subsequently combined into the
experience construct.
As collinearity amongst formative items can be problematic (Hair, 2010;
Diamantopoulos & Winklhofer, 2001), the indicators were regressed on the expertise
construct (Diamantopoulos & Winklhofer, 2001) and then inspected for collinearity.
The regression showed that collinearity was not a problem, as all tolerance
levels were above the recommended cut-off value of .10 and all VIF values were
below 5 (Diamantopoulos & Winklhofer, 2001; Gripsrud, Olsson & Silkoset, 2004;
Hair, 2010).
4.3.2. Education
In order to measure the relevance of education, the respondents self-reported the
length and type of their education. The respondents could choose between 14
different categories of education, based on the categorization from Samordna
Opptak (the unified admissions service in Norway). In addition, they reported the
length of their education within each category. Each respondent's educational
background was then ranked from 1 to 7 based on type and length of education
(Appendix 2), where 1 indicated the lowest relevance to Java programming and 7
the highest. We ranked the education categories by evaluating the aspects that
have similarity to Java programming.
4.3.3. Risk propensity
For risk propensity we used a 4-item measure from Calantone, Garcia, and Dröge
(2003) as a basis. The questionnaire was originally aimed at risk propensity within
strategic planning for the development of new products. We rephrased these
questions to be applicable to the domain of programming, intentionally
maintaining their essence. The measure was also extended with a fifth question
to better represent risk propensity in software development (Appendix 3). The
rewritten measure was subjected to principal component analysis in order to
establish construct validity.
Before conducting the analysis we assessed the correlation matrix for
adequacy of factor analysis. Several criteria were used. First, according to Hair
(2010), "a strong conceptual foundation is needed to support that a structure does
exist" (p. 105). The validation by Calantone, Garcia, and Dröge (2003) suggests
that there in fact is a strong conceptual foundation. Second, it is recommended
that the sample size should be a minimum of 50 with a ratio of 10:1 or more,
which is met as there are 52 cases (N = 52) for 5 variables, giving a sampling
ratio of 10,4:1 (Hair, 2010). Third, as recommended by Tabachnick and Fidell
(2001), the correlation matrix was inspected for coefficients greater than .30;
this was somewhat disappointing, as only one coefficient exceeded .30. Fourth,
and finally, both Bartlett's test of sphericity (Bartlett, 1954) and the
Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy (Kaiser, 1970, 1974)
were used to assess the dataset for factor analysis. Bartlett's test of sphericity was
not significant (p = .072); the KMO, however, reached .563, above the lowest
value of .50 recommended by Hair (2010), albeit rated as miserable (Hair, 2010),
indicating at least marginal factorability of the correlation matrix.
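Bartlett's (1954) test of sphericity can be computed directly from the correlation matrix R as χ² = −(n − 1 − (2p + 5)/6) · ln det R, with p(p − 1)/2 degrees of freedom. The Python sketch below applies this formula to random stand-in data for the five risk items (N = 52), so the resulting statistic and p-value are purely illustrative.

```python
# Sketch of Bartlett's test of sphericity on an n-by-p item matrix.
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)  # (chi-square statistic, p-value)

rng = np.random.default_rng(1)
items = rng.normal(size=(52, 5))  # stand-in for N = 52 cases, 5 items
stat, p_value = bartlett_sphericity(items)
```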
A principal component analysis with Varimax rotation revealed two
factors: four questions loaded on the first factor, while a single item, item 3,
loaded on the second factor. This item was removed from the factor analysis and a
one-factor solution emerged, which can be viewed in Table 3.
Finally, a reliability analysis revealed that the four items retained had low
reliability, with a coefficient alpha of only .453, which is regarded as unacceptable
(Hair, 2010). Despite these disappointing results, we chose, based on the
theoretical foundation of the risk scale, to sum the items into one factor and
proceed with regression analysis.
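The coefficient alpha referred to above follows the standard formula α = k/(k − 1) · (1 − Σ item variances / variance of the summed scale). A minimal Python sketch, with invented item scores standing in for the four risk items:

```python
# Sketch of Cronbach's coefficient alpha for a respondents-by-items matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale sum
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([[3, 4, 3, 4],   # invented ratings from 5 respondents
                   [2, 2, 3, 2],
                   [5, 5, 4, 5],
                   [1, 2, 2, 1],
                   [4, 3, 4, 4]])
alpha = cronbach_alpha(scores)     # high alpha: items move together
```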
4.3.4. Task complexity
Task complexity was operationalized by the use of three tasks of varying
complexity. The task of medium complexity was developed by Arisholm and
Sjøberg (2004) and is used in their research as a pretest task in advance of four
tasks of increasing complexity named the "coffee machine tasks". The task of low
complexity was the third of the coffee machine tasks. The task of high
complexity was developed by Bergersen and Gustafsson (2011). Each of the three
tasks we deployed had a level of complexity distinct from the others. The
tasks were given to the respondents in sequence, with the least complex task
first and the most complex last. The complexity increased within the
same dimension, meaning that the same programming language was used in
all tasks, but the demands on the use of this language were increasingly complex.
The tasks were coded 1 – 3, where 1 was low complexity and 3 high complexity.
4.3.5. Perceived uncertainty
The perceived uncertainty measure is based on a combination of two measures
developed by Haerem (2002): perceived task analyzability and perceived task
variability. Each dimension consists of 4 items in the form of questions on which
the respondents rate their perceptions on a scale of 1-7. Both dimensions were
rewritten to reflect the task domain of Java programmers, and in addition
perceived task variability was extended with a fifth question to fully capture the
dimension (Appendix 4).
To establish the construct validity of the scales, a principal component
analysis was conducted; before conducting the analysis we assessed the
correlation matrix for adequacy of factor analysis. Several criteria were used.
First, according to Hair (2010), "a strong conceptual foundation is needed to
support that a structure does exist" (p. 105). The validation by Hærem (2002)
suggests that there in fact is a strong conceptual foundation. Second, it is
recommended that the sample size should be a minimum of 50 with a ratio of 10:1
or more, which is met as there are 94 cases (N = 94) for 9 variables, giving a
sampling ratio of 10,44:1 (Hair, 2010). Third, Tabachnick and Fidell (2001)
recommend an inspection of the correlation matrix for coefficients greater than
.30, and the correlation matrix revealed several coefficients over .30. Fourth, and
finally, both Bartlett's test of sphericity (Bartlett, 1954) and the Kaiser-Meyer-Olkin
(KMO) measure of sampling adequacy (Kaiser, 1970, 1974) were used to assess
the dataset for factor analysis. Bartlett's test of sphericity was significant (p <
.05) and the KMO was above the recommended value of .60, reaching .763, rated
by Hair (2010) as middling, indicating the factorability of the correlation matrix.
The principal component analysis revealed that the third item in the
analyzability scale loaded negatively on both scales, and the item was
consequently removed. After the removal, the factor analysis revealed two
dimensions, as predicted.
The reliability was calculated based on all respondents' perceptions of all
three tasks, as this is a repeated measure. The reliability coefficient alpha was
.756 for the analyzability dimension and .902 for the variability dimension, both
above the recommended cut-off point of .70 and regarded as acceptable
(Hair, 2010). The result is presented in Appendix 4. To create the perceived
uncertainty variable, we summed each dimension and added them together. As the
two dimensions are theoretical and conceptual opposites, the task analyzability
dimension was reversed before the two dimensions were summed.
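As a sketch of this construction, the Python fragment below reverses the analyzability ratings on the 1-7 scale (reversed score = 8 − score) and adds the two dimension sums. The ratings are invented; the item counts (three analyzability items after removal, five variability items) follow the scales described above.

```python
# Sketch of building one respondent's perceived-uncertainty score.
SCALE_MAX = 7

def reverse(score):
    """Reverse-score an item rated on a 1..SCALE_MAX scale."""
    return SCALE_MAX + 1 - score

analyzability = [6, 5, 7]        # high analyzability -> low uncertainty
variability = [2, 3, 2, 1, 2]    # high variability -> high uncertainty

reversed_analyzability = [reverse(s) for s in analyzability]
perceived_uncertainty = sum(reversed_analyzability) + sum(variability)
```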
4.3.6. Overconfidence
Participants were asked to estimate how complete they thought their task
solution would most likely be (Appendix 5). As with perceived uncertainty, this
was a repeated measure: the estimation was done after the specifications for each
of the three tasks were given, and before participants got the opportunity to start
solving the task in question. To calculate overconfidence we used the mean
relative error (MRE), |actual – estimated| / estimated, an accuracy
measure used to assess under- and overconfidence (Jørgensen & Sjøberg,
2003).
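As an illustration of this accuracy measure, the Python sketch below computes the relative error per task and averages it over the three tasks. The estimated and actual completeness scores are invented, not data from the study.

```python
# Sketch of the mean relative error (MRE): |actual - estimated| / estimated,
# averaged over the tasks. All completeness scores below are invented.

def relative_error(actual, estimated):
    return abs(actual - estimated) / estimated

estimated = [80, 60, 50]   # estimated completeness per task (percent)
actual = [60, 60, 20]      # system-scored completeness per task (percent)

mre = sum(relative_error(a, e)
          for a, e in zip(actual, estimated)) / len(actual)
```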
4.3.7. Task Performance
Task performance is a repeated measure, captured by a system-generated
calculation of the completeness of each task. A higher degree of completeness was
interpreted as better task performance and given as a performance score.
5. Results
5.1. Missing data
We conducted a missing value analysis (MVA) to detect missing data in our
dataset. 96 variables and 55 cases, giving a total of 5280 data points, were included
in the analysis. Of the 96 variables, 84 had one or more missing values; these 84
variables had a total of 1237 (23,43 %) missing data points. Further inspection of
the data revealed that of the 55 respondents only 34 (61,82 %) had downloaded
the application needed to solve the programming tasks. This is not a missing data
process that can be classified as ignorable, nor is the data missing at random, and as
such, action needed to be taken (Hair, 2010). Because the measure of risk
propensity was included in the initial questionnaire, and therefore was answered
prior to downloading the task-solving application, a decision was made to divide
the dataset into two sets: one containing all 55 respondents, used to
analyze risk propensity, and one containing the 34 who had downloaded
the task-solving application, used to analyze task performance, perceived
uncertainty and overconfidence. These datasets were then further scrutinized to
identify additional missing data.
In the dataset consisting of 55 respondents, 3 respondents (5,45 %)
completed only parts of the survey and quit without answering the questions
regarding risk propensity. Although, according to Hair (2010), there is no specific
rule of thumb about when to delete respondents, we saw the need to remove these
3 as they did not contribute at all to the dependent variable, and they were
consequently removed from the sample. There were no missing data among the
remaining respondents, giving a sample of 52.
In the dataset consisting of 34 respondents, 1 respondent (2,94 % of the 34)
did not attempt to solve any of the programming tasks and was subsequently
removed from the dataset. Furthermore, 4 respondents did not solve the high
complexity task or answer the questions related to perceived uncertainty or
overconfidence for this task, and one respondent did not answer the analyzability
items for the low complexity task. This gives a total of 1,62 % missing data in the
dataset. As this is a repeated measure, by variable this represents 4,04 % for
performance and overconfidence respectively, and 5,05 % for perceived
uncertainty. Little's MCAR test (Chi-Square = 102.434, df = 121, Sig. = .888)
indicated that the data was indeed MCAR (Hair, 2010). The method used to
handle the missing data was the complete case approach; although this method
has several disadvantages, it was used because the extent of missing data was
sufficiently low and the sample was large enough to warrant it (Hair, 2010).
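The complete case approach amounts to listwise deletion: only respondents with no missing values on the variables in question are kept. A minimal Python sketch with an invented data matrix:

```python
# Sketch of the complete case approach (listwise deletion) with NumPy.
import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, np.nan, 6.0],    # respondent with a missing value
                 [7.0, 8.0, 9.0],
                 [np.nan, 11.0, 12.0]])

complete = data[~np.isnan(data).any(axis=1)]  # keep fully observed rows only
```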
5.2. Assumptions of multiple regression
According to Hair (2010), there are four assumptions for multiple regression:
linearity of the phenomenon, constant variance of the error terms, independence
of the error terms, and normality of the error term distribution. In order to
assess these assumptions, we inspected the residuals for both datasets.
Inspection of the dataset with 52 respondents revealed that the equation
met the assumptions concerning linearity of the phenomenon, constant variance of
the error terms, and independence of the error terms; however, the assumption of
normality of the error term distribution was not met. Several transformation
techniques (Hair, 2010) were tried to correct this, unfortunately with
disappointing results. The variable was therefore used in its original form.
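A residual normality check of this kind can be sketched as follows. This is a hedged illustration on simulated, deliberately skewed residuals; the Shapiro-Wilk test from scipy stands in for the residual diagnostics discussed in Hair (2010) and is not necessarily the test used in the thesis.

```python
import numpy as np
from scipy import stats

# Simulated, deliberately right-skewed "residuals" -- placeholders for the
# regression residuals inspected in the thesis, not the actual data.
rng = np.random.default_rng(1)
residuals = rng.exponential(scale=1.0, size=52)

# Shapiro-Wilk test: a small p-value means normality is rejected.
w_raw, p_raw = stats.shapiro(residuals)

# One common corrective transformation (Hair, 2010): the natural log.
w_log, p_log = stats.shapiro(np.log(residuals))

print(f"raw: W = {w_raw:.3f}, p = {p_raw:.4f}; log: p = {p_log:.4f}")
```

As in the thesis, a transformation does not guarantee normality; the test simply has to be rerun on the transformed variable and the better-fitting form retained.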
The inspection of the dataset with 33 respondents revealed that all three
regression equations met the assumptions of linearity and independence of the
error terms. The assumptions of constant variance of the error terms and
normality of the distribution were, however, not met for any of the equations.
Several transformation techniques were utilized to accommodate this shortcoming,
but the data showed the best fit in their original, untransformed form (Hair,
2010). It is, however, argued that the problem of nonnormality becomes smaller
when the sample is larger than 50 (Hair, 2010).
Another important assumption for utilizing regression analysis is that the
variables do not correlate to a large extent. Multicollinearity leads to shared
variance between variables, decreasing their ability to predict the dependent
variable in question, as well as the ability to decipher their individual
effects (Hair, 2010). As we are analyzing interaction effects, we centered the
independent variables in both datasets, as recommended by Aiken and West (1991),
to avoid multicollinearity. Furthermore, we ran multicollinearity statistics on
all four regression equations to reveal whether it would still be a problem.
None of them showed any problems with multicollinearity: all variables had
tolerance levels above the recommended cut-off value of .10, with the majority
above .90, and VIF values below 2.0, with the majority between 1.0 and 1.5
(Hair, 2010).
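The centering and collinearity check described above can be sketched as follows. The data are simulated and the variable names are illustrative placeholders for the thesis variables; the point is only to show mean-centering before forming the product term (Aiken & West, 1991) and the tolerance/VIF computation.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated predictors standing in for the thesis variables.
rng = np.random.default_rng(2)
n = 94
expertise = rng.normal(0.0, 0.8, n)
complexity = rng.choice([1.0, 3.0], n)     # low / high task complexity

# Mean-center before forming the interaction term (Aiken & West, 1991).
expertise_c = expertise - expertise.mean()
complexity_c = complexity - complexity.mean()

X = pd.DataFrame({
    "const": np.ones(n),                   # intercept column for the check
    "expertise": expertise_c,
    "complexity": complexity_c,
    "interaction": expertise_c * complexity_c,
})

# VIF per predictor (skipping the constant); tolerance is its reciprocal.
vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
tolerance = [1.0 / v for v in vif]
print("VIF:", [round(v, 2) for v in vif])
print("Tolerance:", [round(t, 2) for t in tolerance])
```

With centered predictors the product term is nearly orthogonal to its components, so the VIF values stay close to 1 and tolerance close to 1, mirroring the pattern reported above.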
5.3. Descriptive statistics
Table 1 presents the descriptive statistics and correlations among the variables
in the sample consisting of the 52 respondents who answered the initial
questionnaire regarding background and risk propensity. This sample was used
solely for testing the hypothesis regarding risk propensity. Note that Education
Rank is the education dimension and is coded 1-7 depending on the relevance of
education. The hypothesis regarding risk propensity, H6, gains no preliminary
support, as none of the relationships between the variables are significant.
Table 1
Means, Standard Deviations, and Intercorrelations of Degree of expertise,
Education Rank and Risk propensity

Variable              M        SD        1       2       3
1. XP1XP3             .0000    .81550    -
2. Education Rank     4.19     1.609     .032    -
3. Risk propensity    3.2837   .92624    .091    .137    -
* p < .05. ** p < .01.
Table 2 presents the descriptive statistics and the intercorrelations among the
variables in the sample consisting of 33 respondents.
Table 2 indicates that Task Complexity correlates negatively with Performance
(r = -.465, p < .01) and Overconfidence (r = -.399, p < .01), giving some
preliminary support for H1, H4 and H7. Furthermore, Performance is negatively
correlated with Perceived uncertainty (r = -.268, p < .01).
Table 2
Means, Standard Deviations, and Intercorrelations of the independent and
dependent variables

Variable                   M         SD        1         2         3        4         5        6
1. Degree of expertise     .0000     .809      -
2. Task Complexity         2.00      .821      .000      -
3. Education Rank          4.273     1.609     -.030     .000      -
4. Performance             72.549    39.715    .185†     -.465**   -.016    -
5. Overconfidence          16.641    35.098    .158      -.399**   -.027    .875**   -
6. Perceived Uncertainty   2.066     .928      -.315**   -.068     .061     -.268**  -.145    -
† p < .10. * p < .05. ** p < .01.
Hypothesis testing
Table 3 presents the results of the regression analysis.
Columns 1, 3 and 5 of Table 3 indicate a significant main effect of degree
of expertise on performance, perceived uncertainty and overconfidence (b =
9.959, p < .05; b = -.310, p < .01; and b = -7.493, p < .10, respectively). Note
that when regressing the independent variable and the moderators on perceived
uncertainty, we control for performance, as performance and perceived
uncertainty had a significant negative correlation (p < .01) and performance
seemed to have a significant (p < .10) main effect on perceived uncertainty.
Columns 2 and 6 indicate significant interaction effects between degree of
expertise and task complexity on performance (b = 11.510, p < .05) and
overconfidence (b = -10.722, p < .05). There are, however, no significant
interaction effects between degree of expertise and education rank on either
performance or overconfidence, leaving H2, H3 and H6 unsupported. Column 4
indicates a significant interaction effect between degree of expertise and
education rank on perceived uncertainty (b = .129, p < .10), but no significant
interaction effect between degree of expertise and task complexity on perceived
uncertainty, leaving H9 unsupported.
To test the remaining hypotheses, we analyzed the interaction effects for
each hypothesis. Two regression lines are plotted for each dependent variable:
one for a high level of the moderator and one for a low level of the moderator
(Aiken & West, 1991).
Table 3
Summary of regression analysis for variables Performance, Perceived
uncertainty and Overconfidence

                             Performance             Perceived uncertainty   Overconfidence
Dependent Variable           1          2            3          4            5           6
Constant                     71.323**   71.008**     2.087**    2.087**      -17.578**   -17.871**
                             (3.574)    (3.525)      (.090)     (.090)       (3.300)     (3.253)
Expertise                    9.959*     10.554*      -.310**    -.282*       7.493†      8.050*
                             (4.411)    (4.376)      (.116)     (.116)       (4.072)     (4.039)
                             .204       .216         -.273      -.248        .174        .187
Task complexity              -23.172**  -23.711**    -.012      -.031        -17.572**   -18.075*
                             (4.426)    (4.368)      (.132)     (.132)       (4.086)     (4.031)
                             -.473      -.484        -.010      -.027        -.406       -.418
Education Rank               .373       .314         .036       .042         -.637       -.691
                             (2.281)    (2.252)      (.057)     (.057)       (2.106)     (2.078)
                             .015       .012         .061       .071         -.029       -.031
Performance (control)                                -.005†     -.006
                                                     (.003)     (.003)
                                                     -.219      -.238
Expertise x Complexity                  11.510*                 .043                     10.722*
                                        (5.377)                 (.140)                   (4.962)
                                        .191                    .031                     .201
Expertise x Education Rank              1.126                   .106†                    1.061
                                        (2.454)                 (.063)                   (2.264)
                                        .041                    .167                     .044
R²                           .258       .295         .149       .176         .190        .232
N                            94         94           93         93           94          94
F                            10.545**   7.462**      2.967**    3.105**      7.118**     5.367**

Note. The regression parameter appears above the standard error (in
parentheses), with the standardized coefficient below.
† p < .10. * p < .05. ** p < .01.
Figure 2 plots the effects of degree of expertise on performance for high and
low task complexity. It indicates, as hypothesized, that a higher degree of
expertise results in higher performance on high task complexity (slope p < .01),
thus supporting H4. The slope plotting degree of expertise against performance
on low complexity tasks indicates almost no difference in performance along the
expertise continuum; furthermore, the slope is not significant (p = .643).
Figure 2
The risk propensity variable was tested based on the theoretical foundation
even though it did not have sufficient construct validity. The result, however,
was somewhat disappointing, with no main effect of expertise on risk propensity
and an interaction term far from significant, leaving H6 unsupported.
Figure 3 plots the effects of degree of expertise on overconfidence for high
and low task complexity. The slope for high task complexity indicates that as
the degree of expertise increases, overconfidence decreases. This is supported
by the significant slope (p < .01), and thus H7 is supported. On low complexity
tasks there seems to be a small decrease in overconfidence; the slope is,
however, not significant (p = .654).
Figure 3 (note that the scale for overconfidence is reversed for easier interpretation)
Figure 4 depicts the effect of degree of expertise on perceived uncertainty
moderated by relevance of education. It indicates that at a low degree of
expertise, a high level of relevant education reduces perceived uncertainty
compared to a low level of relevant education, albeit by a very small
difference. Furthermore, it indicates that this relationship changes as the
degree of expertise increases. Even though perceived uncertainty decreases from
a low to a high degree of expertise, it decreases more for low relevance of
education than for high relevance of education, showing that at a high degree
of expertise, a higher relevance of education results in more perceived
uncertainty than a low relevance of education. The slope for low education rank
is significant (p < .01), while the slope for high education rank is not
(p = .495). This leaves H8 only partially supported.
Figure 4
5.4. Post-hoc
As the factor analyses of the items making up the expertise construct revealed
two factors with seemingly different properties of expertise, we wanted to
explore whether there were any interaction effects between the two. We tested
this by regressing the experience variables, and the interaction between them,
on the dependent variables. Only perceived uncertainty proved significant, and
the results are presented in Table 4.
Column 1 indicates significant main effects of both experience variables on
perceived uncertainty, and column 2 also shows a significant interaction
between the two (b = -.262, p < .01). Figure 5 plots the effects of length of
experience for high and low degree of specialization on perceived uncertainty.
Table 4
Dependent Variable: Perceived uncertainty

                                  1          2
Constant                          2.071**    2.146**
                                  (.080)     (.075)
Length of experience              -.539**    -.473**
                                  (.083)     (.077)
                                  -.590      -.518
Specialization of experience      .192*      .366**
                                  (.085)     (.088)
                                  .205       .391
Length of experience x                       -.262**
Specialization of experience                 (.061)
                                             -.403
R²                                .317       .431
N                                 94         94
F                                 21.315**   22.981**
† p < .10. * p < .05. ** p < .01.
Figure 5 indicates that programmers with a high level of specialized experience
have higher perceived uncertainty throughout the length-of-experience continuum
compared to programmers with a lower degree of specialized experience.
Figure 5
6. Discussion
In this thesis, we have investigated how education and task complexity
influence individuals of varying degrees of expertise. Four outcome variables of
relevance have been considered: risk propensity, perceived uncertainty,
overconfidence and task performance. We have included two separate moderators
in our research: education, which is seen as moderating the relationship between
expertise and all four outcome variables; and task complexity, as a moderator
of the relationship between expertise and perceived uncertainty, overconfidence
and task performance. Task complexity is not included as a moderator of the
relationship between expertise and risk propensity because risk is
operationalized as an individual's general and intrinsic tendency to choose a
risky option, which differs from risk-taking behavior as such (Lejuez et al.,
2002). Risk propensity is thus not seen in relation to any particular task, in
contrast to the other variables, which are related to the tasks used in the
survey.
We hypothesized the following for the moderating effect of education:
a high relevance of education should contribute to better task performance as
expertise increases, in contrast to the contribution from education of low
relevance. A highly relevant education should contribute to a decrease in both
risk propensity and overconfidence as expertise increases. A high relevance of
education should lead to less perceived uncertainty when individuals have a low
degree of expertise, and more perceived uncertainty when the degree of expertise
is high. For the moderating effect of task complexity, the following was
hypothesized: a higher degree of expertise should provide better performance on
tasks of high complexity, compared to a low degree of expertise. A higher degree
of expertise should decrease overconfidence when the task has high complexity,
compared to a low degree of expertise. A higher degree of expertise should
contribute to a decrease in perceived uncertainty when the tasks have high
complexity, compared to a low degree of expertise.
The results indicate that both task complexity and education play a role in
some of the aspects in which individuals with varying degrees of expertise
approach and perform domain specific tasks. For the moderating effect of task
complexity, our expectation was supported for task performance and
overconfidence, but not for perceived uncertainty. For the moderating effect of
education, the expectation that a high relevance of education should foster more
perceived uncertainty among individuals of high expertise was not supported.
However, we found support for the expectation that perceived uncertainty should
decrease with low relevance of education when expertise increases. With regard
to task performance, risk propensity and overconfidence, we found no support. In
the following, possible explanations will be presented for each of the two
moderators and the relationships they were expected to moderate.
6.1. Possible explanation: Education
Although expertise is domain specific (Ericsson & Lehman, 1996; Ericsson,
2005; Haerem & Rau, 2007; Sonnentag et al., 2006) and relevant education should
enhance knowledge strength in the given domain, which in turn should improve
task performance (Ng & Feldman, 2009), our results do not confirm this
relationship. According to theory, feedback should increase task performance
(Barnett & Koslowski, 2002). Even though the educational context provides solid
feedback throughout knowledge acquisition (Friedlander et al., 2011), which
consequently should mean that domain related feedback is given more consistently
to those with a more relevant educational background, our results show no
increase in task performance. Drawing on this, experience might be considered a
more dependable predictor of task performance where high complexity programming
tasks are concerned. This confirms Chase and Simon's (1973) assumption that
task-specific knowledge at an expertise level must have been acquired through
experience. One may assume that education should be considered merged with
experience (Sanjram & Kahn, 2011). Further, a possible explanation for why this
might be the case, despite feedback situations being expected to give a higher
level of expertise (Barnett & Koslowski, 2002), is that knowledge development,
independent of learning context, increases linearly with experience for a
period of five years before it flattens out (Schmidt et al., 1986). This further
supports the view of education merged with experience (Sanjram & Kahn, 2011):
whether or not you have relevant education does not matter, it is the length of
experience, up to five years (Schmidt et al., 1986), that counts. One can also
reflect on whether education and feedback can provide a better type of knowledge
acquisition than a self-taught approach in a domain where strict rules and
formality are the major characteristics (Jørgensen et al., 2003). Possibly, the
domain is not suitable for education to be utilized throughout with regard to
task performance.
Overconfidence is related to the domain directly (Moore & Healy, 2008),
and we hypothesized that more relevant education should lead to less
overconfidence (inaccuracy of one's beliefs (Moore & Healy, 2008)) based on the
same reasoning as previously: relevant education should provide insight also
into what one does not know, and thereby decrease overconfidence about how well
one will perform. Our results do not support this relationship. A possible
explanation is that years of education might be more appropriately seen as
years of experience (Sanjram & Kahn, 2011), which would mean that the relevance
of the education does not affect the relationship. Hereby, if you possess more
relevant education, it counts only as more years of experience. In line with
this reasoning, the main advantage of education, namely feedback (Friedlander
et al., 2011; Ericsson, 1996), is not particularly present in software
development and programming (Jørgensen et al., 2003), where forward reasoning is
used in problem solving (Hærem, 2002). Thereby, education in this domain does
not provide insight into meta-knowledge, and individuals with expertise tend to
be overconfident (McKenzie et al., 2008).
Following the reasoning that deliberate practice contributes to expertise,
and has similarities with education in the magnitude of domain related feedback
(Ericsson, 2008), perceived uncertainty will possibly be affected by this
element. Education may provide insight into the theories and knowledge
underlying the current topic (Ericsson, 2008; Barnett & Koslowski, 2002), which
in turn may lead to insight into one's own knowledge such that one's limitations
are also understood. Broadness in knowledge acquisition may also lead to an
assessment of which theory should be deployed (Friedlander et al., 2011) in a
certain task-solving situation. Because education can also be considered
experience (Sanjram & Kahn, 2011), and the amount of experience predicts the
level of expertise (Chase & Simon, 1973), we assumed that individuals with a low
level of expertise and education of low relevance would have less perceived
uncertainty. This is assumed because education of low relevance at a lower
degree of expertise may provide acquaintance with theories and knowledge that
contributes to a general feeling of certainty (Plous, 1993; Slovic et al.,
2004). Individuals at low levels of expertise with relevant education might not
have developed these heuristics (Slovic et al., 2004). In our case, we can
predict this direction, but without significant findings on this relationship it
cannot be concluded. When the individual possesses a low level of expertise and
a low level of education, deeper insight into underlying theories will be
absent due to the lack of both experience and education (Friedlander et al.,
2011; Sonnentag et al., 2006).
Perceived uncertainty relates directly to the task and includes the
perception of the task's complexity along two dimensions: perceived
analyzability and perceived variability (Hærem & Rau, 2007). Our expectation was
met with regard to a low level of education relevance and its influence on
perceived uncertainty when the level of expertise is low. This tells us that
individuals with a lower level of expertise will experience more uncertainty
when approaching programming tasks if the relevance of their education is low.
Following the assumption from Sanjram and Kahn (2011) that education and
experience can be merged and seen as equally contributing, a possible
explanation of the result can relate to Plous' (1993) argument that knowledge
and information cues affect the perception of the situation. Plous (1993) argues
that trivial information about a situation increases the feeling of certainty
about a specific case. Based on this, education of low relevance at a lower
level of expertise may provide information cues that foster a higher level of
perceived certainty.
The reasoning is that when a programmer with education of low relevance
is solving programming tasks, her or his education has presumably introduced
knowledge that enhances the feeling of certainty, not in-depth knowledge that
contributes to meta-knowledge about one's weaknesses within the domain.
Moreover, an increasing degree of expertise can be considered as confirming
one's performance, which in turn fosters more certainty where logic and
straightforward reasoning are present (Hærem, 2002).
Our findings on risk propensity show no significance, neither in the regression
nor for the measure of this construct. This will be discussed under limitations.
6.2. Possible explanation: Task complexity
For the moderating effect of task complexity, we found support for the
expectation that a high level of expertise should lead to increased performance
on tasks of high complexity, compared to a low level of expertise. The same
pattern, in the opposite direction, applies for overconfidence: a high level of
expertise leads to less overconfidence when the complexity of the task is high,
compared to a low level of expertise. The expectation that a higher level of
expertise should decrease perceived uncertainty on tasks of high complexity,
compared to a low level of expertise, was not supported by our results.
Expertise is in our research operationalized as length of experience in the
domain, which in this case is Java programming. We theorized that expertise
develops as a function of time spent within the domain (Sanjram & Kahn, 2011);
longer experience would lead to more domain related knowledge (Schmidt, Hunter
& Outerbridge, 1986), which in turn is an antecedent of programming skill
(Bergersen & Gustafsson, 2011). Because tasks of higher complexity put more
demand on the knowledge and skills of the task doer (Wood, 1986), the level of
expertise should play an increasingly important role for these tasks to be
performed well. The confirming results tell us that individuals with longer
experience in the domain have the ability to use multiple cognitive paths
(Campbell, 1988) and forward reasoning strategies where the breadth of the
problem solution is to be developed (Haerem, 2002), which is needed for solving
programming tasks of high complexity. This distinguishes them from individuals
with little experience.
When individuals are overconfident, in the sense termed overprecision,
their estimates produce a narrower probability distribution than reality
warrants (Moore & Healy, 2008). We found that a high level of expertise reduces
overconfidence, compared to a low level of expertise, when the individual is
asked to estimate how completely she or he has solved a given task. As a task of
higher complexity puts increased demands on the task doer (Wood, 1986),
experience should play an increasing role in predicting how completely one is
able to solve the task; this despite the claims that experts are overconfident
(McKenzie et al., 2008) and that software programming lacks contributing
feedback (Jørgensen et al., 2003). The results confirm that more experience
provides opportunities for repetition of the stable rules of the programming
language deployed when solving programming tasks (Haerem, 2002), which is
fundamental for learning by experience (Friedlander et al., 2011). Conclusively,
with regard to our results, more experience improves the perception of one's own
capability in relation to the end state of tasks of high complexity (Campbell,
1988).
The perception of uncertainty relates directly to the complexity of a certain
task (Hærem & Rau, 2007). Because more experience provides a more solid
platform for repetition and internalization (Friedlander et al., 2011) of the
stable rules used in programming (Hærem, 2002), we expected that individuals
with higher expertise would adopt a more logical, systematic and analytical
approach when meeting tasks of high complexity (Hærem & Rau, 2007; Perrow,
1967). This implies that a high level of expertise should reduce perceived
uncertainty; however, our results did not support this assumption. A possible
explanation can be that even though software programming deploys stable rules
and the same programming language (Hærem, 2002), which in this domain is Java,
the structure of the tasks used in our survey may not match the experience
respondents have. Hereby, individuals might have tended towards an approach with
more insecurity, chance taking and guesswork, which naturally evokes the
perception of uncertainty (Hærem & Rau, 2007), leading participants to report
more perceived uncertainty.
6.3. Post hoc
The finding that specialization moderates length of experience is
interesting, especially in light of the possibility that participants in the
survey might have been confused when self-reporting education and length of
experience, which were intended to be distinguished. As it turns out, degree of
specialization interacts with length of experience in much the same way as we
anticipated that education would interact with experience; however, the findings
regarding high relevance of education were inconclusive. As degree of
specialization is measured by the length of the current Java development period,
the time spent coding and self-rated Java expertise, day-to-day Java
programming seems to be important for perceiving uncertainty. The
specialization may contribute to the development of the meta-knowledge that we
expected relevant education to provide.
7. Practical Implications
Education among individuals with varying degrees of expertise seems to be of
less relevance than expected. Transferring this to a real life scenario, we
assume that recruitment settings are a situation in which the contribution of
this research may be of interest. With regard to education and its influence on
the different aspects presented, our results point in the same direction:
relevant education may be considered part of the experience within the domain
of programming. Hereby, job applicants with a certain length of experience
within the domain may be seen as equal to those with the same length of relevant
education when task performance and overconfidence are considered. Nonetheless,
among individuals with education of low relevance, those who have a shorter
length of experience tend to perceive more uncertainty when facing programming
tasks than more experienced programmers. This might have implications for
organizations recruiting for contexts in which chance taking must be kept at a
minimum, such as software programming of medical equipment like x-ray machines,
where faults have detrimental consequences.
The findings of this research may also be of interest for those who
consider taking further education in software programming. The cost in time and
money of attending a course may be evaluated against the possibility of learning
through experience, for instance in a current work situation where trial and
error provides knowledge development. Our results show that education provides
neither better task performance nor deeper insight into one's own knowledge
development, which might stem from the stable rules and high predictability that
characterize this domain (Hærem, 2002). Because of this we do not intend to
generalize the findings to other domains.
As experience has proven to be more important on tasks of high complexity
with regard to performance and overconfidence, this may imply that recruitment
and interest in investing in young talent should not come at the expense of
retaining more mature programmers.
8. Limitations
Both practical and theoretical limitations may have affected the results of this
study, and it is therefore necessary to point to some of these issues.
Sample wise, there may have been a problem with participants coming from
different cultures. This was not controlled for, but considering that a large
part of the sample came from Vietnam, this may be a limitation. Looking at
Hofstede's (1983, 1994) research on differences in national cultures, there are
profound differences between Western and Asian cultures. Looking at Norwegian
and Vietnamese cultures specifically, as they made up most of the sample, there
are noticeable differences in uncertainty avoidance, with Vietnamese culture
being much more inclined to avoid uncertainty. This weakness could have been
reduced by using participants originating from the same geographical area.
Another limitation is the lack of external validity of the expertise measure.
We could not establish external validity of the expertise construct, as we did
not have a reflective measure with which to create a MIMIC model, as
recommended by Diamantopoulos and Winklhofer (2001) and Hair (2010).
A third limitation of the study was the instrument used to measure risk
propensity. The rephrasing of Calantone, Garcia, and Dröge's (2003) measure,
aimed towards programming, failed to meet the required level of construct
validity, and furthermore it proved far from significant in the regression
analysis. It was measured in the dataset consisting of 52 respondents, giving a
ratio of 13:1 for the 4 questions chosen to represent risk propensity after the
factor analysis; as such, the sample size needed for validation was met (Hair,
2010). Conclusively, the adjusted tool for measuring risk propensity does not
apply to programmers, and other options should be explored to find a proper
measure of risk propensity amongst programmers.
Participants might have understood education as part of their experience
and thereby reported their length of experience as including the length of
their education. The possibility of this mistake is present despite attempts to
phrase the questions probing this in a manner not easily misunderstood, and it
may have made the expertise measure ambiguous.
A wide range of companies with relevance to the current competency was
represented. Despite the vast number of connections used for this purpose, the
actual number of respondents to the survey was moderate. Several of the
companies engaged in a dialog about establishing a two-way contribution and were
given information about the reward for providing respondents, but unfortunately
the cost of participating seemed to be too high. The reward was individual
feedback on the test, which in fact is insight into how the respondents would
perform on a validated test program for certification of Java developers. We
believe that the main drawback preventing individuals from participating was
time consumption: spending time off work on a job related activity might not be
too appealing. This is further strengthened by the large number of paid
participants from Vietnam, compared to the number of voluntary participants.
There are also several limitations to how the experiment was conducted.
The experiment in itself was two-fold: first, one answered a survey about
background information and risk propensity; then, one had to download an
application to complete the programming tasks. As shown in the missing values
analysis, this in itself was enough to make participants not complete the
experiment. Furthermore, by conducting the experiment over the Internet, there
was no possibility to control the task environment; thus the environment the
respondent was in could have influenced the outcome.
Together, the limitations mentioned lead to serious threats to internal
validity. Because of the lack of experimental control, it is difficult to
determine whether extraneous variables influenced the outcome. According to
Singleton and Straits (2010), this threatens the internal validity of the study
because it makes it harder to establish a causal link between the independent
and the dependent variables.
9. Conclusion
The aim of this research was to develop insight into how education and task
complexity affect individuals with varying degrees of expertise. Through an
experiment, we investigated risk propensity, perceived uncertainty,
overconfidence, and task performance in order to reveal tendencies affected by
education and the complexity of tasks. The domain chosen for the investigation
was Java programming. Our results indicate that education plays a minor role
in how these individuals perceive the uncertainty of the tasks, how
overconfident they feel, and how they perform when solving the tasks.
Education did influence the perception of uncertainty among individuals with a
low degree of expertise: those with less experience and less relevant
education felt more certain when facing software-programming tasks than those
with a higher degree of expertise. We conclude that, for this domain,
education may be considered as merged with experience. The theoretical
argumentation indicates that our findings do not necessarily apply to other
domains that differ from the current one. On the other hand, we found that
individuals with a high degree of expertise performed better and were less
overconfident than individuals with a low degree of expertise when solving
tasks of high complexity. In sum, we conclude that, whether or not you have an
education, it is the length of experience within the domain that matters most.
10. References
Aiken, L. S., & West, S. G. (1991). Multiple Regression. United States of
America: Sage Publications.
Anderson, J. R., Farrell, R., & Sauers, R. (1984). Learning to program in LISP.
Cognitive Science, 8: 87 – 129.
Arisholm, E., Gallis, H., Dybå, T., & Sjøberg, D. I. K. (2007). Evaluating Pair
Programming with Respect to System Complexity and
Programmer Expertise. IEEE Transactions on Software
Engineering, 33 (2): 65 – 86.
Arisholm, E., & Sjøberg, D. I. K. (2004). Evaluating the Effect of a Delegated
versus Centralized Control Style on the Maintainability of
Object-Oriented Software. IEEE Transactions on Software Engineering,
30 (8): 521 – 534.
Barnett, S. M., & Koslowski, B. (2002). Adaptive Expertise: Effects of Type of
Experience and the Level of Theoretical Understanding it
Generates. Thinking and Reasoning, 8 (4): 237 – 267.
Barone, C., & van de Werfhorst, H. G. (2011). Education, Cognitive Skills and
Earnings in Comparative Perspective. International Sociology,
26 (4): 483 – 502.
Bartlett, M. S. (1954). A note on the multiplying factors for various chi square
approximations. Journal of the Royal Statistical Society, 16: 296 –
298.
Bergersen, G. R., Dybå, T., Hannay, J. E., Karahasanović, A., & Sjøberg, D. I.
K. (2011). Inferring Skill from Tests of Programming
Performance: Combining Time and Quality. In press, 1 - 10.
Bergersen, G. R., & Gustafsson, J. E. (2011). Programming Skill, Knowledge, and
Working Memory among Professional Software Developers from
an Investment Theory Perspective. Journal of Individual
Differences, 32 (4): 201 – 209.
Bonner, S. (1994). A model of the effects of audit task complexity. Accounting,
Organizations and Society, 19 (3): 213 – 234.
Burgess, P. W., Veitch, E., de Lacy Costello, A., & Shallice, T. (2000). The
Cognitive and Neuroanatomical Correlates of Multitasking.
Neuropsychologia, 38 (6): 848 – 863.
Side 41
Master Thesis GRA 19003
03.12.2012
Calantone, R., Garcia, R., & Dröge, C. (2003). The Effects of Environmental
Turbulence on New Product Development Strategy Planning.
Journal of Product Innovation Management, 20: 90 – 103.
Campbell, D. J. (1988). Task Complexity: A Review and Analysis. Academy of
Management Review, 13: 40 – 52.
Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In W. G. Chase
(Ed.), Visual Information Processing, pp. 215 – 281. New York:
Academic Press.
Christensen-Szalanski, J. J. J., Beck, D. E., Christensen-Szalanski, C. M.,
Koepsell, T. D. (1983). Effects of Expertise and Experience on
Risk Judgment. Journal of Applied Psychology, 68 (2): 278 – 284.
Diamantopoulos, A., & Winklhofer, H. M. (2001). Index Construction with
Formative Indicators: An Alternative to Scale Development. Journal of
Marketing Research, 38 (2): 269 – 277.
Ericsson, K. A. (2005). Recent Advances in Expertise Research: A Commentary on
the Contributions of the Special Issue. Applied Cognitive
Psychology, 19 (2): 233 – 241.
Ericsson, K. A. (2008). Deliberate Practice and Acquisition of Expert
Performance: A General Overview. Academic Emergency
Medicine, 15: 988 – 994. DOI: 10.1111/j.1553-2712.2008.00227.x
Ericsson, K. A., & Lehmann, A. C. (1996). Expert and Exceptional Performance:
Evidence of Maximal Adaptation to Task Constraints. Annual
Review of Psychology, 47: 273 – 305. DOI:
10.1146/annurev.psych.47.1.273
Ericsson, K. A., & Smith, J. (1991). Toward a General Theory of Expertise:
Prospects and Limits. Cambridge: Cambridge Univ. Press.
Friedlander, M. J., Andrews, L., Armstrong, E. G., Aschenbrenner, C., Kass, J. S.,
Ogden, P., Schwartzstein R., & Viggiano, T. R. (2011) What Can
Medical Education Learn From the Neurobiology of Learning?
Academic Medicine, 86 (4): 415 – 420.
Garb, H. N. (1989). Clinical Judgment, Clinical Training, and Professional
Experience. Psychological Bulletin, 105 (3): 387 – 396.
Gripsrud, G., Olsson, U. H., & Silkoset, R. (2004). Metode og Dataanalyse.
Norway: Høyskoleforlaget.
Haerem, T. (2002). Task Complexity and Expertise as Determinants of Task
Perception and Performance, Why Technology-Structure Research
has been unreliable and inconclusive. Series of Dissertations, 5.
Norwegian School of Management BI: Sandvika.
Haerem, T., Kuvaas, B., Bakken, B. T., & Karlsen, T. (2010). Do Military
Decision Makers Behave as Predicted by Prospect Theory? Journal
of Behavioral Decision Making, 24 (5): 482 – 497. DOI:
10.1002/bdm.704
Haerem, T., & Rau, D. (2007). The Influence of Degree of Expertise and
Objective Task Complexity on Perceived Task Complexity and
Performance. Journal of Applied Psychology, 92 (5): 1320 – 1331.
Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010).
Multivariate Data Analysis. New Jersey: Pearson Prentice Hall.
Hofstede, G. (1983). National Cultures in Four Dimensions: A Research-Based
Theory of Cultural Differences among Nations. International Studies of
Management & Organization, 13 (1/2): 46 – 74.
Hofstede, G. (1994). The Business of International Business is Culture.
International Business Review, 3 (1): 1 – 14.
Jørgensen, M., Teigen, K. H., & Moløkken, K. (2003). Better Sure than Safe?
Overconfidence in Judgment Based Software Development Effort
Prediction Intervals. Journal of Systems and Software, 70 (1-2): 79
– 93. http://dx.doi.org/10.1016/S0164-1212(02)00160-7
Kaiser, H. F. (1970). A Second Generation Little Jiffy. Psychometrika, 35 (4):
401 – 415.
Kaiser, H. F. (1974). An Index of Factorial Simplicity. Psychometrika, 39 (1):
31 – 36.
Kendell, R. E. (1973). Psychiatric diagnoses: A study of how they are made.
British Journal of Psychiatry, 122: 437 – 445. DOI:
10.1192/bjp.122.4.437
Keren, G. (1987). Facing Uncertainty in the Game of Bridge: A Calibration Study.
Organizational Behavior and Human Decision Processes, 39: 98 –
114.
Leigh, B. C. (1999). Peril, chance and adventure: Concepts of risk, alcohol use
and risky behavior in young adults. Addiction, 94 (3): 371 – 383.
Lejuez, C. W., Richards, J. B., Read, J. P., Kahler, C. W., Ramsey, S. E., Stuart,
G. L., Strong, D. R., & Brown, R. A. (2002). Evaluation of a
Behavioral Measure of Risk Taking: The Balloon Analogue Risk
Task (BART). Journal of Experimental Psychology: Applied, 8 (2): 75 – 84.
MacCrimmon, K. R., & Wehrung, D. A. (1990). Characteristics of Risk Taking
Executives. Management Science, 36 (4): 422 – 435.
March, J. G., & Shapira, Z. (1987). Managerial Perspectives on Risk and Risk
Taking. Management Science, 33 (11): 1404 - 1418.
McKenzie, C. R. M., Liersch, M. J., & Yaniv, I. (2008). Overconfidence in
interval estimates: What does expertise buy you? Organizational
Behavior and Human Decision Processes, 107 (2): 179 – 191.
Moore, D. A., & Healy, P. J. (2008). The Trouble With Overconfidence.
Psychological Review, 115 (2): 502 – 517.
Ng, T. W. H., & Feldman, D. C. (2009). How Broadly Does Education Contribute
to Job Performance? Personnel Psychology, 62 (1): 89 – 134.
Nunnally, J. C. (1978). Psychometric Theory. New York: McGraw-Hill.
Perrow, C. (1967). A Framework for the Comparative Analysis of Organizations.
American Sociological Review, 32 (2): 194 – 208.
Plous, S. (1993). The Psychology of Judgment and Decision Making. New York,
United States: McGraw-Hill, Inc.
Sanjram, P. K., & Khan, A. (2011). Attention, polychronicity, and expertise in
prospective memory performance: Programmers’ vulnerability to
habit intrusion error in multitasking. International Journal of
Human-Computer Studies, 69 (6): 428 – 439.
Schmidt, F. L., & Hunter, J. E. (1998). The Validity and Utility of Selection
Methods in Personnel Psychology: Practical and Theoretical
Implications of 85 Years of Research Findings. Psychological
Bulletin, 124 (2): 262 – 274.
Schmidt, F. L., Hunter, J. E., & Outerbridge, A. N. (1986). Impact of Job
Experience and Ability on Job Knowledge, Work Sample
Performance, and Supervisory Ratings of Job Performance. Journal
of Applied Psychology, 71 (3): 432 – 439.
Singleton, R. A., Jr., & Straits, B. C. (2010). Approaches to Social Research.
New York: Oxford University Press.
Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as
Analysis and Risk as Feelings: Some Thoughts About Affect,
Reason, Risk and Rationality. Risk Analysis, 24(2): 311 – 322.
Sonnentag, S., Niessen, C., & Volmer, J. (2006). Expertise in Software Design.
In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman
(Eds.), The Cambridge Handbook of Expertise and Expert
Performance. New York: Cambridge University Press.
Summers, B., Williamson, T., & Read, D. (2004). Does method of acquisition
affect the quality of expert judgment? A comparison of education
with on-the-job learning. Journal of Occupational and
Organizational Psychology, 77 (2): 237 – 258.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture
and instructional design. Educational Psychology Review, 10
(3): 251 – 296.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th edn).
New York: HarperCollins. Chapter 13.
Van de Ven, A. H., Delbecq, A. L., & Koenig, R., Jr. (1976). Determinants of
Coordination Modes within Organizations. American Sociological
Review, 41 (2): 322 – 338.
Wood, R. E. (1986). Task Complexity: Definition of the Construct. Organizational
Behavior and Human Decision Processes, 37: 60 – 82.
Appendix 1
Expertise
Please state the length of your experience (in years and months; for example,
enter 2 years, 6 months for two and a half years)
Years
Months
Total programming experience (All programming languages
that you use):
Total Java programming experience:
Your current continuous work period of programming (all
programming languages that you use). If you are currently not
in a period of programming, please enter 0:
Your current continuous work period of Java programming. If
you are currently not in a period of Java programming, please
enter 0:
Self-reported skill assessment
Please indicate your skills in programming (all programming languages that you
use)
On a scale from 1-10, where 10 is Best and 5 is average, I assess my skills in
programming to be:
Please indicate your skill in Java programming:
On a scale from 1-10, where 10 is Best and 5 is average, I assess my skills in
Java programming to be:
Please assess your skill in estimating the number of work hours needed in
software development projects.
On a scale from 1-10, where 10 is Best and 5 is average, I assess my skills in
estimating the number of work hours needed in software development projects to
be:
How many software development projects have you participated in? No. (0-50)
Number of projects
Which software development project roles have you held? Please tick the
applicable boxes:
Junior developer
Intermediate developer
Senior developer
Administrator / Project leader
During an average working day, how much time do you, as a programmer, spend
on:
Coding (percentage)
Project planning (percentage)
Other (percentage)
Appendix 2
Education
Respondents rated their education by category and by length in years
(e.g. 0.5–1 years, 1–2 years):
Aesthetics, Art and Music
Farming, Fishing, and Veterinarian
History, Religion and Philosophy
Physical Education, Sports and Outdoor Activities
Information Technology and Computer Science
Law, and Police education
Teacher education
Pedagogical education
Mathematics and Science education
Media-, Library- and Journalistic education
Medicine, Dentistry and Health and Social care
Tourism
Social Science and Psychology
Technology, (civil) engineering and architecture
Language and literature
Economy and administration
Other (please enter type and length)
[The rating columns of the original table could not be recovered from the
extracted text.]
Appendix 3
Risk propensity
Table 3

Item                                                            Factor loading
1. In order to save time when programming at work, I do              .786
   quick fixes to code, without a deeper understanding of
   the underlying faults.
2. When programming at work, I focus on speed over                   .607
   accuracy, since errors and faults will be detected and
   fixed later.
4. When programming at work, I have a sensation of                   .562
   boldness and wide impact on the system under
   development.
5. Close to shipping date, I fix as many faults as possible,         .510
   in order to provide a better software product for
   delivery to the customer, even when there is
   insufficient opportunity to regression test these fixes.
Eigenvalue                                                          1.563
Pct. of variance                                                   39.065
Coefficient alpha                                                    .453
Extraction method: Principal component analysis
N = 52
Items = 4
Appendix 4
Perceived uncertainty
Measured on a scale 1-7 to indicate degree of agreement with each item.
Table 1

                                                                  Component
Items                                                             1        2
Perceived task analyzability
1. I think I will be able to follow well-defined stages or       -.032     .908
   steps to solve this task.
2. I can solve this task using a methodology or a series of      -.091     .681
   steps that I have used on other occasions.
3. When I read the task description, a mental picture formed      .002     .861
   in my mind that will guide me while completing the task.
Perceived task variability
1. To what extent did you encounter problems you were             .889     .080
   unsure about while solving the task?
2. To what extent did you come up against unexpected              .894     .062
   factors while completing the task?
3. To what extent do you feel that your solution is different     .766    -.232
   from how you anticipated it to be before solving the task?
4. To what extent do you feel that your solution is               .820    -.105
   unstructured, hard to describe, or unclear?
5. To what extent did you find that it was difficult to           .920    -.066
   identify a solution to the task description?
Eigenvalues                                                      3.752    2.059
Pct. of variance                                                46.905   25.732
Coefficient alpha                                                 .902     .756
Extraction method: Principal component analysis
Rotation method: Varimax rotation with Kaiser Normalization, N = 94
Items: Perceived task analyzability = 3
Perceived task variability = 5
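The loadings in Table 1 come from a principal component analysis followed by a varimax rotation. For readers who want to reproduce this kind of analysis, the sketch below runs the same procedure on synthetic stand-in data; it is an illustration only. The item count (8) and sample size (N = 94) match the table, but the data are random, all variable names are our own, and the rotation is plain varimax without the Kaiser normalization used in the thesis.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix.

    Plain varimax; the thesis additionally applies Kaiser
    normalization, which is omitted here for brevity.
    """
    L = loadings
    n, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        d_old = d
        B = L @ R
        # SVD step of the standard varimax iteration
        U, S, Vt = np.linalg.svd(
            L.T @ (B ** 3 - B @ np.diag((B ** 2).sum(axis=0)) / n)
        )
        R = U @ Vt
        d = S.sum()
        if d_old != 0.0 and d / d_old < 1.0 + tol:
            break
    return L @ R

# Synthetic stand-in for the 8 questionnaire items, N = 94 respondents.
rng = np.random.default_rng(42)
X = rng.standard_normal((94, 8))
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each item

corr = np.corrcoef(X, rowvar=False)          # 8 x 8 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_components = 2                             # two components, as in Table 1
loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
rotated = varimax(loadings)                  # rotated loading matrix (8 x 2)
```

Because the rotation matrix is orthogonal, each item's communality (the row sum of squared loadings) is unchanged by the rotation; only the distribution of variance between the two components shifts.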
Perceived Task Analyzability
1. To what extent do the requirements reflect structured tasks?
2. To what extent do you feel that the requirements can be solved by use of a
certain method?
3. To what extent do you feel that there are fundamental similarities between the
responses to these requirements?
4. To what extent do you feel that you have a mental picture to guide you in
responding to the above requirements?
Perceived Task Variability
1. To what extent did you come across problems about which you were unsure
while responding to these requirements?
2. To what extent did you come up against unexpected factors in responding to the
above requirements?
3. To what extent do you feel that your solutions were vague and difficult to
anticipate?
4. To what extent do you feel that it is difficult to identify a solution to the
requirements?
5. To what extent did you find that it was difficult to identify a solution to the task
description?
Appendix 5
Overconfidence
1. How completely do you think you can solve the task (within the time limit)?
(In percentage)