THE SOCIAL AMPLIFICATION OF RISK
FRAMEWORK (SARF) IN AN ORGANIZATIONAL
ENVIRONMENT
Table of contents
Abstract
Introduction
Literature Review
    Defining risk and risk perception
        The technical approach
        The social/perceptual approach
        Bridging the gap
        Normal risks and extreme risks
        Defining risk perception
    From the psychometric study to the SARF: explaining the choice of framework
        The technical approach: real risk as basis for risk perception
        The psychometric model: from revealed to expressed preferences
        Cultural theory: a brilliant framework that can’t stand the trial of quantitative tests
        The social amplification of risk framework
    Risk, risk perception and Information Systems (IS)
        Risk as failure
        Risk as security
        Filling a gap in the literature
Methodology
Case Study
    Introduction
    The signal and the sources of information
        The signal
        The sources and channel of information
    The institutions’ influence
        The direct influence of institutions
        The influence of the signal through the intervention of institutions
        The size of the losses associated with a hazard is more important than its probability of occurrence
    The organizational biases
        The defence mechanism: the organization as a way to pass the blame and to manage risk perception
        The cultural span: the coming together of two different cultures
        The risk perception differential: Front Office and Middle Office
        Denial: This couldn’t happen to us
        The duality of the organization
    Personal biases
        Risks are increasing
        Risks are losses
        We can do less and less in order to lessen risks
Discussion, limitations and conclusion
References
Abstract
The literature in the field of Information Security is full of methodologies, good practices, to-do lists and theories designed to help professionals reduce risks, but it very rarely takes a step back to analyse what “risk” is per se and why different individuals, in the same situation, will perceive it in very different ways. We propose to use a framework taken from the sociology of risk, the Social Amplification of Risk Framework (SARF), and to apply it to an organizational environment through a case study. We conducted our analysis in a French bank and examined how it reacted to the dramatic but rare incident that hit one of its competitors, Société Générale, when a rogue trader lost 7 billion dollars on the stock market.
Introduction
“Can we know the risks we face, now or in the future? No we cannot; but yes, we must act as
if we do” (Douglas & Wildavsky, 1983, p. 1). What is true for individuals in their personal
relationship to risk is especially true in the workplace. Indeed, any risk professional has an
organizational pressure to succeed and has to make decisions in a complex and changing
environment. That is why the literature in the field of Information Systems (IS) risk is full of methodologies, good practices, to-do lists and theories designed to help professionals reduce risks. We do not question that most of these methods are effective, but we would like to stress that very rarely does the IS literature take a step back to analyse what “risk” is per se and why different individuals, in the same situation, will perceive it in very different ways.
We will define risk as “a situation or an event where something of human value (including
humans themselves) is at stake and where the outcome is uncertain” (Rosa, 2003); and, with
the help of a framework taken from the wide body of social analysis of risk perception, we
will try to shed some light on risk perception in the field of IS. We chose the Social
Amplification of Risk Framework (SARF) as our model for the comprehensiveness of its
analysis (R. E. Kasperson et al., 1988). What is more, we will apply the SARF in an
organizational environment and thus enrich this aspect of the framework with a practical
example from a case study.
The risk perception we want to analyse is that of an extreme event, which we define as one with a very low probability of occurrence but a very high, disastrous impact. We have chosen the IS risk of rogue trading as it occurred at Société Générale, where trader Jérôme Kerviel took positions that resulted in a 7 billion dollar loss, almost driving the company into bankruptcy. This dissertation examines how risk perception was altered following these events in another French bank that works closely with Société Générale. Using
the Social Amplification of Risk Framework (SARF) we will show how the media, the
people, the institutions and the bank itself, have resonated so as to amplify this risk signal and
the effects it generated.
Literature Review
This review is divided into four parts. The first briefly analyses how the concepts of “risk” and “risk perception” are defined in the literature. The second looks at the different frameworks in the field of risk perception and explains the choice of the SARF for our study. The third focuses on how risk and risk perception are treated in the field of Information Technology (IT). Finally, the fourth part explains in detail the purpose of our study and how we hope to help fill a gap in the literature.
Defining risk and risk perception
Even though there has been extensive literature on the subject of risk for decades, there is still much debate about what constitutes risk per se (Renn, 1998; Rosa, 2003) and sometimes “there is an intentional silence about defining risk at all” (Rosa, 2003). Indeed, most of the literature seems to consider that the term risk is known and therefore needs no definition. What is more, the field of risk is divided between the “technical” approach and the “social/perceptual” one (Tansey & O'Riordan, 1999).
After detailing both these perspectives we will show how it is possible and important to go
beyond this dichotomy by giving a definition that we will keep for the rest of our study. We
will then make an important analytical distinction between normal and extreme risks before
defining the risk perception concept.
The technical approach
Very positivist in nature, this approach considers that risk is intrinsically objective. It is therefore quantifiable and can support governments and professionals in decision making and risk assessment. Straub and Welke, for instance, define risk as “the uncertainty inherent in doing business; technically it is the probability associated with losses (or failure) of a system multiplied by the dollar loss if the risk is realized” (Straub & Welke, 1998). This well-known definition is also known as the “ALE” or Annual Loss Expectancy (Anderson, 2003; Baskerville, 1991; Campbell, 2006; Sjöberg, Moen, & Rundmo, 2004). However, this definition leaves out any human process in establishing what risk is.
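Straub and Welke's formula can be made concrete in a few lines. The sketch below is a minimal illustration of the ALE calculation; the function name and the probability and loss figures are our own invention for the example, not taken from any cited study.

```python
def annual_loss_expectancy(annual_probability: float, dollar_loss: float) -> float:
    """ALE = probability of the loss event occurring in a year
    multiplied by the dollar loss if the risk is realized."""
    return annual_probability * dollar_loss

# Hypothetical figures: a 2% yearly chance of an incident costing $500,000
# gives the same ALE as a one-in-a-million chance of a $10 billion catastrophe.
frequent_small = annual_loss_expectancy(0.02, 500_000)            # 10000.0
rare_extreme = annual_loss_expectancy(0.000001, 10_000_000_000)   # 10000.0
print(frequent_small, rare_extreme)
```

Note how the formula, by construction, treats a frequent small loss and a rare catastrophe as equivalent whenever their products coincide; this blindness to the scale of extreme events is one aspect of the purely technical approach that the perceptual literature takes issue with.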
The social/perceptual approach
Social theorists have since then challenged this characterization using two main arguments.
First, if risk is the “probability of harm”, the question becomes how to calculate this probability, and whether it is possible at all. To obtain a definitive answer, the calculation would require knowing all the dangers of a given situation, and thus having infinite knowledge about it, which is impossible. Second, since this is unfeasible, we need to select, discard and classify the hazards that we may face, which is a matter of personal opinion (Douglas & Wildavsky, 1983; Renn, 1998; Sjöberg, 2000). Building on this, Douglas and Wildavsky
(1983, p. 5) develop their cultural definition of risk as the fruit of our social interactions: “a joint product of knowledge about the future and consent about the most desired prospects”.
Bridging the gap
However, “as with most extreme positions, the objectivists and subjectivists view of risk,
taken separately, are poor descriptions of reality” (Rosa, 2003, p. 55). Campbell, for instance,
stresses the duality of risk. According to him, there is subjectivity in our appreciation of hazards: someone who likes rain, for instance, would not consider it a potential risk when going on vacation. However, there is objectivity in the ability to calculate the probability of occurrence of a subjectively chosen risk (Campbell, 2006). Other authors, like Loon, go even further by calling this a false debate: even if we consider that risks are not “real” but merely the subjective perception that some dangers may strike us, and even if they do not objectively constitute real threats, they still require action on our part and should thus be treated as real (Loon, 2002, p. 2). However, this view does not help in defining risk per se, and we prefer the definition given by Rosa and used by Sjöberg, according to which risk should be understood as a mix of both objective and subjective elements: “a situation or an event where something of human value (including humans themselves) is at stake and where the outcome is uncertain” (Rosa, 2003, p. 56; Sjöberg et al., 2004). Recent advances in cognitive psychology have found evidence backing this position. Authors have established that, when confronted with a situation, the human mind may use either an “analytical” system (i.e. algorithmic) or an “experiential” one based on feelings and affect (Slovic, Finucane, Peters, & MacGregor, 2004).
Normal risks and extreme risks
However, risks are not a unified category and can be divided into “normal” risks and “extreme” ones. A large body of the literature now focuses on what it calls extreme events (Basili, 2006; Bier, Haimes, Lambert, Matalas, & Zimmerman, 1999; Lambert, 1994; Olsen, Lambert, & Haimes, 1998). These differ from “normal” ones in that they are “disasters and catastrophes (seldom windfall gains) that are characterized by very small or ambiguous probabilities of occurrence” (Basili, 2006). They usually form when, in a complex situation, one causal element changes slightly but has a grave impact on others (Bier et al., 1999). Another way of looking at it is to say that, probabilistically, normal events follow a Gaussian curve whereas extreme ones follow a leptokurtic, or Lévy-type, curve. The latter has a higher and narrower peak than the Gaussian, with a “fat tail” that leaves more probability for extreme values to occur.
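The difference between the two tail behaviours can be illustrated numerically. As a rough sketch (using the standard Cauchy distribution as a stand-in for a heavy-tailed Lévy-type law, and the standard normal for the Gaussian; both choices are ours for illustration), we can compare the probability of an observation falling more than k standard units above the centre:

```python
import math

def normal_tail(k: float) -> float:
    """P(X > k) for a standard normal (Gaussian) variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def cauchy_tail(k: float) -> float:
    """P(X > k) for a standard Cauchy variable, a classic fat-tailed
    (Levy-stable) law often used to model extreme events."""
    return 0.5 - math.atan(k) / math.pi

for k in (1, 3, 5):
    print(f"k={k}: normal {normal_tail(k):.2e}  fat-tailed {cauchy_tail(k):.2e}")
```

At five standard units the Gaussian tail probability is below three in ten million, while the fat-tailed law still leaves over 6% of its mass out there: under a heavy-tailed model, extreme outcomes are not vanishingly rare.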
At this point, we wish to note that the term risk will be used in both its positive and negative aspects and that there is an intrinsic difference between normal risks and extreme risks. Firstly, in the economic understanding of the term, risk contains the possibility of a positive outcome. What is more, it may even be desired, as in the case of thrill seeking, as Rosa points out. This entails that we have to use the term risk in a neutral manner, to reflect an uncertain outcome, whether positive or negative (Renn, 1998). Secondly, from now on the terms hazards, dangers and threats will be used as synonyms for the term risk as defined above.
Defining risk perception
We now need to define the concept of risk perception and start with Beck’s view that, if we
consider that risks are entirely subjective and cultural in nature, risks and risk perceptions are
by definition the same (Campbell, 2006). However, we defined risk as being both objective
and subjective and the gap between the two is risk perception. Indeed, the subjectivity of risk
comes from the personal appreciation of the probability of the occurrence of a disaster, which
may differ from its actual objective probability of occurrence (Campbell, 2006). In other
words, “Risk perception is the subjective assessment of the probability of a specified type of
accident happening and how concerned we are with the consequences” (Sjöberg et al., 2004).
Kasperson et al. go slightly further by stating that this perception is a constructed
interpretation that is the result of a social process, the SARF process (J. X. Kasperson,
Kasperson, Pidgeon, & Slovic, 2003).
From the psychometric study to the SARF: explaining the choice of framework
Having defined what risk and risk perception are, we still need to explain how risk perception
is formed. The risk perception literature is dominated by three major theories: the
psychometric model, cultural theory and the social amplification of risk framework (SARF)
(Sjöberg, 2000; Wahlberg, 2001). After briefly describing the technical approach, we will
compare the positions of each one to stress why we chose SARF.
The technical approach: real risk as basis for risk perception
Technical risk assessment relies on the idea that people can have a risk perception that is
influenced by a real knowledge or experience of a specific hazard. As Renn puts it, it relies on
the idea of “past as a guidebook for the future”: information from past events can be used for
risk assessment and help predict future dangers, which is analytically poor as history never
repeats itself (Douglas & Wildavsky, 1983; Renn, 1998). What is more, as Sjöberg points out,
most of the time the studies showed an important gap between risk and its perception,
requiring the model to incorporate “heuristics” and “probability judgement bias” in order to
explain the differences. Moreover, this analysis made no distinction between risks to oneself and risks to other groups in society, even though the two can differ markedly (Sjöberg, 2000).
The psychometric model: from revealed to expressed preferences
The first comprehensive approach to risk perception dates back to the seminal work of Starr in
1969 (Fischhoff, Slovic, Lichtenstein, Read, & Combs, 1978; Slovic, 1987). Starr coined the
term “revealed preferences” that “assumes that, by trial and error, society has arrived at an
“essentially optimum” balance between the risks and benefits associated with any activity”
(Slovic, 1987). However, studies have shown that the way cost-benefit data is interpreted can
lead to very different results than the ones Starr found (Fischhoff et al., 1978; Sjöberg et al.,
2004).
That is why Fischhoff et al. established in 1978 the psychometric model based on “expressed
preferences”. The basis of this theory was to ask people to rate risks on different scales such
as New or Old, Voluntary or Involuntary. By applying statistical analysis, these authors have
shown, together with Slovic, that a matrix can be created using two axes (common-dread and
known-unknown) which can map risk perception (Fischhoff et al., 1978; Sjöberg, 2000;
Slovic, 1987). They thus point that “the higher a hazard scores on [the dread] factor, the
higher its perceived risk, the more people want to see its current risks reduced, and the more
they want to see strict regulation employed to achieve the desired reduction in risk” (Slovic,
1987).
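The statistical step these authors describe, reducing many rating scales to two underlying axes, is essentially a factor-analytic dimensionality reduction. The sketch below illustrates the idea with a principal component analysis on invented rating data; the hazards, scales and scores are hypothetical, and the actual studies used larger surveys and proper factor analysis:

```python
import numpy as np

# Hypothetical ratings (1-9): rows are hazards, columns are psychometric
# scales such as voluntariness, dread, novelty and controllability.
ratings = np.array([
    [1.0, 9.0, 8.0, 2.0],   # e.g. a dreaded, unfamiliar hazard
    [8.0, 2.0, 1.0, 8.0],   # e.g. a voluntary, familiar hazard
    [2.0, 8.0, 9.0, 1.0],
    [9.0, 1.0, 2.0, 9.0],
])

# Centre each scale, then project the hazards onto the two leading
# principal components: the analogue of the "dread" and "unknown" axes.
centred = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords = centred @ vt[:2].T
print(coords)   # each hazard now has a position on a two-axis map
```

The point of the sketch is only that correlated rating scales collapse onto a small number of axes, which is what lets the psychometric school draw its two-dimensional risk maps.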
This model has also come under heavy scrutiny. Sjöberg et al., for instance, consider that the theory rests on a major flaw: the data it uses are aggregated means, which artificially create correlations. When raw data are used instead, the model explains only 20 to 25% of the variance
(Sjöberg, 2000; Sjöberg et al., 2004). Another issue is that the methodology for this model
often relies on contrasting the subjective perceptions of laypersons with the supposedly objective ones of professionals. However, studies have shown that even the latter are biased (Sjöberg et
al., 2004). Moreover, as Douglas and Wildavsky point out, scientists often disagree as
“sometimes the data are inconclusive; sometimes their meaning changes radically according
to the state of theory” (Douglas & Wildavsky, 1983, p. 50).
Cultural theory: a brilliant framework that can’t stand the trial of quantitative tests
That is why Douglas and Wildavsky, in their book Risk and Culture, try to find a new way to
analyse risk perception. The authors start by stating that it is impossible for a human to be
able to assess all the risks he is facing because of his limited access to information and
computational power. Douglas and Wildavsky thus posit that, as we still need to define and
prepare for our risks in our day to day life, we select and discard hazards in bulk through the
cultural and social models we adhere to (Douglas & Wildavsky, 1983).
Mary Douglas then went further and stated that there are only four social models or “world
views” that coexist in our world and that each of them can be matched to a specific risk
portfolio of highly perceived hazards. Figure 1 below is the two-by-two matrix taken from her 1970 book detailing these world views. According to the theory, the individualists mostly value initiative in the marketplace and thus put greater emphasis on risks that can threaten the markets, such as wars. The egalitarians, putting the group ahead of personal interests, are more concerned about technology and how it can affect nature (e.g. nuclear plants). The hierarchists enjoy living in a society where each person has his or her place (e.g. a caste system) and are thus more afraid of any threat to law and order. Finally, the fatalists consider
that they can’t change any of the elements that can affect them. As such, they don’t focus on a
specific bundle of risks (Douglas, 1970; Thompson, Ellis, & Wildavsky, 1990).
                 Group −           Group +
Grid +           Fatalists         Hierarchists
Grid −           Individualists    Egalitarians

Figure 1 Based on (Douglas, 1970)
However robust this theory may seem a priori, it has consistently failed empirical testing. Sjöberg harshly remarks that “Cultural theory is largely an example of the persuasive power of speculation” (Sjöberg, 2000). Indeed, numerous studies have shown that, while the four cultural biases can be statistically shown to exist, they explain at most 10% of risk perception (Brenot, Bonnefous, & Marris, 1998; Marris, Langford, & O'Riordan, 1998; Sjöberg, 1998). Finally, as Sjöberg states, fine-tuning the questionnaires brought only marginal improvements (Sjöberg, 2000).
The social amplification of risk framework
A paradigm introduced in 1988 by Kasperson et al., and reassessed since then, has tried to bring the risk perception literature together: the SARF (J. X. Kasperson et al., 2003; R. E. Kasperson & Kasperson, 1996; R. E. Kasperson et al., 1988). This framework was designed to be a sort of meta-framework, capable of serving as a basis for most of the social theories of risk perception, drawing “from media research; from the psychometric and cultural schools of risk perception research; and from studies of organizational responses to risk” (J. X. Kasperson et al., 2003, p. 13). At the same time, the framework tries to go further than simply acknowledging that heuristics and biases exist, aiming to fully explain why some objectively minor risks can be completely over-estimated by the public.
Social amplification, in this theory, is defined as “the phenomenon by which information
processes, institutional structures, social-group behaviour, and individual responses shape the
social experience of risk, thereby contributing to risk consequences” (R. E. Kasperson et al.,
1988). As detailed in Figure 2, the basis of this whole phenomenon is a “signal” that can come from different sources and flow through different channels (e.g. personal networks). The way people perceive this hazard can then be amplified or attenuated by social stations (e.g. the media) and individual stations (e.g. heuristics), which shape behaviour in accordance with one’s institutional group. Once a perception of risk has been formed, it can have “ripple effects”: secondary, unplanned consequences of the hazard that create impacts at different levels.
Figure 2 Taken from (R. E. Kasperson & Kasperson, 1996)
We chose this framework because the literature has shown that most existing theories have some explanatory power, and the SARF, by bringing them together, combines them into one very robust theory.
Risk, risk perception and Information Systems (IS)
The field of risk perception in Information Systems is mostly governed by technical approaches. Indeed, as Orlikowski & Iacono's study of the IS literature showed, almost 70% of the literature carried a positivist bias (2001). We will analyse two technical approaches to risk, risk as failure and risk as security flaw, to show that a gap exists in the literature for our study.
Risk as failure
An important part of the IS literature focuses on the risk of failure that an IT artefact can face.
The term failure here can have very different meanings. Some authors define it as the pure
abandonment of a project (M. Keil, 1995), others as a project that goes over time and over
budget (McManus & Wood-Harper, 2007) and others as a project that is finalised but very
seldom used, if at all (K. Lyytinen, 1988).
Most of these studies integrate the issue of risk perception as one of the reasons why projects
may fail. Indeed the functionalist approach of most of the literature aims at finding critical
success and failure factors to be integrated or taken into consideration in risk management
strategies (M. Keil, 1995; Mark Keil, Li, Mathiassen, & Zheng, 2008; Mark Keil, Tiwana, &
Bush, 2002; Mark Keil, Wallace, Turk, Dixon-Randall, & Nulden, 2000; Kalle Lyytinen,
Mathiassen, & Ropponen, 1998; McManus & Wood-Harper, 2007).
Examples of such perceptions or biases include the belief that, whatever a project costs, it will bring a large payoff once implemented, or that someone who has already succeeded in a previous project has no reason to fail in a new one (M. Keil, 1995). Other studies have shown that end
users and developers do not have the same sensitivity to what could put a project in peril, thus
creating conflicts between them, or that using checklists of potential hazards may influence
risk perception (Mark Keil et al., 2008; Mark Keil et al., 2002).
Finally, part of the literature is interested in the behavioural elements that can influence risk
perception. Keil et al. for instance analyse the influence of risk propensity and the influence of
the size of a potential loss on risk perception. Risk propensity here is the general inherent bias
towards either taking risk or avoiding it (i.e. risk aversion) in any of us. What they show is
that risk perception has more influence than risk propensity on decision making but also that
the extent of a potential hazard has more importance than its probability of occurrence (Mark
Keil et al., 2000). Lyytinen et al. have also acknowledged that professionals are mostly loss-averse in their approach to risk (Kalle Lyytinen et al., 1998).
Risk as security
The other part of the risk literature is aimed at understanding how professionals can defend
themselves from security hazards. Firstly, risk is here understood as any element that can
affect the Confidentiality, Integrity or Availability (CIA) of information (Anderson, 2003).
Secondly, authors note that, while security risks can be underestimated by IS professionals (D. Loch, 1992; Straub & Welke, 1998), it is mostly top management that is ill informed about the real security risks (Solms & Solms, 2005). Solms & Solms even argue that information security should be renamed business security, since information has become the most important asset of any company and the new name would shed more light on its intrinsic importance.
Most of the literature aims at helping professionals prevent risks from materialising. Some authors explain how to adopt security strategies such as the General Deterrence Theory or risk analysis and how these can effectively help reduce attacks and frauds (Baskerville, 1991; Straub & Welke, 1998). Others focus on how to choose and implement the proper codes of practice or accreditations (Eloff & Solms, 2000a, 2000b; Fung, Farn, & Abe, 2003). But very little is aimed at understanding risk perception.
However, a few explanations as to why dealing with security issues is also a matter of
perception are present in the literature. Stewart, for instance, uses the risk compensation
theory that states that “after safety measures have been introduced the level of risk is reasserted at the level with which the subject is usually content” (Stewart, 2004). That is to say
that if someone already considers the level of safety of an item sufficient but is forced to improve it anyway, whatever he does will come at the detriment of something else, yielding a null overall impact. That is why, according to the author, the only way for professionals to
be able to carry out their plans is to institutionalise a higher level of risk perception. Another
theory considers that professionals may believe that they have an important level of control
over security, thus decreasing their risk perception (D. Loch, 1992). A final hypothesis is that professionals' past experience (i.e. how often they have had to face a hazard) may considerably influence their risk perception (Straub & Welke, 1998).
Filling a gap in the literature
After reviewing the IT literature on risk, we have to stress that there is very little
consideration for risk perception. Most of the authors have a technical approach and try to
find elements that could help professionals in making decisions. But how the same
professionals could be affected by biases or how their perception of risk comes to be formed
is very rarely analysed. That is why we wish to use a framework taken from social studies, the SARF, to analyse this point in more detail. However, as this framework is extremely broad, we will examine how one specific IS security incident is interpreted in an organizational environment.
As Kasperson et al. (2003) and Freudenburg (2003) point out, there is only little literature analysing the impact of organizational amplification, especially in conjunction with the SARF. According to what does exist, however, there are short and long term factors that can affect an organisation. The short term ones are an organization’s resistance to change due to its inner culture, the idea that risks “may not happen” to the company, issues of shared understanding, pressure for results, or even a lack of concern for risk management. The long term ones are a mix of complacency (i.e. the drop in concern about a risk over time) and cost control (i.e. periodic needs to reduce costs), especially when safety measures are seen as costly and non-productive (Freudenburg, 1992, 2003).
We will therefore use the SARF to analyse how an organization can make sense of and amplify a risk signal. To do so, we have focused on a singular event that occurred in France in January 2008, when a rogue trader was discovered in one of France’s leading banks. We will try to analyse how a competing bank was affected by this risk signal and how it evaluated it.
Methodology
In order to achieve our goal of an in-depth analysis of a specific occurrence of a SARF process, with particular emphasis on organizational amplification, we have chosen to conduct a case study. As Yin (1981) explains, a case study is nothing more than a research strategy that can contain different types of evidence and data collecting mechanisms. As such, we have chosen to conduct an interpretive study (Walsham, 1995) using qualitative data as our main source.
Moreover, as our framework has been designed to be usable across very different approaches, it can accommodate a case study analysis. In their summary of the past literature on the SARF, the authors of “The social amplification of risk” even state that their use of qualitative case studies has helped them further develop their theory (J. X. Kasperson et al., 2003).
We collected data from the end of May to mid-July 2008 in a French bank that works closely with Société Générale. A meeting with the head of the risk department allowed us to gain entry for our research. Following this first interview, 18 one-to-one semi-structured interviews were conducted in different branches and entities of the bank and at different levels of seniority. Most of the interviews lasted more than one hour. The list of questions was created in accordance with the SARF model. All participants were asked to rank the risks they considered the bank was facing, to explain how they came to that ranking, to describe where their information was coming from, to describe their relations with other organizational members, to detail how institutions such as the Banque de France or the Basel II committee affected their work and themselves, and finally to say whether or not they considered themselves risk averse. Other documents, such as internal communications about risks and intranet documentation, were obtained in the field.
We analysed the data by going back and forth between the theory and the narratives we collected in the field. Indeed, according to Walsham (1995), there are three ways of using a theory in a case study: as an “initial guide for design and data collection”, as part of an “iterative process of data collection and analysis”, or as the “product of the research”. Since we want to extend the SARF to Information Systems research, we will not use the third option. Between the two remaining options, Walsham advises using the second, since using theory merely as an initial guide may unduly influence the results of the study. That is why, for each important theme of our analysis, we will present our narrative before analysing it.
Case Study
Introduction
For confidentiality purposes, the company where we conducted our case study will be called either “the bank”, “the firm”, “the company” or “the group”. The firm is a cooperative bank that was created almost a century and a half ago. It is divided into 39 independent bodies spread across the whole of France. Together, these entities own over 50% of an umbrella corporation that opened up its capital in the last ten years. Thanks to this new inflow of cash, the company was recently able to buy one of its competitors, one especially strong in finance and widely developed internationally. This bank is one of the leaders in its field and, in its daily trading and lending activities, has to deal with Société Générale (SG).
What happened at Société Générale was explained on the 24th of January by Daniel Bouton, its Chairman and CEO, when he made a public announcement that “Someone has built a company inside the company and has hidden his positions. He has worked in the group’s back office for a few years where he learned how to bypass the control mechanisms” (Paul, 2008). One “rogue trader”, Jerome Kerviel, despite security mechanisms and back office and middle office scrutiny, was able to buy 77 billion dollars’ worth of shares while working for Société Générale, whose market capitalisation was approximately 52 billion dollars. When the bank realised this and sold all of the assets, it lost 7 billion dollars in just three days. The official report, issued by the bank in May 2008 (General inspection department, 2008), in fact reveals a mix of Information System (IS) security bypasses, the creation of fake e-mails, help from middle office personnel and a lack of strong supervision.
Our analysis of the perception of the Société Générale fraud, building on the SARF, will be divided into five main parts. The first part will analyse the signal and the sources of information
about the fraud. The second part will study the different institutions’ influence on risk
amplification. The third part will examine how the organization was able to cope with this
information. The fourth part will consider the personal biases that stood out during the case
study. The fifth and final part will briefly detail the ripple effects and consequences of the
Société Générale scandal at different levels.
We have to stress that the purpose of our study is mainly to analyse the organizational
amplification station of the SARF. However, as the framework revolves around the
interaction between all of the elements mentioned above, we consider that we have to address
them in order to make our analysis complete.
The signal and the sources of information
To show that the general media had a very strong influence on the employees of the company, we will analyse the signal and the sources of information they were confronted with. Our analysis rests on two elements. First, we will consider the number of articles on the Société Générale scandal in France’s most important newspapers since the story broke in January of this year. Second, we will examine how the interviewees present the matter, as well as the sources of information they claim to use.
The signal
The information about the “French rogue trader” was widely publicised in the media, thus creating a strong risk signal. The information was made public on January 24th of this year, when the Chairman and CEO of Société Générale (SG) issued a communiqué explaining the fraud. Since then, the most important French newspapers have written extensively on the subject, detailing every new aspect of the scandal. For instance, “Le Monde” published 182 articles, “Le Figaro” 216 and “Les Echos” 238 articles1. Outside France, newspapers like the “Financial Times” also wrote 290 articles. In addition, there was extensive media coverage on television and over the internet, as well as a report issued by the bank itself giving its own account of the whole situation: the “green commission” report (General inspection department, 2008).
The effects of this media coverage were felt during our interviews. First of all, 15 of our interviewees brought up the subject of SG before we even mentioned it. The terms they used were sometimes very extreme, like “What happened at SG was a true catastrophe” or “What happened at SG had an AZF effect (a plant that exploded due to negligence in the south of France, killing 30 people and injuring thousands) on negligence and poor controls”. The most blatant example comes from the head of the Interest Rate Derivatives Department, who said the following when asked to list and rank the risks she considered the bank was facing:
1 Information taken from the websites of these different media on August 22nd, 2008
“According to me the number one risk that we are facing right now is that of market risk. The number two is the one of counterparty failure. The number three is that of operational risk. The number four risk is the Société Générale type of risk or, if you want, the risk of fraud.”
Our analysis is that the interviewees identified with the issue at an almost personal level. First of all, the Information Systems used in banks around the world are roughly the same, making it conceivable that their own bank could be affected too. Secondly, the bank had itself been confronted with a fraud of this sort, though to a lesser degree, when a trader was able to take risky positions and lost 300 million dollars in a week. That is why people compared the two events during our discussions: “what happened in [the bank] is different as everyone was aware of what was happening” or “what happened in [the firm] was due to non-finalised IS more than the effects of the subprime crisis”. In any case, all of the actors had clearly been flooded with information, and the overlap of information made for a very strong signal.
The sources and channel of information
In this part we will show that the actors of the bank insisted that they were not influenced by what was said in the mass media, and we will offer our analysis of that claim. Indeed, when asked what their primary source of information was, the most frequently quoted answers were personal experience, talking with co-workers or personnel from other banks, using statistics and benchmarking, and specialised media. When directly asked about the mass media’s influence, all of the actors denied being influenced by it. Some interviewees even criticised and dismissed these channels of information on the grounds that “the media is only talking about internet risk [...] sometimes it can even be manipulated by lobbies like in the case of the blackberry”.
However, we can show that this displayed position hides the real influence of the media. Indeed, only two of the interviewees had read the official report issued by SG, yet all of them talked about the events in detail, often mistakenly, and contradicted each other. They thus mostly built their opinion about the SG debacle by recollecting information from different media sources, mainly the mass media. Indeed, the analysis of what truly happened constantly evolved as time went by and new pieces of information were discovered.
Our understanding is that people are under a more mixed influence of the media, inter-bank contacts and inter-personal relationships than they claim. Moreover, this event took on a different meaning for them as they usually compared it to the small case of fraud that happened in their own bank. Finally, we consider that people deny the media’s influence because admitting it might make them look as though they give too much credit to other people’s mere opinions, whereas emphasizing their personal experience and statistics makes them look more objective and analytical.
The institutions’ influence
On top of this already strong signal, institutions such as the Basel II Committee, the Banque de France and the Comité de la Réglementation Bancaire et Financière (CRBF), its regulating body, have greatly amplified the actors’ perception of risk. First, we will analyse the direct influence these bodies have on the actors. Second, we will discuss how the risk signal initiated by the SG fraud affected these institutions, which in return put even more pressure on the bank’s employees. Finally, we will show how the Basel II model of calculating risk is outweighed by the potential for loss in people’s minds.
The direct influence of institutions
Institutions such as the Basel II Committee and the Banque de France carry great weight with the persons we interviewed, as they establish many of the rules that the company has to abide by. Indeed, virtually all the interviewees identified one or both of these institutions as having an impact on their activity. The Basel II Committee, in defining the set of actions banks have to accomplish, has put an emphasis on operational risk (Bank for International Settlements, 2006). That is why all of the “three pillars” defined by the Committee in order to curb risk contain a section about operational risks and how they should be dealt with.
The influence of an institution such as Basel II is so significant that some of the actors define the way they deal with risk through its framework. One interviewee working in the Information Security department even stated “we don’t have a methodology to list our risks per se but in practice we follow the Basel II principles”. Another explained:
“In my previous bank at some point we only implemented the controls that the Banque
de France required from us. In fact we were doing the bare minimum, sometimes even
less than that”.
So it comes as no surprise that most of the bank’s personnel consider that what these institutions have brought is “a greater emphasis on operational risk”. As such, what happened at SG, seen through the practice lens of these authorities, takes on even greater importance. Indeed, the company is nowadays focusing more and more on its operational risks, under the pressure of very powerful institutions, after one of the worst cases of fraud occurred. This pressure is very much felt in day-to-day activity due to regular inspections: “A year ago the regulator inspected our company to see if we were following the Basel II principles adequately”. What is more, by building a common methodology for the whole industry, Basel II has de facto standardised the way risk is handled in a bank. As such, the way risks are dealt with at SG is roughly the same as in the firm we study, and actors may consider that what happened there could happen here. These three elements together may explain the greater magnitude of risk perception we have seen earlier.
The influence of the signal through the intervention of institutions
Since the fraud at SG, the Banque de France and the CRBF have put a lot of strain on banks in order to make sure that the same type of fraud would not happen in their own institutions. The Banque de France issued a statement underlining the faulty controls and Information Systems in the bank. The result was a fine of 4 million Euros for SG (Textes officiels de la Commission bancaire, 2008). Moreover, the institution addressed very specific enquiries to most of the French banks in the sector:
“The Banque de France, after the SG debacle, has put a lot of pressure on the banks,
including ours, in order to obtain a consolidated view of our activities. Since then we
have established, together with two other banks including SG, a series of new
scenarios that will have to be validated by the regulator.”
Our understanding is that the risk signal had a cumulative effect: it affected the actors directly, but also indirectly through the institutions’ reactions to it. Indeed, on top of the already very vivid risk signal sent by the fraud and its loss of billions, the institutions tried to play their part in securing the banking market, thus adding to the actors’ fear. What is more, the institutions’ reactions may have been all the greater because of the importance the whole situation took on in the media. A paramount point was reached when the French President demanded that Daniel Bouton resign from the head of SG. Figure 3 shows in a basic way how we see this vicious circle unfolding.
[Figure: a feedback loop linking the institutions, the risk signal and the actors]
Figure 3: The influence of the signal through the intervention of institutions
The size of the losses associated with a hazard matters more than its probability of occurrence
What happened in the company provides empirical support for the theory that the extent of a potential hazard matters more than its probability of occurrence. Indeed, the Basel II model requires companies to analyse 25 years of data in order to establish statistics about the operational hazards they face. And, as one member of the risk department points out: “An analysis of 25 years of statistics of trading losses has shown that they actually have a very low probability of occurrence”. The same person later says that the losses SG had to face were tremendous, “equalling the profits of a company like Renault and other firms, only very few companies could have survived such a blow”.
This shows that, in risk perception, the prospect of loss matters more than the probability of occurrence. Indeed, we have here a blatant example of a person working in the risk department who, using statistical data and applying the Basel II methodology, found that the level of risk was extremely low, almost insignificant. However, he then contradicts this assessment because of the sheer size of the potential loss that a single one of these extreme events could entail. We propose to call this a Lévy effect, after the heavy-tailed probability laws proposed by Paul Lévy, which give far more weight to extreme events than the Gaussian law does.
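The contrast between a thin-tailed and a heavy-tailed view of the same extreme loss can be sketched numerically. The snippet below is purely illustrative and is not the bank’s Basel II model: it uses the standard Cauchy distribution, the α=1 case of Lévy’s stable laws, as a tractable stand-in for a heavy-tailed law, and compares its tail probability with the Gaussian one for an event six “typical deviations” out.

```python
import math

def gaussian_tail(x: float) -> float:
    """P(X > x) for a standard normal (Gaussian) variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x: float) -> float:
    """P(X > x) for a standard Cauchy variable (the alpha=1 Lévy-stable law)."""
    return 0.5 - math.atan(x) / math.pi

# A loss six "typical deviations" out: negligible under the Gaussian,
# far from negligible under the heavy-tailed law.
x = 6.0
print(f"Gaussian tail probability: {gaussian_tail(x):.2e}")  # on the order of 1e-9
print(f"Cauchy tail probability:   {cauchy_tail(x):.2e}")    # on the order of 5e-2
```

Under the Gaussian model such an event is “almost insignificant”, while under the heavy-tailed model it retains real weight, which is the intuition behind the Lévy effect proposed above.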
The organizational biases
In this part we will analyse how, after being amplified by the media and the institutions, the rogue trader risk was confronted with the organizational station. Firstly, we will stress how the organization was used as a way of passing on blame and responsibility in order to lower the perception that people might be affected, almost as a defence mechanism. Secondly, we will analyse how the specific nature of the company and the coming together of two very different cultures affect risk perception, through what we call a cultural span. Thirdly, we will examine how the Front Office (FO) and the Middle Office (MO) are in an ongoing conflict that we will call the risk perception differential. Fourthly, we will consider how a sense of “this could not happen to us” was present among some specific actors engaging in a sort of denial. And, fifthly, we will explore how the organization is dual in the way it amplifies or attenuates risk.
The defence mechanism: the organization as a way to pass the blame and to manage risk
perception
As we have seen, under the influence of different media and institutions, most of the actors of the firm perceived the risk of fraud to be extremely important. We will now show that, in order to cope with that pressure, they used the organization as a way to pass on the blame. Indeed, as what happened at SG was a mix of the ongoing subprime crisis, Information Security issues and management issues, every actor is able to absolve himself of having to bear the potential risk.
From the beginning of the scandal, the blame was put on the information system. Indeed, trader Jerome Kerviel was said to have bypassed the security mechanisms enforced in the bank, so it would seem that the IS should carry part of the responsibility. However, for the actors in the security department, what happened at SG is not an Information Security issue but a management one, as “one of the functions of the head of a department is IT risk which they forget a bit too often”. On the other hand, “An IT manager is not responsible if one of his employees loses control”. As the head of the Information Security department puts it:
“Our third likely scenario today is the one of internal fraud made easier by too lax
controls. However, when collusions between two employees take place, this is not our
matter anymore. For instance JK [the rogue trader] was able to use the password of a
collaborator which helped him bypass a system. And as we have agreed with other
CISOs, what happened at SG is not an Information Security issue.”
The view of the IT risk department is thus that, because the Front Office is in charge of the risk it is willing to take and because collusion was made possible at SG by a lack of proper supervision, they are not responsible but managers are. And, for the IT personnel (i.e. non-risk IT personnel), the failure is a question of Information Security and they “don’t deal with information security, the CISO does”.
For the department of financial risks, even if they admit that such a danger was extreme and as such crossed the boundaries between departments, it is an operational risk since “The CRBF has chosen to qualify what happened at SG as an operational hazard only”. Here the actors resort to the decision of a superior body as an appeal to authority: if the Banque de France itself classified the SG scandal as an operational risk, there should be no debate about it. Finally, for the managers, the issue is that “there clearly was an IS failure” and that, anyway, they too do not have to deal with it: “I don't deal with the risk tools, I am not a risk specialist and to be honest it is not my cup of tea”. Thus most of the actors take part in a blame game, while only the operational risk manager has to acknowledge that this type of risk falls within his personal domain.
One could argue that our view is too simplistic and that we are criticising the way the bank deals with its risk in a silo fashion. However, we would like to stress two elements. First of all, because what happened at Société Générale is an extremely rare incident, it suggests that the way most banks deal with their risk is effective. Moreover, the official report from SG states that, taken individually, each risk mitigation technique could have stopped the hazard from happening (General inspection department, 2008). Thus the silo mechanism that is de facto in place is not being criticised here at all. What we are merely saying is that it can be used in a risk perception process so as to decrease the individual’s perception of risk by placing it on other departments and thus on the company as a whole. It is similar to saying that there is a natural defence mechanism whereby the more people are confronted with an issue, the less each of them feels concerned by it at an individual level.
The cultural span: the coming together of two different cultures
Every company forges its own cultural span, determining for instance whether it should be more or less open to risk taking, to instability or to capitalist attitudes. In the case of the company, this span was especially wide, with two very different approaches to how, at the extremes, rogue trading risk may be perceived. Indeed, as we noted in the introduction to our case study, the bank recently bought one of its former competitors, and the two are very different in nature. One is owned by a network of cooperative banks, more focused on retail and development; the other is very capitalistic in nature, more geared towards trading. One is big; the other feels that taking more financial risks will avoid or at least postpone the risk of any takeover. Thus the perception of rogue trading varies a lot from one business unit to the other, all the more so as they are quite separate. For the big bank, rogue trading is proof of the adverse territory in which it operates; for the other, it is a sort of “price to pay” to remain independent and increase its value. Today the big one has bought the capitalist one and, even if the company’s cultural span may narrow with time, it is still quite wide as the merger is so recent.
During the interviews the manifestation of this cultural span was quite clear. The capitalist view was expressed as follows: “There are some activities [i.e. trading] either you do them or you don’t. But if you want to start doing them you need a long term vision and shouldn’t change your strategy every two years”. Another actor stressed that “If today people are yelling about what happened at SG, one has to remind them that shareholders would have yelled even more if no risk had ever been taken by the bank”. On the opposite side is a sort of “cooperative view” holding that “There is a trader evil that consists in giving huge amounts of money to people who don’t have anything to lose. This is the result of the Anglo-Saxon push to make a lot of fast profits”. Another way this mentality looks at it is by emphasizing the losses: “People have to acknowledge that the 300 million Euros that were lost in our bank represent one year of profits of one of our independent bodies”.
Our understanding is that this pits against each other two different visions of trading, each carrying a different bias regarding fraud risk. On one side is the capitalistic view, which considers that risk is a necessary evil without which there is no profit. This part of the company is thus more lenient towards the SG risks. On the other side is the cooperative view, which considers that the risks are far too great, placed in the hands of overly powerful people who are not supervised enough. The latter is thus more concerned about internal fraud risks. This explains why there is not a unique cultural span or feeling of unity, which may in turn explain the propensity to pass the blame from one unit to another as well as from one subsidiary to another.
The risk perception differential: Front Office and Middle Office
As “the regulator forces [banks] to have independent Front Offices (FO) and Middle Offices (MO)”, it has created a binary system that may push actors towards the extremes rather than towards consensus. The Front Office is usually more prone to risk; for instance, one actor stated that “the bank’s culture and the way it is behaving right now regarding trading risk is excessive” and that “it is hard to establish controls as they slow down the activity”.
On the other side, the MO considers that “It is the FO that makes the final decision as far as risks are concerned and if they wish to go against the MO they can only blame themselves if something happens”. They stress the fact that they are usually underestimated: “The FO is at the top and the MO is down at the bottom. Fewer and fewer traders want to go to the MO as this move is very badly regarded in the banking industry today”. More importantly, the MO usually considers that it is more often right than the FO: “People in the MO department usually have a better version of the truth”. This usually relies on the assumption that “the FO is usually more irrational than the MO as they rely mostly on intuition”.
That is why we consider that the very nature of the relationship between these two bodies may lead to growing antagonism, pushing them towards two extreme risk perceptions. Moreover, the MO feels increasingly underestimated and treated as a profit-stopping mechanism by the FO, while at the same time believing itself more often right. With time, it would thus seem that, on one side, the Front Office would become more and more risk taking whereas, on the other side, the Middle Office would become more and more risk-averse, thus widening the risk perception differential between the two.
Denial: This couldn’t happen to us
As we have seen in the literature, one of the most dangerous consequences of organizational amplification of risk perception is the idea that a hazard would not actually happen to one’s own company, an attitude we encountered on many occasions. Some criticised the quality of SG’s IT systems: “At SG the systems were not robust enough for the market operations. Sometimes the traders were forced to use Excel to make some calculations, making it impossible for the back office to check on them”. Some actors distanced themselves from SG by analysing its corporate culture: “There are factors that are specific to SG that make me think that this couldn't happen to us, at least to such a level. For instance this bank gave an incredible weight to their BFI [Banque de Financement et d’Investissement in French, Finance and Investment Bank in English] and the risk staff was filled with very young personnel”. Others stressed what makes the bank better: “Our analysis showed that we do not have a star system on the trading floor that would allow for an individual such as Jerome Kerviel [the rogue trader] to go unnoticed”.
Our understanding is not that the actors truly believe that it could not happen to them, but rather that they want to believe it to be true. Undeniably, if they were totally confident in this perception, they would not emphasize the actual fraud risks the bank is facing so much, and this confidence would not be congruent with the level of risk perception we have witnessed so far. That is why our interpretation is that the employees of the bank use this argument of “we are different” as another defence mechanism, trying to calm through denial the anxiety around their level of risk perception.
The duality of the organization
To conclude on the organizational amplification process, we wish to explore the duality of the organization between what occurs within its boundaries and its relation to the outside world. Indeed, as we have shown, there is not a unique enterprise culture but rather a cultural span that is, in our case, rather wide due to the recent merger. What is more, each Business Unit (BU) tends to create its own specific narrative about risks, in which it blames the other units for the risk responsibility. However, out of this inner disorder, a sense of unity emerges at the company level. In systems thinking, this could be explained by the phenomenon of emergence from one level to another. Here, at company level, there is a sense of unity, of “us” as opposed to “them”, the opponents.
We claim that both of these processes act as risk attenuation mechanisms, making the organization, as a whole, an even greater attenuator. For a person alone, the risk signal may be overwhelming to deal with, but the organization allows the risk to be divided among all of its actors, thus reducing each individual’s risk perception by transferring it onto the whole. What is more, this whole entity that the organization forms at a superior level is considered to be more robust than the sum of its parts, and thus than its opponents. This is what we call the duality of the organization, where both effects actively reduce risk perception.
Personal biases
In this part we analyse the institutional groups and personal behaviours to show that three important types of biases arose during the case study: the idea that risks are increasing over time, the perception that risks are only negative outcomes or losses, and the feeling that trying to tackle risks is a lost battle. We will now analyse these three elements in more detail.
Risks are increasing
Almost unanimously, the interviewees told us that risks are increasing. The main reasons given by the actors were the new world of interdependencies (e.g. between different banks, or with outsourcing companies), the arrival of new technologies and an increase in general complexity. One person from the IT department stated, for instance:
“Nowadays risks are increasing because of a frenetic will to develop networks,
because more and more hacking methods are well-known but also because IT people
usually have the root passwords. The myth of total security is now gone”
Another aspect of the increase in risk is the comparison with a cops-and-robbers situation: “The issue of the fraud at SG is the case of cops and robbers with the latter being almost always more creative than the former”. The idea is that the robber will always be one step ahead of the policeman, who will be lagging behind.
We see here a vivid example of what Beck described in his book “Risk Society”. Indeed, most of the actors focus on risks that arise from modernisation: a rogue-trader incident is now attributed to an IS failure, the price of modernity. What is more, as Beck explains, the new feature of risks is that they are man-made, as illustrated here by human ingenuity’s capacity to bypass a security system. And we can see that, as the author forecast, the organization is structuring itself more and more around risk, at a personal level as most of the actors do, but also at an institutional level with the rising importance of Basel II (Beck, 1992).
Risks are losses
All of the actors except one define risk, as the literature stressed, exclusively in terms of losses. For instance, one interviewee said that “we have to evaluate how much we would lose if we didn’t do anything about risks”. Others stressed that “risks are mostly perceived through the losses they cause and analysed a posteriori” or that “we have to differentiate a risk and the damages it can cause”. One other actor put it bluntly: “Nobody gives a damn about the Information System risks per se. The true issues are the effects of the IS on the bank’s activity”.
Our view is that two elements influence the perception of risk as only a probability of loss and not a probability of gain. The first is the influence of the institutions that require banks to calculate their risks in terms of how much loss they could represent. The idea behind this is that one can then evaluate how much capital a bank needs in order to cope with any hazard actually occurring. Banks therefore need to think in terms of potential losses for their supervising institutions, and this permeates how people perceive risk. Second, the term risk itself has apparently shifted in meaning in the vernacular and no longer covers both possibilities, good or bad, but only the level of dread of a situation. Another way of looking at it is by analysing the opposite: a profit is not considered a risk at all. It is a bonus, an advantageous possibility, something intrinsically beneficial, contrary to a risk.
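The loss-only framing that the institutions impose can be made concrete with a small numerical sketch. The loss model, the parameter values and the 99.9% quantile below are illustrative assumptions of ours, not the bank’s actual Basel II parameters; the point is only that a capital buffer is sized against rare large losses rather than against the average loss.

```python
import random

random.seed(42)

# Hypothetical yearly operational losses (in millions of euros),
# drawn from a skewed, heavy-tailed lognormal distribution.
losses = [random.lognormvariate(1.0, 1.5) for _ in range(10_000)]

# The average loss is what a naive view would budget for.
expected_loss = sum(losses) / len(losses)

# The buffer, however, is sized against a high quantile of the loss
# distribution: rare but huge losses drive the capital requirement.
quantile_999 = sorted(losses)[int(0.999 * len(losses))]

print(f"Expected loss:    {expected_loss:8.1f} M EUR")
print(f"99.9% loss level: {quantile_999:8.1f} M EUR")
```

Under any heavy-tailed loss model the high quantile sits far above the mean, which is why thinking in terms of potential losses, as the supervising institutions demand, naturally focuses attention on the worst cases.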
We can do less and less to lessen risks
Another widespread bias is a feeling of helplessness regarding the new technological risks that the company is facing. One element, noted many times, is the increased complexity of the technologies in use:
“Payment used to be done on a single server. Today it is made out of 3,000 different Excel sheets. The feeling of control has vanished. All the CISOs have come to the same acknowledgement that they are powerless.”
“Our tests can’t be done live and are carried out during week-ends and we thus don’t have time to test everything. [...] For a long time IT consisted of a simple mainframe but now servers are made out of 10,000 units used to make calculations. [...] What happened at SG is only an example of a new type of risk”
The other aspect is the idea that there is no limit to the harm humans can cause: “Disasters
such as what happened at SG will never be fully mastered because of the human factor”. There is
also a feeling of abandonment by a technology that was once seen as a way to reduce risk and
now seems to have turned against its creators: “The idea that IT allows us to master risks was a
false assurance. There is no guarantee in putting everything through IT”.
Our understanding is that this view is, once again, congruent with Beck’s vision of the risk
society discussed above. What is more, the hope once placed in IT has now swung to the opposite
extreme of despair: new technologies do not make risk disappear but merely transform it into
another type of hazard. Finally, we see here the actors’ last natural line of self-defence
before having to question themselves at a deeper level. By acknowledging that some elements
will never be mastered, they leave room for future disasters and mistakes. At a moment of
extreme focus on Information Security risk, it makes sense that professionals would not make
false promises about the future of security.
Discussion, limitations and conclusion
Our goal was to shed light on risk perception in the field of Information Technology by
building on a framework taken from the field of social studies. Using the SARF in an
organizational setting, we have found that professionals rely heavily on subjective assessment
in the way they relate to risk. Moreover, the framework has allowed us to show the intricate
interaction between the risk signal, the channels it travelled through, the amplification it
received from institutions, the organizational attenuation, and the actors’ personal biases.
More importantly, we have analysed in detail how the organizational station can be used to
attenuate risk perception within the SARF. Through the example of a bank we have found that,
in accordance with the literature, a sense of denial emerged within the company, and also that
issues of shared understanding, which we called cultural span and perception differential,
could influence risk perception. We went further and uncovered how the organisation could
serve as a defence mechanism and how it was intrinsically dual, thus endowing it with the
power to attenuate risk perception.
However, our study is limited by three main factors. First, we analysed what the literature
calls an “extreme risk”, which is more likely to influence risk perception. Second, the
company we analysed was specific in that it had recently gone through a merger, which played
a great role in the cultural span. Third, the bank had also experienced internal fraud in one
of its foreign branches, which heightened the perceived probability of risk.
With regard to these elements, we conclude that the SARF proved a powerful tool for explaining
risk perception in our case study and that it deserves wider use. We can only suggest
conducting further studies in different banks so as to be able to generalise our findings; a
pertinent approach would be a cross-study among several French banks.
References
Anderson, J. M. (2003). Why we need a new definition of information security. Computers &
Security, 22(4), 308-313.
Bank for International Settlements. (2006). International convergence of capital measurement
and capital standards. Bank for International Settlements, June.
Basili, M. (2006). A rational decision rule with extreme events. Risk Analysis, 26(6), 1721-1728.
Baskerville, R. (1991). Risk analysis: an interpretive feasibility tool in justifying information
systems security. European Journal of Information Systems, 1(2), 121-130.
Beck, U. (1992). Risk society: towards a new modernity. London: Sage Publications.
Bier, V. M., Haimes, Y. Y., Lambert, J. H., Matalas, N. C., & Zimmerman, R. (1999). A
survey of approaches for assessing and managing the risk of extremes. Risk Analysis,
19(1), 83-94.
Brenot, J., Bonnefous, S., & Marris, C. (1998). Testing the cultural theory of risk in France.
Risk Analysis, 18(6), 729-739.
Campbell, S. (2006). Risk and the subjectivity of preference. Journal of Risk Research, 9(3),
225-242.
Loch, K. D., Carr, H. H., & Warkentin, M. E. (1992). Threats to information systems: today's
reality, yesterday's understanding. MIS Quarterly, 16(2), 173-186.
Douglas, M. (1970). Natural symbols.
Douglas, M., & Wildavsky, A. B. (1983). Risk and culture: an essay on the selection of
technological and environmental dangers. Berkeley: University of California Press.
Eloff, M. M., & Solms, S. H. (2000a). Information security management: a hierarchical
framework for various approaches. Computers & Security, 19(3), 243-256.
Eloff, M. M., & Solms, S. H. (2000b). Information security management: an approach to
combine process certification and product evaluation. Computers & Security, 19(8),
698-709.
Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S., & Combs, B. (1978). How safe is safe
enough? A psychometric study of attitudes towards technological risks and benefits.
Policy Sciences, 9(2), 127-152.
Freudenburg, W. R. (1992). Nothing recedes like success? Risk analysis and the
organizational amplification of risks. Issues in Health & Safety, 3, 1-35.
Freudenburg, W. R. (2003). Institution failure and the organizational amplification of risks:
the need for a closer look. In The social amplification of risk (pp. 102-120).
Cambridge University Press.
Fung, A. R.-W., Farn, K.-J., & Lin, A. C. (2003). Paper: a study on the certification of the
information security management systems. Computer Standards & Interfaces, 25,
447-461.
General inspection department (Ed.). (2008). Mission Green, Rapport de synthèse: Société
Générale.
Kasperson, J. X., Kasperson, R. E., Pidgeon, N., & Slovic, P. (2003). The social amplification
of risk: assessing fifteen years of research and theory. In The social amplification of
risk (pp. 13-46). Cambridge University Press.
Kasperson, R. E., & Kasperson, J. X. (1996). The social amplification and attenuation of risk.
Annals of the American Academy of Political and Social Science, 545, 95-105.
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., et al. (1988). The
social amplification of risk: a conceptual framework. Risk Analysis, 8(2), 177-187.
Keil, M. (1995). Pulling the plug: Software project management and the problem of project
escalation. MIS Quarterly, 19(4), 421-447.
Keil, M., Li, L., Mathiassen, L., & Zheng, G. (2008). The influence of checklists and roles on
software practitioner risk perception and decision-making. The Journal of Systems and
Software, 81, 908-919.
Keil, M., Tiwana, A., & Bush, A. (2002). Reconciling user and project manager perceptions
of IT project risk: a Delphi study. Information Systems Journal, 12, 103-119.
Keil, M., Wallace, L., Turk, D., Dixon-Randall, G., & Nulden, U. (2000). An investigation of
risk perception and risk propensity on the decision to continue a software development
project. The Journal of Systems and Software, 53, 145-157.
Lambert, J. H., Matalas, N. C., Ling, C. W., Haimes, Y. Y., & Li, D. (1994). Selection of
probability distributions in characterizing risk of extreme events. Risk Analysis, 14(5), 731-742.
Loon, J. v. (2002). Risk and technological culture: towards a sociology of virulence. New
York: Routledge.
Lyytinen, K. (1988). Expectation failure concept and systems analysts' view of information
system failures - results of an exploratory study. Information & Management, 14(1),
45-56.
Lyytinen, K., Mathiassen, L., & Ropponen, J. (1998). Attention shaping and software risk - a
categorical analysis of four risk management approaches. Information Systems
Research, 9(3), 233-255.
Marris, C., Langford, I. H., & O'Riordan, T. (1998). A quantitative test of the cultural theory
of risk perceptions: Comparison with the psychometric paradigm. Risk Analysis, 18(5),
635-647.
McManus, J., & Wood-Harper, T. (2007). Understanding the sources of information systems
project failure. Management Services, 51(3), 38-43.
Olsen, J. R., Lambert, J. H., & Haimes, Y. Y. (1998). Risk of extreme events under
nonstationary conditions. Risk Analysis, 18(4), 497-510.
Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the
"IT" in IT research - A call to theorizing the IT artifact. Information Systems
Research, 12(2), 121-134.
Paul, N. (2008). Fraude: la Société Générale estime avoir frôlé le pire... Retrieved August 10,
2008, from http://www.latribune.fr/info/IDC9E025F76ECE8574C12573DA004B28C7
Renn, O. (1998). Three decades of risk research: accomplishments and new challenges.
Journal of Risk Research, 1(1), 49-71.
Rosa, E. A. (2003). The logical structure of the social amplification of risk framework
(SARF): Metatheoretical foundations and policy implications. In The social
amplification of risk (pp. 47-79). Cambridge University Press.
Sjöberg, L. (1998). World views, political attitudes and risk perception. Risk: Health, Safety
& Environment, 9(2), 15.
Sjöberg, L. (2000). Factors in Risk Perception. Risk Analysis, 20(1), 1-11.
Sjöberg, L., Moen, B.-E., & Rundmo, T. (2004). Explaining risk perception: an evaluation of
the psychometric paradigm in risk perception research. Rotunde, 84.
Slovic, P. (1987). Perception of risk. Science, 236(4799), 280-285.
Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as analysis and risk
as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis,
24(2), 311-322.
Solms, B. v., & Solms, R. v. (2005). From information security to... business security?
Computers & Security, 24, 271-273.
Stewart, A. (2004). On risk: perception and direction. Computers & Security, 23(5), 362-370.
Straub, D. W., & Welke, R. J. (1998). Coping with systems risk: Security planning models for
management decision making. MIS Quarterly, 22(4), 441-469.
Tansey, J., & O'Riordan, T. (1999). Cultural theory and risk: a review. Health, Risk & Society,
1(1), 71-90.
Textes officiels de la Commission bancaire. (2008). Société Générale. Textes officiels
de la Commission bancaire.
Thompson, M., Ellis, R., & Wildavsky, A. B. (1990). Cultural theory. Boulder, Colo:
Westview Press.
Wahlberg, A. A. E. (2001). The theoretical features of some current approaches to risk
perception. Journal of Risk Research, 4(3), 237-250.
Walsham, G. (1995). Interpretive case studies in IS research: nature and method. European
Journal of Information Systems, 4(2), 74-81.
Yin, R. K. (1981). The case study crisis: some answers. Administrative Science Quarterly, 26,
58-65.