Improving Monte Carlo Simulation-based Stock
Option Valuation Performance using Parallel
Computing Techniques
Travis High
Abstract—Given the complexity of determining an optimal
exercise date for complex option types such as American options,
Monte Carlo simulation is typically used as a means of
approximation. However, these simulations are typically
implemented via sequential processing methods, and thus lose
scalability as the number of iterations in the simulation grows. By
leveraging parallel computing techniques, these simulations can
easily scale to accommodate larger numbers of iterations, and
reduce the variance in the calculation of
optimal option valuation.
Index Terms—Monte Carlo simulation, parallel computing
I. INTRODUCTION
THE stock market is composed of various securities
that potential investors can buy and sell. These
securities can be associated with tangible assets, as is the case
with stocks, or associated with the opportunity to buy/sell
shares of stock at a fixed price, commonly referred to as the
striking price. Stock options, or derivatives as they are
commonly known, are a form of the latter, representing the
opportunity to buy/sell a stock at a striking price for a given
premium per share. There are two main types of options
commonly offered on the market: call and put options. The
way these options are exercised can also differ between
markets. Both the option types and exercising methods are
discussed in the Background section of this paper.
As one would surmise, estimating proper buying/selling
opportunities for derivatives is much more difficult than for
traditional shares, given the added dimension of time and the
need to calculate the maximum profitability of exercising an
option. Therefore, for the purposes of this paper,
we will be evaluating simulation methods for derivatives, in
both the European and American contexts, in order to more
accurately gauge how parallel computing can assist in the
efficiency of their computation. However, in order to
understand how parallel computing can assist in these types of
simulation, we must first understand the numerical methods
that predated Monte Carlo simulations as well as those that are
used in current simulations. Details on this will follow in the
Background section of this paper.
The practice of exercising stock options requires that
estimates can also be performed in an optimal amount of time
to properly maximize market value. This stems from the fact
that the market forces of supply and demand will typically
allow a particular pricing opportunity to exist for only a short
period of time. Given that for many financial
models there does not exist a closed-form solution,
approximation through simulation must be not only used but
optimized to arrive at estimates quickly [2].
Additionally, given the fact that parallel programming
techniques introduce implementations that out-perform their
sequential counterparts [7], [11], a combination of simulation
and parallelism serves to meet the demands imposed by the
markets as closely as possible.
II. BACKGROUND
In order to properly understand how stock option
simulations are used, one must know the calculations used to
determine an option price at a given point in time. This price is
based upon the price of the underlying asset, which in typical
cases is represented by stock. The price of an underlying stock
or security at a point in time is denoted by:
S_t, for 0 ≤ t ≤ T        (1)
where t is a point in time and T is the expiration date of the
option. Using this knowledge as a basis for how stock option
valuations vary over time, we can now discuss the different
types of options that operate upon these securities, as well as
the various methods used to determine their value.
A. Types of Stock Options
A call option is an option that provides the owner the ability
to purchase a stock at the agreed striking price at a future date.
This type of option profits the owner when the current price of
the stock exceeds the striking price. Conversely, a put option
gives the owner the right to sell a stock at the agreed-upon
striking price at a future date. This type of option profits the
owner when the current price of the stock is below the striking
price. As you can see, as an owner of either type of option, an
important part of the valuation process is determining when
the maximum profit can be derived from exercising the
option(s).
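These two payoffs can be stated directly; the sketch below uses our own function names and ignores the premium paid for the option itself:

```cpp
#include <algorithm>

// Payoff per share to the option holder at exercise.
// S is the current price of the underlying stock, K the striking price.
double call_payoff(double S, double K) { return std::max(S - K, 0.0); } // profits when S > K
double put_payoff(double S, double K)  { return std::max(K - S, 0.0); } // profits when S < K
```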
Derivatives are typically sold under different conditions on
different markets, with varying degrees of complexity involved
in calculating their market value. The two types of derivatives
discussed in this paper, European and American, vary
essentially on when they can be exercised. A third type of
option, known as an exotic option, can vary in its definition
based upon the market.
European options are options that are issued to expire at a
future date, and can only be exercised at that future date.
Therefore, the owner’s decision to exercise the option is not
made until the termination date of the option. American
options on the other hand, can be exercised at any point in
time up until the expiration date. This type of option gives the
owner multiple instances in time where the option can be
exercised, and leaves the owner the responsibility of
estimating what policy constitutes the maximum profitability.
Concrete examples of both of these types of options can be
found in detail in [5]. Additionally, supplemental information
on these and the exotic options can be found in [1].
B. Black-Scholes-Merton Model
The dominant theory in practice prior to the use of Monte
Carlo simulations was referred to as the Black-Scholes-Merton
or BSM model. The BSM model for pricing an option
provides an analytical method of producing an option price for
options with a single expiration date [4]. These types of
options are commonly referred to as European options, and
according to recent research [3], the underlying asset
movement models eventually lead to a highly complex partial
differential equation whose solution may be difficult or
impossible to obtain, and impractical in more complex
scenarios. The breakdown of the BSM model to the PDE
described above can be found in [3].
Because these additional challenges are not sufficiently
met by the BSM method, valuations of derivatives that can
be exercised at any point in time up until the expiration
date, also known as American options, are even more difficult
to determine. As a matter of fact, research done in this area has
indicated that the BSM approach often overprices such options
by about ten percent and that this error increases as the time to
maturity increases [2]. Also, after use of the BSM method in
the field by those in the finance community, it became
apparent that it was only valid in a limited number of cases due
to the assumptions made on aspects such as volatility [8]. This
spurred further research, which led to the discovery of a
binomial model for determining valuations.
C. Binomial Lattice
The Cox-Ross-Rubinstein, or CRR, binomial model prices
options through the use of a binomial tree structure. This tree
divides time between valuation date and expiration date and
produces a finite number of time steps, where each node
represents an intersection of stock price and time [8]. This
approach involves the use of “discounting”, which calculates
the value through stepping backwards from the expiration date
to the current date in the tree, through the use of a discounting
factor. In essence, the binomial price tree calculations derive a
result that has the desired price as the root node. Use of this
method has proven successful for both European and
American options, but it still suffers from the same issue of
estimating volatility that other approaches do. This inherent
weakness in financial estimation becomes a selling point for
introducing larger data sets for simulation, which becomes a
problem when computing power of single workstations
operating in a sequential manner reach their threshold of
suitability.
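A minimal sketch of the CRR backward induction for an American call, assuming the standard up/down factors u = e^(σ√Δt), d = 1/u; this is our own helper, not the exact routine from [8] or [10]:

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

// CRR binomial lattice price of an American call.
// S: spot price, K: strike, r: interest rate, sigma: volatility,
// T: time to expiration (years), n: number of time steps in the tree.
double crr_american_call(double S, double K, double r, double sigma,
                         double T, int n) {
    double dt = T / n;
    double u  = std::exp(sigma * std::sqrt(dt));   // up factor
    double d  = 1.0 / u;                           // down factor
    double R  = std::exp(r * dt);                  // per-step discounting factor
    double p  = (R - d) / (u - d);                 // risk-neutral up probability

    // Terminal payoffs at the n+1 leaf nodes of the tree.
    std::vector<double> v(n + 1);
    for (int i = 0; i <= n; ++i)
        v[i] = std::max(S * std::pow(u, i) * std::pow(d, n - i) - K, 0.0);

    // Step backwards from expiration to the valuation date, discounting
    // and checking for early exercise at each node.
    for (int step = n - 1; step >= 0; --step)
        for (int i = 0; i <= step; ++i) {
            double cont = (p * v[i + 1] + (1.0 - p) * v[i]) / R;
            double ex   = S * std::pow(u, i) * std::pow(d, step - i) - K;
            v[i] = std::max(cont, ex);             // early exercise allowed
        }
    return v[0];                                   // root node: the desired price
}
```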
D. Monte Carlo Simulation
A Monte Carlo simulation is a method for iteratively
evaluating a deterministic model using sets of random numbers
as inputs [12]. In the realm of valuing stock options, it can be
done using the following steps as proposed by [5]:
• Simulation of the underlying financial asset(s) and
perhaps other non-stationary parameters (e.g.
interest rate and stock price volatility)
• Evaluation of the function of those asset(s)
According to Fu [5], the second step is merely the definition
of the derivative at hand, while the first step represents the
area we are most concerned with. Fu [5] continues to state that
this technique is the preferred pricing technique when the
following factors are present:
• Complicated dynamics represent the underlying
stochastic processes
• Dependence of the contract on multiple state
variables
• Path-dependent contracts are present
In modeling options in the realm of finance, these
conditions are met through interest rate/stock price volatilities,
dependence on underlying assets, and price history versus
point-in-time estimates. Therefore, Monte Carlo simulations
can be used to model the valuation of stock options.
Additionally, research has determined that applying Monte
Carlo simulation to American options is possible through the
application of a threshold policy [5]. Other sources of proof in
applying Monte Carlo simulations to American options can be
found in [4].
III. LITERATURE REVIEW
In preparation of this paper, several sources of research in
the area of Monte Carlo simulation and valuation of options
were reviewed. Included in this review were works by Fu as
well as Charnes on the pricing of derivatives through
simulation, which shared both the algorithmic and
experimental details for applying Monte Carlo simulation
techniques to option valuation [1], [5]. Additionally, research
performed by Chen and Hong on Monte Carlo simulation in
financial engineering provided information on path generation,
stochastic mesh theory, duality approaches, and provided the
conclusion that growing complexity in financial models
necessitates the increased usage of Monte Carlo simulation
techniques [3]. These works built upon the rudimentary
knowledge imparted through the research of Cobb and
Charnes in real options valuation, where they contrasted
typical option valuation techniques with those of traditional
real option valuations, and directed attention to intersections of
the two approaches [4].
In addition to the works reviewed on the basics and uses of
Monte Carlo simulation in the area of stock option valuation,
research was also performed on alternatives to traditional
simulation methods. An interesting area of research on this
topic came from Kumar, Thulasiram, and Thulasiraman, who
introduced the theory of applying an Ant Colony
Optimization (ACO) algorithm to option valuations [8]. This
work produced the notion that optimization of the simulation
could occur through drastically reducing the size of the
solution space. Though their work did not produce an optimal
shortest-path solution, it did produce encouraging results in the
long-term valuation of option prices.
The final area of research reviewed dealt with the
implementation of Quasi-Monte Carlo simulations. Chen and
Thulasiraman implemented a distributed algorithm for option
pricing utilizing heterogeneous networks of workstations
(HNOWs) using mpC [2]. Their work did not produce a
conclusive result in favor of Quasi-Monte Carlo simulations,
but did detail how taking factors such as network topology and
architecture differences into account can produce significant
speedups over MPI implementations.
Throughout the research performed reviewing current and
historical methods for valuation of options, it has been
concluded that a definitive answer to producing
approximations for stock options does not exist. However,
current simulation methods are advancing to accommodate
larger and more complex financial models, including
leveraging efficiencies gained through distributed computing.
These advances bring promise to the hope that technology will
scale to meet the needs of those in the financial community
that make future business decisions based upon intrinsic
financial models that are implemented on existing
architectures.
IV. MONTE CARLO SIMULATION METHODOLOGY
There are various models in existence for performing Monte
Carlo simulations for valuation of stock options. However, one
thread that runs through each simulation is the fact that there
are typically three steps as detailed by Chen and Hong [3]:
1. Generating sample paths
2. Evaluating the payoff along each path
3. Calculating an overall average to obtain estimation
The combination of these steps forms the final Monte Carlo
simulation that produces our estimation of a stock option. Each
of these steps introduces varying levels of complexity to the
overall simulation. Mitigation of the complexities of these
steps is an area of continuing research.
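The three steps above can be sketched for a European call under geometric Brownian motion (a standard model choice; the function name and parameters here are our own):

```cpp
#include <cmath>
#include <random>
#include <algorithm>

// Monte Carlo estimate of a European call price under geometric Brownian
// motion. S: spot, K: strike, r: interest rate, sigma: volatility,
// T: expiration (years), paths: number of simulated paths.
double mc_european_call(double S, double K, double r, double sigma,
                        double T, int paths, unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < paths; ++i) {
        // 1. Generate a sample path (only the terminal point is needed here).
        double ST = S * std::exp((r - 0.5 * sigma * sigma) * T
                                 + sigma * std::sqrt(T) * z(gen));
        // 2. Evaluate the payoff along the path.
        sum += std::max(ST - K, 0.0);
    }
    // 3. Calculate the overall average (discounted) to obtain the estimate.
    return std::exp(-r * T) * sum / paths;
}
```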
In Monte Carlo simulation for stock option pricing, there are
numerous methods for generating sample paths in discrete
times. Chen and Hong outline various methodologies in
current practice, including Euler-Maruyama Discretization, the
Milstein scheme, and acceptance-rejection sampling (ARS)
[3]. The details of these schemes are beyond the scope of this
paper, but can be found in [3]. Each of these schemes works to
reduce the amount of discretization error incurred during the
generation of the sample paths, which arises from the
technique being used to generate the discrete times. However,
after reviewing each of the approaches independently, it can be
determined that each of these techniques produces a model that
requires random inputs in order to generate the required output
for our Monte Carlo simulation. This represents an area
suitable for simulation within a larger simulation.
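As a concrete instance, the Euler-Maruyama scheme applied to the geometric Brownian motion model dS = rS dt + σS dW can be sketched as follows; this is a simplification of the general scheme presented in [3], with names of our own choosing:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Euler-Maruyama discretization of dS = r*S dt + sigma*S dW over m
// discrete time steps, producing one sample path of length m+1.
std::vector<double> euler_path(double S0, double r, double sigma,
                               double T, int m, std::mt19937 &gen) {
    std::normal_distribution<double> z(0.0, 1.0);
    double dt = T / m, sqdt = std::sqrt(dt);
    std::vector<double> path(m + 1);
    path[0] = S0;
    for (int k = 0; k < m; ++k)
        // S_{k+1} = S_k + r*S_k*dt + sigma*S_k*sqrt(dt)*Z_k
        path[k + 1] = path[k] + r * path[k] * dt
                      + sigma * path[k] * sqdt * z(gen);
    return path;
}
```

The random inputs Z_k are exactly the values a larger parallel simulation would generate in bulk, which is the "simulation within a larger simulation" noted above.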
Evaluating non-American payoffs occurs in a
straightforward fashion, where optimizing efficiency is the
sole area of research [3]. In contrast, American options are an
area of increasing research, and the methods proposed for handling
the evaluation of payoff along a path vary from dynamic
programming to duality, which can be researched in greater
detail in [3]. Essentially, both approaches implement recursion
in different ways to arrive at their determination of the result,
and along with the recursion, introduce factors that are
difficult to solve using Monte Carlo simulation. The dynamic
programming approach introduces the problem of evaluating
continuation values, while the duality method introduces the
challenge of comparing additive and multiplicative duals [3].
This represents a significant portion of the calculation
involved in our Monte Carlo simulation.
The calculation of the overall average is a straight-forward
process that bears no need for explanation. However, it is
interesting to consider that this step of the process represents a
convergence of simulated payoff amounts, which in and of
itself bears a remarkable resemblance to parallel execution
techniques, which converge outputs of various computing
nodes into a single result set, and ultimately a computed
answer.
V. APPLYING PARALLEL COMPUTING TECHNIQUES
In order to properly boost performance of our Monte Carlo
simulation, we must first identify what portions of our
simulation can be broken into discrete units and handed off to
individual nodes for processing. The initial hurdle we must
cross is the size of the data set. A data set
that is not large enough will incur a longer execution time, due
to the communications overhead required to pass messages
between processing nodes and the host node. A general
equation that can be used to determine the usefulness of
breaking a process into separate threads running in parallel is:
T_o = pT_p − T_s        (2)

where T_o represents the total overhead function, pT_p
represents the total processing time summed over all p
processing elements, and T_s represents the time spent
doing actual, useful work [6]. If we can solve
the problem defined in our Monte Carlo simulation such that
this overhead T_o remains small relative to T_s, then we can
profitably apply parallel computing techniques to
our simulation. Obviously, the challenge involved in
determining this value varies depending upon both the
complexity of the equation being used as well as the
homogeneity of the underlying processing nodes, as machines
with varying processing capabilities introduce additional
noise into the final result [11]. Methodologies for addressing
various inconsistencies in our parallel architecture are
discussed in further detail in [11].
In Monte Carlo simulation of stock prices, the calculations
we perform end up being perfect candidates for parallel
execution due to the way they typically perform averages of
several computed values [11]. In order to see this more clearly,
we can consider the calculations commonly performed in
Monte Carlo simulations to perform stock option valuations.
Chen and Hong [3] introduce a straight-forward formula
often used to perform the path generation step of stock
valuation, known as the Euler-Maruyama discretization,
which they subsequently reduce to the summation of a couple
of integrals. Their reduction of this algorithm is quite lengthy
and is beyond the scope of this paper, but can be found in [3].
It along with other approaches outlined in research [3], [5]
typically result in the calculation and summation of multiple
integrals. It then makes sense that these individual calculations
can occur in separate processes and then be substituted back
into the original equation for the final summation and returned
as a result. Parallel programming aids us here by giving
us the ability to assign individual sub-equations of the entire
equation to individual processing nodes or groups of nodes,
returning those results to a single processing node that
performs the final equation and obtains the result.
Another simple consideration is the fact
that properly modeling aspects such as volatility needs to be
done using large sets of randomly generated values. This is an
area where parallelism excels, as it allows a large population
of random values to be generated in a fraction of the time a
sequential processing node requires by dividing the population
size amongst the number of nodes. In fact, according to [11],
the resulting parallel solution executes with the same bias and
with a reduced variance in comparison to the sequential
solution. This means that we not only produce the results more
quickly, but we get results that are more uniform and thus, give
us the type of random values we need for consistent
estimation.
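The division described above can be sketched in shared memory with C++ threads; the experiment in this paper used MPI across workstations, so the worker count, seeding scheme, and names here are our own stand-ins:

```cpp
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// Divide a population of random normal draws among worker threads, each
// with its own independently seeded generator; the host then converges
// the partial sums into a single averaged result.
double parallel_mean_of_normals(long total, int workers) {
    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;
    long chunk = total / workers;            // fair share per worker
    for (int w = 0; w < workers; ++w)
        pool.emplace_back([&partial, w, chunk] {
            std::mt19937 gen(1234 + w);      // distinct seed per worker
            std::normal_distribution<double> z(0.0, 1.0);
            double s = 0.0;
            for (long i = 0; i < chunk; ++i) s += z(gen);
            partial[w] = s;
        });
    for (auto &t : pool) t.join();
    double sum = std::accumulate(partial.begin(), partial.end(), 0.0);
    return sum / (chunk * workers);          // host converges the partials
}
```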
Typically in computer-based simulations, a single program
is produced that performs the simulation in a sequential
fashion. Just as no two financial simulation programs are alike
in their approaches, neither are most mathematical financial
models due to the lack of closed-form solutions for the
majority of applications. In light of this, a single approach is
not presented in this paper, but rather a series of insights as to
how these simulations can use parallel techniques. The
following three ways have been presented in [7] as ways to
break simulation programs into parallel processes that can
introduce the additional performance gains mentioned earlier:
1. Execution of a single main task that creates a number
of subtasks
2. Divide the program into a set of separate binaries
3. Divide the program into several types of tasks in
which each task is responsible for creating only
certain tasks as needed
As we have seen proposed throughout this paper, there are
numerous algorithms that are used in Monte Carlo simulation
that leverage several complex calculations. By applying the
techniques mentioned above to those calculations, a parallel
program that performs the same calculation can be derived that
will provide the benefits of simultaneous calculation as well as
a significant speedup. Rosenthal points out in [11] that the
speed of the same calculation, performed in parallel rather
than sequentially, is increased by a factor of C, where C equals
the number of processing nodes used. This gives us a linear
speedup of calculations that can be executed in parallel. In
order to see this for ourselves, we will perform an experiment,
simulating an option price using both a serial approach as well
as a parallel approach.
VI. EXPERIMENT
In order to demonstrate how leveraging parallel
computing techniques can benefit the simulations discussed
in this paper, a series of simulations of option prices will be
performed using a method called binomial approximation.
Binomial approximation in the financial sense relates to the
fact that an option’s underlying security follows a path that
over time trends in two directions: up and down. By
simulating all of the possible values of the security up until
the expiration date of the option, we can move back from
the terminal date to a prior date through discounting the
security with the current interest rate. As you can imagine,
the state space that is generated by this method increases
significantly with each step simulated, and can easily require
large numbers of calculations. This difficulty in serial
processing leads to longer delays before decisions are made,
and gives us an area where improvements in efficiency can
translate into money saved and/or earned.
To be able to evaluate the hypothesis that implementing
parallel computing techniques will translate into speedup in
execution time, it is important to take into consideration the
testing environment. For this experiment, a single
workstation is used to execute a program that is written in a
traditional, serial fashion, that calculates each of the steps in
the binomial approximation in a separate iteration. The
parallel program executes the same call, only it operates
against a ring of homogeneous workstations, which each are
assigned an appropriate workload by the host machine. The
host machine in the parallel configuration is the same
machine used for the serial trials. The primary difference is
that the workstation is used as the host node in a cluster, and
incurs the additional overhead of communicating with the
other workstations in the ring.
The software library chosen to implement the
parallel experiment is MPICH2, an open-source
implementation of the Message Passing Interface (MPI),
the de facto standard in the computing industry for
parallel programming. Also freely available was the implementation of the
binomial approximation function, as provided in a C++
“financial recipes” library, as referenced in Dr. Bernt
Odegaard’s work in [10]. The combination of these two
technologies provided the framework for performing the
implementation and execution of the parallel tests.
To perform the experiment, the first and foremost need
was to decide on a proper testing scenario. It was decided
that a meaningful and available example would be to
implement the American option pricing sample referenced
in Dr. Odegaard’s work in [10], in order to have a proper
reference for correctness. For an overview of the exact
algorithm used, Dr. Odegaard explains this in great detail in
both code and mathematical terms in [10]. His example laid
out the following parameters for later use in the
calculations:
S (price of stock) = 100.0
K (strike price) = 100.0
R (interest rate) = 0.10 (10%)
σ (volatility) = 0.25 (25%)
t (time) = 1.0 (years)
no_steps = 100
It was decided to test as large an array of sample step
sizes as the hardware could accommodate. By choosing a
wide enough range, it was the author's estimation that
trends would indeed start to
surface. Additionally, it was decided that each test should be
performed ten times, in order to produce an average of
several attempts, while still staying within reasonable time
limits. Given that the parallel framework and C++
implementation of the functions were already in place, it
then became a matter of deciding how to perform the
parallel implementation and including timing measures in
each implementation.
In order to best leverage the collective computing power
of the nodes in the ring, it was decided to fairly distribute
the number of calculations of each step amongst the
processing nodes. Each processing node would perform the
calculations for their share of steps and report their final
result back to the host node, which would then perform the
average of the values and report the final result. This
approach sought to leverage the parallel nature of the
implementation to reduce the execution time in a linear
fashion, in direct relation to the number of processing
nodes. Thus, the desired outcome of the experiment was a
linear speedup of the execution time of the option valuation,
with speedup being defined as the execution time of the
serial approach divided by the execution time of the parallel
approach.
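The fair distribution of calculations described above amounts to simple bookkeeping; the sketch below is our own helper, not the MPI code used in the experiment:

```cpp
#include <vector>

// Fairly divide `steps` calculations among `nodes` processing nodes:
// each node receives an equal share, with any remainder spread one
// calculation at a time over the first nodes.
std::vector<int> fair_shares(int steps, int nodes) {
    std::vector<int> share(nodes, steps / nodes);
    for (int i = 0; i < steps % nodes; ++i) share[i] += 1;
    return share;
}
```

Each node then runs its share and reports its result back to the host, which averages the values into the final answer.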
VII. RESULTS
In the execution of the experiment, the stated hypothesis
was that by introducing parallel processing, significant gains
can be made in regard to execution time, given the
mathematical nature of the algorithm used. Thus, the
expectation of the experiment was to notice a linear increase
in speedup as additional processing nodes are introduced
and the number of steps being calculated increased. In
addition to this, it is also important that the results are
approximate to the results we achieve through serial
methods.
In this experiment, the results we received from each run
mirrored those of the previous runs, which determined we
had a consistent and reliable implementation. However, it is
important to note that the serial and parallel values all differ
very slightly, due to the averaging of values that occurred in
the parallel implementations. You can see the evidence of
this in Table 1 below.
Steps     Serial     Parallel (2)   Parallel (4)
1000      13.6146    13.6122        13.6073
5000      13.6166    13.6161        13.6151
10000     13.6169    13.6166        13.6161
25000     13.6170    13.6169        13.6167
Table 1 – Comparing Results from the Serial and Parallel Executions
Since the values are expected to change as more steps are
evaluated, the change in amounts is suitable for use in our
simulations. This allows us to satisfy the criteria that the
parallel implementation works properly, and thus evaluate
the speedup that took place. As you will note in the final
speedup results in both Table 2 and Fig. 1, the speedup
calculated for both two and four processing nodes executing
in parallel produced results greatly in favor of the stated
hypothesis:
Steps     Parallel (2)   Parallel (4)
1000      0.891742       0.350574
5000      18.02945       24.91269
10000     5.315298       68.66313
25000     5.949762       34.67992
Table 2 – Speedup Measures of 2 and 4 Node Configurations
These speedup results allow us to conclude that there are
indeed great gains to be made with a parallel implementation
of the serial algorithm. Though they are not entirely linear,
they follow a linear trend, and at least support the notion being
proposed in regard to the approach previously mentioned.
Along with calculating the speedup that occurred during the
experiment, the efficiency measure of each additional
processing element was calculated as well. This measure
indicates a relative measure of gain introduced by each
additional processing element in a parallel simulation. For the
purposes of the experiment, efficiency was measured as the
speedup divided by the number of processors, which produces
the speedup per processor. The results of this calculation are
displayed in Fig. 2 below, and surprisingly seem to indicate
that though speedup tends to increase, the rate of increase
declines slightly after 10,000 steps.
VIII. CONCLUSION
In the simulation of stock option valuation utilizing Monte
Carlo methods, there are various approaches that those in the
financial community use to derive approximations of value.
While these models may differ greatly, many of the underlying
calculations meet the conditions for parallel execution, as
research has demonstrated in [2], [11] and as our own
experiment confirms. While this paper does perform an
independent study of its application, the example cannot be
representative of the models and processes in place across the
broader financial community. However, it has been shown
through both logic and associated research that Monte Carlo
simulation of stock option valuation utilizing parallel
computing techniques is both feasible and effective.
Given the affordability of computer workstations, and the
low barrier of entry that programming libraries such as MPI
and OpenMP present, it is the conclusion of this study that
there are not only significant financial gains to be realized
through the application of parallel techniques, but
technological and scientific gains as well. However, given that
this study does not go into detail on the mathematical proof of
parallel performance gains, or on specific code examples of
how they can be achieved for other processes/models, these
areas could be explored in future work.
Fig. 1 – Step Evaluation Speedup of Parallel Processing: speedup versus
number of processing nodes (1, 2, 4) for 1,000, 5,000, 10,000, and 25,000
steps.
Fig. 2 – Efficiency of Serial vs Parallel Techniques: speedup per processor
versus number of simulations (1,000 to 25,000) for the serial, 2-node, and
4-node configurations.
REFERENCES
[1] Charnes, J. M. 2000. Options pricing: using simulation for option pricing. In Proceedings of the 32nd Conference on Winter Simulation (Orlando, Florida, December 10-13, 2000). Society for Computer Simulation International, San Diego, CA, 151-157.
[2] Chen, G., Thulasiraman, P., and Thulasiram, R. K. 2006. Distributed Quasi-Monte Carlo algorithm for option pricing on HNOWs using mpC. In Proceedings of the 39th Annual Symposium on Simulation (April 2-6, 2006). IEEE Computer Society, Washington, DC, 90-97. DOI= http://dx.doi.org/10.1109/ANSS.2006.20
[3] Chen, N. and Hong, L. J. 2007. Monte Carlo simulation in financial engineering. In Proceedings of the 39th Conference on Winter Simulation (Washington, D.C., December 9-12, 2007). IEEE Press, Piscataway, NJ, 919-931.
[4] Cobb, B. R. and Charnes, J. M. 2007. Real options valuation. In Proceedings of the 39th Conference on Winter Simulation (Washington, D.C., December 9-12, 2007). IEEE Press, Piscataway, NJ, 173-182.
[5] Fu, M. C. 1995. Pricing of financial derivatives via simulation. In Proceedings of the 27th Conference on Winter Simulation (Arlington, Virginia, December 3-6, 1995). C. Alexopoulos and K. Kang, Eds. IEEE Computer Society, Washington, DC, 126-132. DOI= http://doi.acm.org/10.1145/224401.224453
[6] Grama, A., et al. Introduction to Parallel Computing, 2nd ed. Essex, UK: Addison-Wesley, 2003. O'Reilly Safari. 18 July 2008 <http://safari.oreilly.com/0201648652>.
[7] Hughes, C. and Hughes, T. Parallel and Distributed Programming Using C++. Boston: Addison-Wesley Professional, 2003. O'Reilly Safari. 18 July 2008 <http://safari.oreilly.com/0131013769>.
[8] Kumar, S., Thulasiram, R. K., and Thulasiraman, P. 2008. A bioinspired algorithm to price options. In Proceedings of the 2008 C3S2E Conference (Montreal, Quebec, Canada, May 12-13, 2008). C3S2E '08, vol. 290. ACM, New York, NY, 11-22. DOI= http://doi.acm.org/10.1145/1370256.1370260
[9] Lemieux, C. 2004. Randomized quasi-Monte Carlo: a tool for improving the efficiency of simulations in finance. In Proceedings of the 36th Conference on Winter Simulation (Washington, D.C., December 5-8, 2004). Winter Simulation Conference, 1565-1573.
[10] Odegaard, B. A. Financial Numerical Recipes in C++. 11 Aug. 2008 <http://finance.bi.no/~bernt/gcc_prog/recipes/recipes/recipes.html>.
[11] Rosenthal, J. S. Parallel Computing and Monte Carlo Algorithms. probability.ca. 18 July 2008 <http://probability.ca/jeff/ftpdir/para.pdf>.
[12] Wittwer, J. W., "Monte Carlo Simulation Basics," Vertex42.com, June 1, 2004, <http://vertex42.com/ExcelArticles/mc/MonteCarloSimulation.html>.