combined file of all of the abstracts

6th Complexity in Business Conference
Presented by the
October 30 & 31, 2014
This file is a compilation of all of the abstracts presented at the conference.
They are in order by the first author/presenter’s last name.
Aschenbrenner, Peter
Managing the Endowment of Child Entities in Complex Systems:
The Case of National Banking Legislation, 1781-1846
Peter Aschenbrenner
From 1781 through 1846 officials (acting under Constitutions I and II) wrestled with the problem of
creating national banking institutions which would serve the needs of the national government (among
other constituencies). These were the most prominent, and the most controversial, of the ‘child entities’ created in the interval 1777-1861.
A different perspective is offered: I treat a (generic) national bank as a problem faced by legislators with
respect to the kinetics (the dirty details) of endowing a child entity. This approach centers analysis on
the Act of Congress creating/contracting with the entity. I then modestly generalize from this ‘act’ of
creation. The parent solves a perceived governance problem by creating/refining service missions and
assigning them to the child structure to fulfill.
Examples of service missions (1789-1861): funding internal improvements, promoting local and state
education through resource-based funding, advancing science and technology, enhancing public
knowledge and procuring developed talent to officer the armies of the new republic. Congress ‘learned
by doing’ when it came to endowing public, private and semi-public child entities with service missions.
This is not surprising: national government (once independence was declared) was obliged to replicate
many centuries of mother-country experience that had accumulated in creating and managing parent
interactions with (new) child entities.
The twenty-nine official events relevant to national banking arrangements (legislation, presidential approvals/vetoes, court cases) are divided into ten discrete event states, as the national government attempted to charter or recharter these institutions, along with the relevant sources and dates. Each may be understood as an ‘information exchange’ and coded as a time step in an agent-based model (ABM) if the investigator graphically simulates the ten exchanges, as I will do in my presentation.
In this case I investigated difficulties arising from assumptions made by some but not all parent
actors/bodies. The most problematic assumption was that written instructions (semi-regimented = legal
language) which parents did not craft would assist these parents in endowing child entities which would
operate more successfully in the real world (= the causal inference). In my model I treat the process of
endowing a child entity as a probability space in which many factors are at play, including benefit
distribution, constituency demands, costs, and revenue available for endowment. I construct my model to allow testing of ‘what ifs’ with these factors in play.
My secondary research thesis is that an inverse relationship is at work: service mission fulfillment by the
child entity is degraded by appeals to ‘template’ language that (should have) governed parent behavior
when the parent endowed the child. I suggest that the operation of procurement models in complex systems is optimized when adherence to the ideology of pre-fabricated endowments is minimized.
What remains: how can degradation of performance at the parent and child level be measured?
Complexity theory is offered as a framework to structure the investigation.
Babutsidze, Zakaria
A Trick of the Tail: The Role of Social Networks in Experience-Good Market Dynamics
Zakaria Babutsidze and Marco Valente
Among the many changes brought by the diffusion of the Internet is the possibility of accessing the opinions of a vastly larger number of people than those within the range of physical contact. As a result, the strength of network-externality effects in individual decision making has been increasing over the last 20 years. In this paper, we study the effect of an increasing size of local social networks in decision-making setups where the opinions of others strongly impact individual choice.
In this work we explore the question of whether and how the increased number of sources providing
opinions is reshaping the eventual distribution of consumers' choices as reflected in market shares. An
apparently obvious result of increasing size of social networks is that any one source of information (i.e.
a single social contact/friend) is less relevant than in sparser societies. As a consequence, one may expect that larger networks produce more evenly distributed choices.
In fact, the fall of blockbuster titles was predicted a few years ago by Internet thinkers. The prediction was that the rise of information technologies, together with the sharp increase in the number of titles offered on the market, would result in a longer tail (i.e. a larger number of "smallish" (niche) titles) at the expense of high earners. Reality has only partly confirmed these expectations. True, the industry showed an increased number of niches. However, the concentration at the top end of the market, which was expected to pay the price of the lengthening of the tail, has actually increased instead of decreasing -- the size of the top-selling movies has unequivocally increased.
In other words, with time the market share distribution in titles has become polarized -- and the
"middle" of the distribution (e.g. average earners) has gotten squeezed out. This polarization effect
(consumers concentrating at the top- or bottom-end of options ranked by sales) is quite puzzling and
deserves a close attention.
In this paper we present a stylized model providing a stripped-down representation of an experience-good market. The model is populated by a number of consumers who need to choose one movie among many on offer, and who rely exclusively on information obtained from members of their social network. To
highlight the structural contribution by the size and the topology of the network to observed events, we
assume that consumers have identical preferences and that all movies are potentially identically
appreciated. In so doing, we remove the possibility of other factors generating whatever results we
obtain.
Such a simple model is able to reproduce the widely observed phenomenon of increasing polarization
with the increasing size of the consumers' social network. We test the model under a wide variety of network types and parameterizations, showing that the increased density of networks is able, on its own, to account for the disappearance of mid-sized titles and the increase in both the share of business generated by blockbusters and the share generated by niche titles.
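As a minimal sketch of what such a purely network-driven choice rule could look like (the abstract does not specify the exact mechanism, so the imitation rule below is an assumption), one round of choices might be coded as:

```python
import collections
import random

def choice_round(network, n_movies, prior_choices):
    """One illustrative choice round: each consumer imitates the movie most
    watched among their network neighbours; with no informed neighbour (or a
    tie) the choice falls back to a uniform random pick."""
    new_choices = {}
    for consumer, friends in network.items():
        seen = collections.Counter(prior_choices.get(f) for f in friends
                                   if prior_choices.get(f) is not None)
        if seen:
            top = max(seen.values())
            new_choices[consumer] = random.choice(
                [movie for movie, count in seen.items() if count == top])
        else:
            new_choices[consumer] = random.randrange(n_movies)
    return new_choices
```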
Bakken, David
Failure to Launch:
Why Most Marketers Are Not Jumping on the Agent-based Modeling Train
David Bakken
Most marketing decisions (such as whether to launch a new product) are made in relative ignorance of
the complex interactions that will ultimately determine the outcomes of those decisions. Marketers rely
on simple models of processes like adoption and diffusion of innovation. These models often ignore or
assume away heterogeneity in consumer or buyer behavior. The well-known Bass Model of new product
adoption is one such model for new product decision making. Other relatively simple models are used to
make decisions about marketing mix and sales force deployment.
The pharmaceutical industry offers a good example of a market with complex interactions between
agents (e.g., physicians, patients, payers, and competing drug manufacturers). However, critical business
decisions (such as which clinical endpoints to pursue in clinical trials) are made on the basis of fairly
simple multiplicative models (for example: "size of indication market X % of patients treated X % of
patients treated with drug = peak patient volume").
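As a rough illustration of how little structure such a multiplicative forecast carries, the calculation reduces to one line of arithmetic; the figures below are hypothetical placeholders, not data from the paper:

```python
# All figures are hypothetical placeholders, not numbers from the paper.
indication_market_size = 1_000_000   # patients with the indication
pct_treated = 0.60                   # share of patients receiving any treatment
pct_treated_with_drug = 0.25         # share of treated patients receiving this drug

peak_patient_volume = indication_market_size * pct_treated * pct_treated_with_drug
print(f"Peak patient volume: {peak_patient_volume:,.0f}")   # -> 150,000
```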
The author has been promoting agent-based models for marketing decision-making for about 10 years
(e.g., my article "Vizualize It" in Marketing Research, 2007). Despite the boom in individual-level data
that reveals the heterogeneity in preferences and behavior (such as the growth in hierarchical Bayesian
models for consumer choice) as well as expressed curiosity about agent-based modeling, the author has
observed few implementations of agent-based modeling to develop the system-level insights that would
lead to decisions based on a more complete picture of a market.
In this paper I'll discuss, based on my own experience, the factors that keep marketers from adopting
ABM despite many potential advantages of ABM for informing decision-making. Even though there is
plenty of evidence that the models that are used to make many marketing decisions are not particularly
effective (they don't consistently lead to better outcomes), marketers seem unwilling to invest much in
developing alternative models.
First and foremost is the tactical nature of most marketing decisions. That is accompanied by a relatively
short time horizon (e.g., 2-5 years) that places a premium on making a "good enough" decision.
A second factor is the preeminence of statistical modeling in academic modeling.
Agent-based models require, at least initially, more effort to develop and test, and, because insights often come from emergent behavior, they are perhaps a little scary for the average marketer.
I'll share my experience in helping clients take some agent-based modeling baby-steps and suggest some
ways to overcome some of the barriers that keep marketers from adopting and using agent-based
modeling. In his famous paper on managerial "decision calculus," John D. C. Little listed 6 characteristics
that a model needs to satisfy.
Burghardt, Keith, et al.
Connecting Data with Competing Opinion Models
Keith Burghardt, William Rand and Michelle Girvan
In this paper, we attempt to better model the hypothesis of complex contagions, well known in
sociology, and opinion dynamics, well known among network scientists and statistical physicists.
First, we will review these two ideas. The complex contagion hypothesis states that new ideas spread between individuals much as diseases pass from a sick person to a healthy one, although, unlike most biological diseases, individuals are highly unlikely to adopt a product or controversial idea if not exposed to it multiple times. In comparison, simple contagions, like viruses, are
thought to be caught easily with as little as one exposure.
Opinion dynamics is the study of competing ideas via interactions between individuals. Although the
prototypical example is voting for a political candidate, anything from competing products and
companies to language diffusion similarly uses local interactions to compete for dominance.
The problems we will address in each field are the following. The complex contagion hypothesis has few
models that can match the behavior seen in empirical data, and is therefore in need of a realistic model.
Unlike complex contagions, opinion dynamics does not suffer from a lack of quantitative models, although models contradict one another when trying to describe very similar behavior in voting patterns, which, although not necessarily incorrect, suggests that there may be a deeper model that can combine the empirical observations. Lastly, agents in opinion dynamics models can reach the same opinion quickly (i.e. the timescale T_cons in random networks typically scales as N^a, where N is the number of agents and a <= 1). Reaching one opinion is typically known as reaching "consensus" in previous work, and we therefore adopt this language for the present paper. Depending on how we define time steps, this is in disagreement with the observation that competing political parties have lasted a long time (e.g. > 150 years in the US).
We address these issues with what we call the Dynamically Stubborn Competing Strains (DSCS) model, introduced in this paper. Our model is similar to the Susceptible-Infected-Susceptible (SIS) model well known in epidemiology, where agents can be "infected" (persuaded) by their neighbors and recover (have no opinion). Our novel contributions are:
- Competing opinions ("strains").
- Ideas are spread outward from agents that "caught" a new idea.
- Agents are increasingly less likely to change their opinion the longer they hold it.
We interpret the recovery probability as randomly becoming undecided (or having "self-doubt"). These differences are essential to create a minimal effective complex contagion model that shares agreement with real data.
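A minimal sketch of one update step of such a dynamic is shown below; the specific functional forms (persuasion probability, recovery probability, and their decay with holding time) are illustrative assumptions, not the authors' specification:

```python
import random

def dscs_step(opinions, hold_time, neighbors, beta=0.3, delta=0.1, kappa=0.5):
    """One illustrative update of a DSCS-style dynamic. opinions[i] is a strain
    id or None (undecided); hold_time[i] counts how long opinion i has been held.
    The functional forms below are assumptions, not the authors' specification."""
    i = random.randrange(len(opinions))
    if opinions[i] is None:
        return
    # Spread outward: an opinionated agent tries to persuade a random neighbor;
    # the neighbor's growing stubbornness lowers the persuasion probability.
    j = random.choice(neighbors[i])
    if opinions[j] != opinions[i]:
        if random.random() < beta / (1.0 + kappa * hold_time[j]):
            opinions[j], hold_time[j] = opinions[i], 0
    # Self-doubt: with a stubbornness-damped probability, i becomes undecided.
    if random.random() < delta / (1.0 + kappa * hold_time[i]):
        opinions[i] = None
    else:
        hold_time[i] += 1
```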
In the following sections, we will show agreement between our model and several empirical studies.
Firstly, we will show agreement between our model and complex contagion behavior. Secondly, we will show
that our model can help describe the collapse of the probability distribution of votes among several
candidates when scaled by v_0^(-1), where v_0 is the average number of votes per candidate. Next, we
will argue that, because we believe that our model approaches the Voter Model Universality Class
(VMUC) in certain limits, it has the same long-range vote correlations observed in several countries.
Lastly, we will show that model parameters allow for arbitrarily long times to reach opinion consensus.
A question that may arise at this point is why we are looking at complex contagions and opinion dynamics with the same model. Surely, one may propose, there is no competition in complex contagions? The argument we make is that humans have limited cognition and thus can only focus on a few ideas at a given time (in economics, this is known as "budget competition"). We take the limit that
we have only one idea being actively spread (alternatively, we can have time-steps small enough that
there is only one idea that we can focus on). Therefore, although we may adopt several coding
languages, for example, we can only focus on one (e.g. Python or Matlab but not both) at a given time,
thus introducing competition into complex contagions.
Next, we may ask what motivation lies behind our model parameters. Why is there increasing stubbornness or a notion of self-doubt, aside from convenient agreement with data? Our model is based on a few intuitive observations seen in voting patterns. Conservatism (whose name means "resistance to new ideas") increases with age, implying that dynamic stubbornness may exist among voters. Furthermore, a large, stable fraction of "independent" voters not tied to a single party suggests that an "unopinionated" state exists in real life. A reduction in poll volatility has been observed before an election, which we can understand as a reduction in self-doubt as agents increasingly need to make a stable decision before they go to the polls. Lastly, rumors and opinions seem to spread virally, meaning that the cumulative probability of adopting an idea increases with the number of exposures to that idea;
therefore a realistic opinion model should have “viral” dynamics. Notice that nothing in our model
explicitly implies the idea of complex contagions. In other words, the complex contagion behavior we
see is not ad hoc, but is instead a natural outcome of our model.
Chica, Manuel
Centrality Metrics for Identifying Key Variables in System Dynamics
Modelling for Brand Management
Manuel Chica
System dynamics (SD) provides the means for modelling complex systems such as those required to
analyse many economic and marketing phenomena. When tackling highly complex problems, modellers
can soundly increase their understanding of these systems by automatically identifying the key variables
that arise from the model structure.
In this work we propose the application of social network analysis centrality metrics, such as degree or closeness centrality, to quantify the relevance of each variable. These metrics can assist modellers in identifying the most significant variables of the system.
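A minimal sketch of this kind of ranking, using the networkx library on a hypothetical influence graph (the variable names are placeholders, not those of the TV-show model), might look like:

```python
import networkx as nx

# Hypothetical causal structure of an SD model: edges are influence links
# between variables (names are placeholders, not the TV-show model's variables).
G = nx.DiGraph([
    ("advertising", "awareness"), ("awareness", "audience"),
    ("audience", "revenue"), ("revenue", "advertising"),
    ("content_quality", "awareness"), ("content_quality", "audience"),
])

# Rank variables by two of the centrality metrics mentioned above.
degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
ranking = sorted(G.nodes, key=lambda v: degree[v] + closeness[v], reverse=True)
print("candidate key variables:", ranking[:3])
```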
We have applied our proposed key variable detection algorithm to a brand management problem
modelled via system dynamics. Concretely, we have modelled and simulated a TV show brand
management problem. We have followed Vester's sensitivity model to shape the system dynamics and
structure. This SD methodology is convenient for sustainable processes and enables analysts to simplify
the real world complexity into a simulation and consensus system.
After applying the algorithm and extracting the key variables of the model structure, we have run different simulations to compare the global impact of injecting strategic actions only into the top-ranked key variables. Simulation results show how changes in these variables have a noteworthy impact on the whole system relative to changes in other variables.
Darmon, David
Finding Predictively Optimal Communities in Dynamic Social Networks
David Darmon
The detection of communities is a key first step to understanding complex social networks. Most
methods for community detection map a static network to a partition of nodes. We propose using
dynamic information via predictive models to generate predictively optimal communities. By better
understanding community dynamics, managers can devise strategies for engaging with and creating
content for social media, providing them with a powerful way to increase brand awareness and
eventually sales.
Klemens, Ben
A Useful Algebraic System of Statistical Models
Ben Klemens
This paper proposes a single form for statistical models that accommodates a broad range of models,
from ordinary least squares to agent-based microsimulations. The definition makes it almost trivial to
define morphisms to transform and combine existing models to produce new models. It offers a unified
means of expressing and implementing methods that are typically given disparate treatment in the
literature, including transformations via differentiable functions, Bayesian updating, multi-level and
other types of composed models, Markov chain Monte Carlo, and several other common procedures. It
especially offers benefit to simulation-type models, because of the value in being able to build complex
models from simple parts, easily calculate robustness measures for simulation statistics and, where
appropriate, test hypotheses. Running examples will be given using Apophenia, an open-source
software library based on the model form and transformations described here.
Lamba, Harbir
How Much `Behavioral Economics' is Needed to Invalidate Equilibrium-Based Models?
Harbir Lamba
The orthodox models of economics and finance assume that systems of many agents are always in a
quasi-equilibrium state. This (conveniently) implies that the future evolution of the system is decoupled
from its past and depends only upon external influences. However, there are many human traits and
societal incentives that can cause coupling between agents' behaviours --- potentially invalidating the
averaging procedures underpinning such equilibrium models.
I shall present an agent-based framework that is general enough to be able to incorporate the main
findings of psychology and behavioral economics. It can also mimic common momentum-trading
strategies and rational herding due to perverse incentives. Allowing the strength of such non-standard
effects to increase from zero provides a means of quantifying the (in)stability of the orthodox
equilibrium/averaged solution.
Numerical simulations will be presented for both herding and momentum-trading in a financial market.
In each case the equilibrium solution loses stability and is replaced by endogenous `boom-bust'
dynamics whereby a long and gradual mispricing phase is abruptly ended by a cascading process. Note
that this instability is only apparent over a far longer (multi-year) emergent timescale that arises from
the competition between equilibrating and dis-equilibrating forces. However, it occurs at parameter
values far below simple estimates of the strength of these effects in actual financial markets (and,
importantly, the resulting fat-tailed
price change statistics are consistent with those observed in real markets).
I will also outline how similar procedures can be carried out for other standard models (such as DSGE
models in macro-economics and price-formation in micro-economics) with very similar (very negative)
consequences for the stability and validity of their solutions.
Finally, if time allows, I will present a recent mathematical result that applies to plausible network
models of agents influencing, say, each other's inflation expectations or investing behavior. In brief, the
entire network complexity reduces to a single scalar function that
can either be computed or deduced analytically.
Lawless, Bill, et al.
Unravelling the Complexity of Teams to Develop a Thermodynamics of Teams
Bill Lawless, Ira Moskowitz and Ranjeev Mittu
In its simplest terms, teams and firms operate thermodynamically far from equilibrium, requiring
sufficient free energy to offset the entropy produced (Nicolis & Prigogine, 1989). If social reality were rational, a thermodynamics of teams would have been discovered and validated decades ago. However,
teams are interdependent systems (Conant, 1976). Interdependence creates observational uncertainty
and incompleteness, and its effects are irrational.
Multitasking (MT) is an unsolved but key theoretical problem for organizing teams, organizations and
systems, including computational teams of multi-autonomous agents. But because MT involves
interdependence between the members of a team, until now it has been too difficult to conceptualize,
solve and, consequently, even address. Exceptional humans intuit most of the organizational decisions
that need to be made to self-organize a business or organization, except maybe in the case of big data.
But transferring that knowledge to another generation, to business students, or to partners has proved
difficult. Even in the case of big data, where interdependence can increase uncertainty and
incompleteness, unless scientists can construct valid mathematical models of teams and firms that
produce predictable results or constrain them, computational teams of multi-agents will always be
ineffective, inefficient, conceptually incomplete, or all three.
While individuals multitask (MT) poorly (Wickens, 1992), multitasking is the function of groups (e.g.,
Ambrose, 2001). But MT creates a state of interdependence that has been conceptually intractable
(Ahdieh, 2009). Worse, for rational models of teams, using information flow theory, Conant (1976)
concluded that interdependence is a constraint on organizational performance; Kenny et al. (1998)
calculated that not statistically removing the effects of interdependence causes overly confident
experimental results; and Kelly (1992), unable to resolve the persistent gap he had found between preferences stated before games were played and the choices made during play, speculated that the gap could not be closed and abandoned game theory in experimental social psychology. No one else has closed this gap, nor do we in this paper; instead, we account for the gap in order to exploit it.
Game theory itself has not been validated (Schweitzer et al., 2009), likely because its models cannot be designed to match reality, a point conceded by two of its strongest supporters (e.g., Rand & Nowak, 2013); yet, despite this complete disconnect with reality, Rand and Nowak (p. 413) conclude that cooperation produces the superior social good, a conclusion widely accepted by social scientists, including by Bell et al. (2012) in their recent review of human teamwork, though they had little to say about interdependence. And in the computational multi-robot community, no one appears to be close to addressing the problems caused by interdependence, or even to be aware of how conceptually difficult these problems are to solve (e.g., Schaefer, 2014).
We claim that these ideas are related, limited, and insufficient as guides to the principles of
organization. How does a team form mathematically, how does it recognize structural perfection, and
what does it do after formation?
Briefly, from Ambrose (2001), teams form to solve the problems that an arbitrary collection of
individuals performing the same actions are either ineffective at coordinating among their individual
selves to solve, or once so organized as single individual enterprises, are inefficient at being able to
multitask in competitive or hostile environments (Lawless et al., 2013). Firms form to produce a profit
(Coase, 1937); generalizing, teams or firms stabilize when they produce more benefits than costs (Coase,
1960).
But, in contrast to the conclusions drawn from rational models, especially game theory, the results have, by and large, overvalued, misunderstood and misapplied cooperation, contradicting Adam Smith’s
(1776) conclusions about the value of competition. Axelrod (1982, p. 7-8), for example, concluded that
competition reduced the social good. This poor outcome can be avoided, Axelrod argued, only when
sufficient punishment exists to discourage competition. Taking Axelrod’s advice to its logical conclusion,
we should not be surprised to see savagery used as a modern technique to govern societies by making
their subjects more cooperative (e.g., Naji, 2004).
We disagree with Axelrod and game theorists. By comparing night-time satellite photos showing the social well-being in competitive South Korea with its lack under the enforced cooperation demanded by the leaders of North Korea (Lawless, 2014, slide 10), our theory has led us to conclude that
interdependence is a valuable resource that societies facing competitive pressures exploit with MT to
self-organize teams, to solve intractable problems, to reduce corruption, and to make better decisions
(Lawless et al., 2013). The key ingredient is in using interdependence to construct centers of
competition, which we have relabeled as Nash equilibria (NE), like Google and Apple, or Democrats and
Republicans. NE generate the information that societies exploit to better organize themselves, be it for
competition among politicians, sports teams, businesses, or entertainment.
We go much deeper to understand why game theorists, with their inferior models of reality, take strong
exception to competition without justification. We believe the reason that most scientists are unable to
readily "see" the root of the MT problem and the path to its solution is that human behavior operates in
a physical reality socially reconstructed as an illusion of a rational world (Adelson, 2000). That is, the
brain has a sensorimotor system independent of vision (Rees et al., 1997), the two working together
interdependently to create what appears to be a “rational” world but is actually bistable (Lawless et al.,
2013), meaning that as an individual focuses on improving one aspect of itself, say action (skills), its
observational uncertainty increases. Zell’s (2013) meta-analysis supports our hypothesis: he found the relationship between 22 self-reported scales of ability and actual ability to be moderate at best.
Similarly, Bloom et al. (2007) found only a poor association between the views of the managers of
businesses and the actual performance of their businesses.
For our presentation, we will review the problems with current social models plus our mathematical model of a team. Further, we continue to extend and develop a new theory of teams based on the interdependence between team members, which allows us to sketch conceptually and mathematically how the tools of least entropy production now, and maximum entropy production in the future, may be deployed in a nonlinear model of teams and firms and as metrics of their performance.
Lhuillier, Ernesto
Peaks and Valleys of Videogame Companies:
An Agent-Based Model Proposal of Console Industry Dynamics
Ernesto Lhuillier
The Videogame Industry has shown an evolution from a simple manufacturer-distribution-consumer structure to complex interactions between multiple actors, such as independent developer communities, and a growth of consumer diversity. These relationships between actors respond to multiple
changes in consumer demands, industry specialization and technology development. Even limiting
ourselves to the home consoles market, we may perceive a complex adaptive system with
interdependent relationships and the emergence of a far from robust business. The following study will describe the historical dynamics of the industrial organization of the console market, which sometimes may be dominated by very few firms with monopsony and/or monopoly power. Through Joyce Marcus'
(1998) Dynamic Model of ancient state formation framework it is possible to address the complex
interactions and describe fluctuations of several parameters such as gross sales (e.g. top games,
consoles, etc.), game licenses, software developer alliances or consumers' demographics. This approach
allows us to assess the industry complexity in terms of organization and size. After a fair description of
historical trends an Agent Based Model is proposed to understand such behaviors and complex
interdependencies. The model considers five critical actors (manufacturers, software developers, media
agents, distributors and customer/consumers) displaying their relationships through a network. The
dynamics between these five actors, with their implicit heterogeneity, allow us to understand relevant roles and actions under specific scenarios, especially the diversity of customer/consumer habits and preferences. A specific survey of these dynamics is made in a case study analysis of Nintendo's loss of its monopolistic role to Sega.
Liu, Xia and Hou, Yingjian
Modelling Customer Churn with Data Mining and Big Data in Marketing
Xia Liu and Yingjian Hou
Recent MSI research priorities call for applying advanced data mining methods to generate marketing
insights from big data sets. As a response to the call, this paper uses random forest to build algorithmic
models that successfully predict when customers churn. It is costly to attract and acquire new customers,
so companies have great financial incentives to take appropriate marketing actions before churn occurs.
Marketing researchers have mainly presented theory-driven stochastic churn models that depend heavily on mixture distributions, such as Pareto/NBD, Beta-Geometric/NBD, and Gamma-Poisson. In contrast, random forest originated in the fields of computer science and statistical learning. It is a
tree-based ensemble classifier that randomly perturbs the training data set for tree building and
aggregates results from the trees. Random forest has superior predictive power compared to the other
learning methods. Since prediction accuracy is critical in customer churn analysis, random forest is an
appealing tool.
Two data sets with rich behavioral information are used in this study: the first one contains 2,824,984 historic mailing records, 226,129 order transactions and detailed information on 463 offers; the second one
has 45,000 observations with 20 predictor variables. For the data mining tasks in this paper, the authors
follow The Cross Industry Standard Process for Data Mining (CRISP-DM). Data preprocessing turns out to
be complex and time consuming for the bigger data set. Because of the typical extremely low response
rates in direct marketing, the data are highly skewed. To resolve this issue, down-sampling is adopted to
improve the performance of random forest. The result shows that down-sampling significantly increases
the prediction rate for true positive cases. As the number of decision trees grows quickly in random
forests, computer memory issues are encountered. The authors implement the following strategy:
during the prediction process, the test data are split into smaller subsets, the trained models are run on
the subsets separately, and prediction results are then aggregated. The models are evaluated with
receiver operating characteristics (ROC) curves.
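A condensed sketch of this pipeline (down-sampling, random forest training, chunked scoring, and ROC evaluation) is given below using scikit-learn; it illustrates the general strategy rather than the authors' code, and assumes churn is coded as a 0/1 label:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def downsample(X, y, rng=np.random.default_rng(0)):
    """Down-sample the majority (non-churn) class to the size of the minority class."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
    return X[keep], y[keep]

def fit_and_score(X_train, y_train, X_test, y_test, chunk_size=10_000):
    Xb, yb = downsample(X_train, y_train)
    model = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    model.fit(Xb, yb)
    # Score the test data in smaller chunks to limit memory use, then aggregate.
    scores = np.concatenate([
        model.predict_proba(X_test[i:i + chunk_size])[:, 1]
        for i in range(0, len(X_test), chunk_size)
    ])
    return roc_auc_score(y_test, scores)
```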
There are three important findings. First, random forest, as an ensemble data mining classifier, performs
well in customer churn prediction and is therefore a viable alternative to stochastic models. The most
important predictor variables chosen by random forest are the same as those selected by the
probabilistic models. The variables with the highest predictive powers are recency, frequency and
monetary value (RFM). Second, the increase in prediction accuracy comes with some cost. Compared
with its stochastic counterparts, the model generated by random forest tends to become complex
quickly and is therefore not as easy to interpret. Third, since the application of data mining in marketing
research is still relatively new, the generic framework for using and evaluating data mining
methodologies can be useful to marketing researchers who study customer lifetime value and RFM
models. Those findings make valuable contributions to the current marketing literature.
Mizuno, Makoto
Empirical Agent-Based Modeling for Customer Portfolio Management:
A Case of a Regional B2B
Makoto Mizuno
Recently, a variety of information on individual customers, from financial measures (e.g., profit from
them) to subjective measures (e.g., customer satisfaction: CS), is increasingly available, in particular for
B2B service providers. Some service providers are also collecting employee-side information (e.g.,
employee satisfaction: ES) linked to each customer. How to use these multiple sources of information in an integrative way is a key concern for advancing service management.
In service research, the school of Service-Dominant Logic has argued that values are co-created in service processes by both service providers and customers (Vargo & Lusch 2004); the school of Service-Profit Chain has asserted that customer/employee satisfaction and corporate profits are compatible (Heskett et al. 1998). Yet it is controversial whether or not these propositions are empirically supported.
The integrative use of multiple information could clarify in what conditions their predictions are realized.
To make the analysis more comprehensive, we should account for interactions between customers. These are often omitted due to the difficulty of handling them, though their importance is well recognized by service researchers (Libai et al. 2010). Agent-based modeling is a promising methodology to handle this type of phenomenon (Rand and Rust 2011). If data indicating the likelihood of interactions between customers are available, agent-based modeling can be built on a more or less empirical foundation.
In this study, we propose a hybrid approach that combines agent-based modeling and traditional
statistical methods used in marketing science. This enables us to respect the fruitfulness of existing research and to give the model some empirical foundation (Rand and Rust 2011). Specifically, our model owes to
Rust, Lemon and Zeithaml (2004) in quantifying the impact of service activities on customer equity; at
the same time, it owes to the customer portfolio model of Homburg, Steiner and Totzek (2009) in
relating customer profiles to profit dynamics.
We obtained a set of data from a regional B2B finance service group in Japan: the individual customers’
profit stream records and their responses to customer/employee surveys. This gives us the empirical foundation for each agent’s behavior. Also, we could use actual transaction data between customers to infer a possible network conveying influence among customers. The dynamics of customer profit are modeled as a Markov chain process; the initial profit levels are stochastically predicted based on
customer satisfaction, etc., via an ordinal logit model.
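A minimal sketch of such a profit-dynamics simulation is shown below; the profit levels, initial probabilities, and transition matrix are hypothetical placeholders (in the study the initial-level probabilities would come from the ordinal logit on satisfaction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three ordered profit levels (low, mid, high). The initial
# level probabilities would come from an ordinal logit on satisfaction scores;
# here they are placeholder numbers, as is the transition matrix.
initial_probs = np.array([0.5, 0.3, 0.2])
transition = np.array([[0.70, 0.25, 0.05],
                       [0.20, 0.60, 0.20],
                       [0.05, 0.25, 0.70]])

def simulate_profit_path(n_periods=12):
    """Simulate one customer's profit-level path as a Markov chain."""
    state = rng.choice(3, p=initial_probs)
    path = [state]
    for _ in range(n_periods - 1):
        state = rng.choice(3, p=transition[state])
        path.append(state)
    return path

print(simulate_profit_path())
```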
To incorporate interactions between customers into the model, we simulate the contagion of customer
satisfaction over customers connected by a given network. The marginal effects of each service activity
can be quantified via sensitivity analyses. Moreover, we compare the performance of different customer
prioritization policies and different matching policies between employees and customers. Finally, we
discuss the limitations and further development of this study.
Oh, Young Joon
Simulating Agent and Link Diversity as a Source of
Innovation Network Sustainability – Agent-Based Simulation Approach
Young Joon Oh
Innovation is a main engine for economic growth. It requires collaboration among firms, universities,
governments, etc. In this respect, innovation networks are increasingly recognized as an effective tool
for successful R&D processes in the hi-tech industry. Due to the positive externality of innovation
networks, policymakers want to establish successful innovation networks in their areas. The intent is to
create sustainable and resilient innovation networks that produce knowledge, but all too often their
efforts have resulted in failure. So, the question arises as to what is the key to building a sustainable
innovation network. This paper seeks to propose possible keys for sustainable innovation networks
using a simulation framework.
Methodology: To capture the dynamics of the innovative agent behavior, this paper uses an agent-based
SKIN model (Simulating Knowledge Dynamics in Innovation Networks). In the SKIN model, the agents
represent firms who try to sell their innovative products to other agents. The firms need to enhance their
innovation performance to survive in the market. To improve their inherent knowledge, they can choose
some strategies for learning and adaptation such as incremental or radical learning, cooperation and
networking. The SKIN model shows the dynamics of the behavior of strategic agents. However, it is
difficult for the existing SKIN model to capture link dynamics of the network itself. Here, I modify the
SKIN model to capture dynamics of both agents and links.
Findings: As a result, the modified SKIN model uncovers diversity as a source of innovation network sustainability. There are two types of diversity in the model. First, if an innovation network has agents possessing diverse knowledge, the network turns out to be more sustainable. Second, if the network has topological diversity (i.e. a “small world”), a sustainable innovation network is established. Intuitively, we can understand the results in the following way: once a network produces successful outcomes, the agents in the network become more homogeneous, and then networking is less profitable. The presence of
agents with diverse knowledge, however, allows for the creation of fresh innovation. The topological diversity
produces the beneficial effect by adding more links and arranging them more efficiently, so knowledge
can be easily transferred in the network. As a consequence, both agent and link diversity are key factors
to build a sustainable innovation network.
Practical implications: To make a successful innovation network, it is necessary to create circumstances that foster knowledge diversity. Thus, it is beneficial if an innovation network contains a university and a
research institution as its agent. The outcome could also advise policymakers to nurture and streamline
coordinative actions that aim to maximize innovation by connecting enterprises and academia.
Value: In sum, this study models the role of increasing diversity of agents and their connections in
maximizing network innovation.
Rust, Roland, et al.
A Model of Diffusion of Fashion and Cultural Fads
Roland Rust, William Rand and Anamaria Berea
The theory of the ingroup and the outgroup is well known in the social sciences and has been
researched by sociologists and social psychologists, both experimentally and empirically, on numerous occasions. Our research explores the ingroup/outgroup dynamic in the diffusion of fashion and cultural
fads. We hypothesize that fashion and cultural fads are transient between the two groups and that the
members of both groups prefer to be seen as being part of the ingroup. We test this hypothesis
theoretically and empirically by building an analytical/theoretical model and an agent-based simulation.
We show that in an endogenous system where the ingroup and outgroup behaviors only depend on
each other, the adoption curves are highly sensitive to the original sizes of the groups and the
preference parameters of the system. We also show that brand preference heterogeneity in the ingroup
does not change adoption significantly.
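A toy sketch of the kind of coupled two-group dynamic described here is given below; the functional forms and parameter values are assumptions made for illustration, not the paper's analytical model:

```python
def simulate_fad(n_in=200, n_out=800, alpha=0.3, beta=0.15, gamma=0.4, steps=60):
    """Toy two-group fad dynamic (assumed functional forms, not the paper's model).
    Outgroup members imitate the ingroup; ingroup members abandon the fad as the
    outgroup adopts it, so the fad is transient between the groups."""
    a_in, a_out = 0.05, 0.0                           # adoption shares in each group
    totals = []
    for _ in range(steps):
        adopt_in = alpha * (1 - a_in) * (1 - a_out)   # adopt while still exclusive
        drop_in = gamma * a_in * a_out                # abandon once the outgroup copies
        adopt_out = beta * (1 - a_out) * a_in         # imitation of the ingroup
        a_in = min(max(a_in + adopt_in - drop_in, 0.0), 1.0)
        a_out = min(max(a_out + adopt_out, 0.0), 1.0)
        totals.append(n_in * a_in + n_out * a_out)    # total adopters over time
    return totals
```

Varying n_in, n_out and the three rate parameters traces out how sensitive the adoption curve is to initial group sizes and preferences.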
Sun, Hechao, et al.
Monitoring Dynamic Patterns of Information Diffusion in Social Media:
A Multiresolution Approach with Twitter Data
Hechao Sun, Bill Rand and Shawn Mankad
With the increasing popularity of social media, understanding the diffusion of information on this new medium is becoming increasingly important. Social media has significantly changed marketing,
affected the exchange of ideas, and even assisted in social revolution. Thus, it is important to
understand dynamics and mechanisms of information flow through popular social media, such as
Twitter, Facebook, etc. However, most studies of social media data focus on static relationships, such as the follower-following network on Twitter, which is not necessarily well correlated with actual patterns of
conversation. Therefore, it is necessary to move beyond static network representations; revealing
dynamic patterns of the network is necessary for accurately understanding information diffusion in this
medium. In our work, we perform a multiresolution analysis with Twitter data, where we examine, using a
variety of different time windows, the dynamics and properties of mention and retweet networks
around several major news events. We find that network properties stabilize at larger sampling
resolutions, from which we draw new insights for viral marketing and information diffusion.
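A minimal sketch of this kind of multiresolution construction is shown below, using pandas and networkx; the column names and windows are hypothetical assumptions, not the paper's actual schema:

```python
import pandas as pd
import networkx as nx

def network_properties_by_window(tweets: pd.DataFrame, window: str = "6H"):
    """Build a retweet network per time window and report basic properties.
    `tweets` is assumed to have columns 'time', 'user', 'retweeted_user'
    (a hypothetical schema; the paper's data pipeline is not specified)."""
    rows = []
    for stamp, chunk in tweets.set_index("time").groupby(pd.Grouper(freq=window)):
        G = nx.from_pandas_edgelist(chunk, "user", "retweeted_user",
                                    create_using=nx.DiGraph)
        if G.number_of_nodes() == 0:
            continue
        rows.append({"window_start": stamp,
                     "nodes": G.number_of_nodes(),
                     "edges": G.number_of_edges(),
                     "density": nx.density(G)})
    return pd.DataFrame(rows)

# Re-running with window = "1H", "6H", "1D", ... gives the multiresolution view.
```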
Thomas, Russell
A Topological View of Radical Innovation
Russell Thomas
This research formalizes a Topological View on radical innovation processes. Nelson & Winter
summarize it this way: “The topography of innovation determines what possibilities can be seen from
what vantage points, how hard it is to get from one spot in the space of possibilities to another, and so
forth.” While the Topological View applies to both incremental and radical innovation, its benefits as a
theoretical construct stand out most clearly in the context of radical innovation: 1) it is capable of
modeling emergent, multi-level, and co-evolutionary spaces; 2) it supports modeling both ontological
mechanisms at a meso-level and also cognitive, social, and operational mechanisms at a micro-level; 3)
it can support rigorous theoretical and empirical research, including computational simulations; and
finally 4) it is parsimonious.
By contrast, consider an approach associated with the Carnegie School. Strategic change and innovation
are modeled as a process of search, exploration or adaptation in an a priori space of possibilities (e.g.
‘fitness landscape’) that is pre-specified and fully characterized by global dimensions or state variables.
However, for radical innovation this approach breaks down because the full space of possible
innovations is not pre-specifiable and it is intrinsically emergent. Innovators face ontological uncertainty -- the desired future state may not even exist within current mental models, ‘dimensions’, or ‘variables’.
In this setting, thinking and doing are reflexive, creative and coevolutionary: how innovators think
shapes what becomes possible and what becomes possible shapes how innovators think. This gives rise
to a paradox of indeterminate agency -- innovators are largely ‘blind’ in their conceptions and actions,
yet they must think and act to make change happen.
The Topological View draws on several lines of research and precursors, including Theoretical Biology,
Evolutionary Economics, Organization Science, Sociology of Institutions, and Design Science.
An overview of the Topological View will be presented. Firms (or institutions, more broadly) are viewed
functionally and are modeled as bundles of operational resources and capabilities situated in an
environmental context. The space of possibilities is organized locally according to proximity and
neighborhood relations, and not through global dimensions or state variables. ‘Proximity’ is defined
both operationally (i.e. how many changes are needed to transform one into another), and cognitively
(i.e. degree of similarity or difference from the view point of agents). In comparing any two firms, they
would have the same position in topological space if they had the same strategic options, the same
capabilities, and were in essentially the same environment. Far-distant points are operationally and conceptually very different and may even be incommensurate or unreachable.
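A minimal sketch of how such proximities could be formalized is shown below, representing firms as capability bundles; the definitions (symmetric difference and Jaccard overlap) are illustrative assumptions, not the author's formalism:

```python
def operational_distance(firm_a: frozenset, firm_b: frozenset) -> int:
    """Operational proximity: number of capability changes (additions/removals)
    needed to transform one bundle into the other."""
    return len(firm_a ^ firm_b)

def cognitive_similarity(firm_a: frozenset, firm_b: frozenset) -> float:
    """Cognitive proximity: degree of similarity from an agent's viewpoint,
    sketched here as Jaccard overlap of the capability bundles."""
    union = firm_a | firm_b
    return len(firm_a & firm_b) / len(union) if union else 1.0

# Hypothetical capability bundles (placeholders, not the paper's cases).
biotech = frozenset({"dna_synthesis", "sequencing", "lab_automation"})
software = frozenset({"lab_automation", "data_analysis", "cloud_platform"})
print(operational_distance(biotech, software), cognitive_similarity(biotech, software))
```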
Using the Topological View, the paradox of indeterminate agency can be defined formally: the desired
future state may lie in a region of possibility space that doesn’t yet exist, but the only way to bring it into
existence is to navigate toward that region, thereby creating new possibilities. Crucially, the Topological
View allows formal modeling of how innovators cope with this paradox, including cycles of social
learning, knowledge artifacts, and supporting institutions. These will be illustrated using two
contemporary cases of radical institutional innovation and institutional entrepreneurship -- synthetic
biology and cyber security.
Finally, early results from computational modeling will be presented. Drawing on Holland’s Dynamic
Generated System (DGS) and Padgett & Powell’s Autocatalysis and Multi-level Network Folding Theory,
the computational architecture includes three elements: 1) a construct for possibility space; 2) a Mechanism-based Model (MBM) for coupling micro- and meso-level dynamics; and 3) an Agent-based Model (ABM) of innovators that includes generative conceptual schemata. The formal
model is being implemented computationally to demonstrate its feasibility and usefulness.
Veetil, Vipin
A Theory of Firms and Markets
Vipin Veetil
In this paper I develop a theory of why some economic activities are organized within firms and others
through markets. A basic economic problem is how to allocate resources in a system where knowledge
is widely dispersed (Hayek 1945, Hurwicz 1973, Myerson 2008). In some areas of economic activity
allocation happens through the `invisible hand' of markets, whereas in other areas one sees the visible
hands of managers. Markets and firms are different mechanisms for using dispersed knowledge in
allocating resources. In a firm, dispersed knowledge is first communicated to an entrepreneur-coordinator who then makes allocation decisions (Coase 1937). In a market, knowledge is not
communicated to a central authority, rather resources are allocated through a series of exchange
transactions. Though a firm does not have to incur these transaction costs, the allocations made by the
entrepreneur-coordinator will be only as good as the quality of knowledge made available to her. Whether an activity happens within a firm or through a market depends on the cost of centralizing the knowledge concerned.
Differences in economic organization are a reflection of the differences in the `nature' of knowledge
across economic activities. The organization of an economy into markets and firms of different sizes is
not an arbitrary mix, rather it reflects the underlying knowledge problems. In contrast to other
explanations of why firms exist like the `team production' theory (Alchian & Demsetz 1972, Axtell 1999)
and the `asset-specificity' theory (Williamson 1975), the theory developed here provides a motivation
for existence of firms even in a world without moral hazard and adverse selection problems. My theory
explains why there is a tendency to replace markets with planning during wars (Milward 1979), why
there is redundancy in the way knowledge is collected within firms (Feldman & March 1981), and why
barber shops are smaller than Walmart. It predicts that, ceteris paribus, firms will grow larger relative to markets as improvements in technology lower the cost of collecting knowledge. The theory may be
extended to understand the boundaries of non-market organizations like governments, churches and
other non-profit entities.
Wang, Chen
Improving Donor Campaigns around Crises with Twitter-Analytics
Chen Wang
Because of the rising popularity of social media, organizations of all types, including not-for-profits,
monitor the associated data streams for improved prediction and assessment of market conditions.
However, the size and dynamics of the data from popular social media platforms lead to challenging
analysis environments, where observations and features vary in both time and space. For instance, a
communication posted to a social media platform can vary in its physical location or origin, time of
arrival, and content. In this work, we discuss the integration of two geo-located and time varying
datasets that are used to study the relationship between Twitter usage around crisis events, like
hurricanes, and donation patterns to a major nonprofit organization. By combining visualization
techniques, time-series forecasting, and clustering algorithms, we develop insights into how a nonprofit
organization could utilize Twitter usage patterns to improve targeting of likely donors.
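A minimal sketch of this kind of integration, aggregating both streams by region and week and relating tweet volume to later donations, might look like the following (the column names and lag are hypothetical assumptions):

```python
import pandas as pd

def lagged_correlation(tweets: pd.DataFrame, donations: pd.DataFrame, lag_weeks: int = 1):
    """Align geo-located, time-varying tweet and donation counts and compute the
    correlation between tweet volume and donations `lag_weeks` later. Assumed
    (hypothetical) schemas: tweets['time','region'], donations['time','region','amount']."""
    t = (tweets.set_index("time").groupby("region")
               .resample("W").size().rename("tweet_count").reset_index())
    d = (donations.set_index("time").groupby("region")
                  .resample("W")["amount"].sum().reset_index())
    t["time"] = t["time"] + pd.Timedelta(weeks=lag_weeks)   # shift tweets forward
    merged = t.merge(d, on=["region", "time"], how="inner")
    return merged["tweet_count"].corr(merged["amount"])
```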
Zhang, Haifeng, et al.
Predicting Rooftop Solar Adoption Using Agent-based Modeling
Haifeng Zhang, et al.
We present a novel agent-based modeling methodology to predict rooftop solar adoptions in the
residential energy market. Two datasets, i.e., the California Solar Initiative (CSI) dataset and California
property assessor dataset, were merged and used to generate historical observations, from which the
agent-based model was developed entirely from the ground up. In particular, we first used the data to calibrate models of individual adoption behavior. Computing net present value was particularly
challenging, as it requires data about energy utilization (obtained only for a subset of past adopters),
system sizing (again, available only for adopters), and system costs (also only available for adopters, and
only at the time of adoption).
We used a linear model to estimate system size using assessor characteristics, and we estimate energy utilization for non-adopters similarly, but also accounting for the differences between the adopter and non-adopter populations. Finally, we estimate system costs in part by capturing learning-by-doing effects.
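A minimal sketch of the system-size step, fitting a linear model on past adopters and imputing sizes for non-adopters, is shown below; the feature names and numbers are hypothetical placeholders, not the CSI or assessor data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical assessor features (placeholders): square footage, lot size,
# assessed value. In the paper these come from the California assessor dataset.
X_adopters = np.array([[1800, 6000, 450_000],
                       [2400, 7500, 620_000],
                       [1500, 5200, 380_000]], dtype=float)
kw_installed = np.array([4.2, 6.1, 3.5])          # observed sizes for past adopters

size_model = LinearRegression().fit(X_adopters, kw_installed)

# Impute a plausible system size for a non-adopter household from its assessor data.
X_nonadopter = np.array([[2000, 6800, 500_000]], dtype=float)
print(size_model.predict(X_nonadopter))
```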
Putting all this data together, we developed a model of individual adoption likelihood, which we
estimated from the individual-level CSI data. We compared our final model to a simple baseline model
which only included a measure of net present value and a simple measure of peer effects.
Our results demonstrate two things. First, that our model is able to forecast actual adoption quite well
(calibrated on the first 48 months in the data, and evaluated through 72 months), and second, that it
does so more effectively than the simple baseline. The results are instructive. First, we find that we can
improve long-term adoption, albeit slightly, as compared to the implemented CSI scheme. As expected,
this can be improved further as greater budget is allocated to the incentive program. However, our
results demonstrate that we can do far better in stimulating adoption by optimally giving away free
systems (in a very limited space of such budget optimization options; presumably, one could do much
better yet by considering more complex spatial-temporal policies of this nature).