Relational model-base structures: underpinnings for
decision-centered management systems
John W. Sutherland [Virginia Commonwealth University]
Elizabeth Baker [Virginia Military Institute]
ABSTRACT
Relational model-base structures can provide the technical foundations for the development of decision-driven (vs. conventional database-dependent) management support systems. The pragmatic import of
decision-driven management support systems, and so also the core rationale for relational model-base
structures, stems most obviously from the increasingly heavy dependency on mathematical models to answer
for the analytical aspects of administration. Rather more significant, however, is that they can accommodate
a novel class of models now starting to appear in both the commercial and governmental sectors: computer
programs acting as Administrative Agents. Like conventional decision aids, administrative agents are
anchored on ordinary algorithmic constructs. But instead of being bound in service to a human manager,
they themselves will be invested with some measure of managerial decision authority. The greater
significance of relational model-base structures is that, in allowing management systems to accommodate
Administrative Agents, they also accommodate the advance of automation into the administrative arena.
Keywords: Decision systems
Management Support Systems
Real-time systems
INTRODUCTION
Relational model-base structures have their conceptual origin in one of the grounding
propositions of the management science movement: the contention that enterprises can be
meaningfully comprehended as collections of decision points rather than merely as collections of
people. This puts the focus on the analytical aspects of administration in contrast to the socio-behavioral thrust of mainstream management theory or the morphological matters that sit at the
center of concern for organization theorists. So, as elaborated in the first section of this paper, the
general mission for relational model-base structures is to capture and encode, for purposes of
computer apprehension/execution, the assemblage of interrelated administrative decision
requirements entailed in some enterprise. Note that, as the term is used here, an enterprise may
refer either to an activity undertaken entirely within the confines of a singular organization, or to
an inter-organizational or collective endeavor of some kind.
The portrait of an enterprise as conveyed by a relational model-base structure would then show
which decisions are linked with which others, in what ways, at some particular point in time. In
this way, relational model-base structures could come to serve as the substantive foundations for
decision-driven management support systems. These would stand as practical expressions of the
management science emphasis on analytical administrative requirements. They’d stand also as
natural complements to the relational database-centered designs that reflect the mainstream
management information system community’s long preoccupation with the informational aspects
of administration (business administration, in the main).
Much like archetypal MIS of fifty years ago, the systems designs that are today being produced
under the ERP (Enterprise Resource Planning) banner tend to be strongly database dependent,
long on information technology and short on the decision technology side; it’s not unreasonable,
in fact, to think of ERP type products as merely relational database management systems writ
large. Thus, while relational model-base structures are conceived with managerial decision
requirements foremost in mind, they remain something of an afterthought for ERP constructs. For
the latter, whatever an enterprise needs in the way of instances of decision technology must
generally be imported from outside vendors as plug-in programs —sometimes imperiously styled as
Business Intelligence Modules— that must subsequently be grafted onto the underlying database.
This points up a key practical distinction between relational model-base and ERP inspired
management support systems. The former are designed to effect direct functional interconnections
among an enterprise’s decision-making apparatus. In ERP systems, in contrast, linkages among
analytical instruments are typically indirect, mediated by middleware operating along the spinal
database structure (in pursuit of what’s referred to in commercial data processing circles as
Interoperability). This indirectness is not a defect of ERP products; it’s just a natural reflection of
their functional focus on data integration. For relational model-base structures, on the other hand,
the focus is firmly fixed on decision interdependencies.
The practical pertinence of decision-driven vs. database centered management systems stems
from the increasingly central role played by formal models in the context of modern managerial
practice. This is a trend that’s been underway for some time. Orman (1995), for example, more
than a decade ago, wrote that “Modern organizations rely extensively on computer-based
mathematical models designed to solve various statistical, optimization, and decision making
problems.” He then goes on to note a deficiency that still remains largely unaddressed: “The
correspondence between software modules and organizational tasks on the one hand, and models
and model components on the other, still needs to be established.” This is a deficiency that
relational model-base structures must necessarily attempt to attack head-on.
So far as the case for relational model-base structures is concerned, what matters is not just
that recourse to mathematical models is now so common, nor even that they are being asked to
carry a successively heavier share of the analytical burdens devolving on administrative
authorities. What matters more is that relational model-base anchored management systems should
be able to accommodate a fundamentally new class of models that are starting to claim an
increasingly substantial share of managerial authority: computer-based constructs operating as de
facto administrative functionaries!
Such models may be thought to herald the advance of automation into the administrative arena.
What’s meant here is automation in the serious sense, not just the mechanization of menial clerical
chores or arithmetically tedious counting/accounting applications, or even the more technically
interesting statistical analysis activities (e.g., Exploratory Factor Analysis and Data Mining
(Witten and Frank, 2005)) undertaken within the confines of the extensive data sets qua Data
Warehouses supported by ERP products. That is, rather than being put in service as passive
analytical assistants to some administrative authority, these models will themselves have been
invested with operative (if not always titular) authority over some administrative application.
To put these constructs on a more familiar footing, they can be viewed as a variety of decision
agent (Phillips-Wren and Forgionne, 2002; Klusch, et. al, 2002; Gmytrasiewicz and Parsons,
2002). Hence their designation here as administrative agents. In echo of what seems to be the
pattern of employments for decision agents in general (Davenport and Harris, 2005; Noh and
Gmytrasiewicz, 2005), it’s assumed that the applications that are most likely to wind up being
assigned to administrative agents are the sort that people are least likely to welcome, i.e., tasks that
are long on computational requirements and short on leisure. In so far as computational
requirements are concerned, however, this paper presumes a practical stricture on the domain of
feasible authority for administrative agents. The only sorts of administrative decision situations for
which they can be held responsible will be those that can be resolved by taking recourse to: (a).
mathematical optimization methods that can deliver deterministic conclusions, or (b). statistical
inference based instruments yielding empirically-predicated probabilistic outcomes. This has the
practical effect of confining administrative agents to applications for which an entirely objective
approach is entirely appropriate. But the residual reach of administrative agents is still broad
enough that they may be seen to presage a perhaps seismic shift in the balance of managerial
power from people to computer programs. The practical effect of this is that attempts to deploy
administrative agents may be as apt to run afoul of cultural obstacles as technical challenges. It’s
only the latter that concern us here. Paramount among technical problems is the issue of how to
interconnect decision models to obtain functionally-coherent metamodels that could perform
complex, multifaceted administrative tasks rather than being confined to singular, isolated
decision instances. While some clues as to how this problem might be solved are available from
post-mortems on projects to implement multi-agent networks (c.f., Technical Report WS-04-06
put out by the American Association for Artificial Intelligence in 2004), the second section of this
paper will introduce an approach whose elements are mostly unique to relational model-base
structures. As will be argued there in some detail, relational model-base structures may be seen as
opening up a line of access to a new class of initiatives to which enterprise managements might
turn in an attempt to obtain/maintain adequately high levels of lateral integration. Beyond this lies
the more interesting prospect of relational model-base embedded provisions for actively managing
—executing, enforcing or maybe just experimentally manipulating— conditions in the
intersections among administrative agents, and so moving the automation of management one step
further along.
The remaining section of this paper then turns to the second technical challenge that relational
model-base structures are intended to address: how they can expedite administrative decision
exercises that must be carried out in real-time. This echoes the argument that the applications that
administrative agents are most likely to be assigned will be characterized by high workloads
coupled with reasonably rigorous response-related or throughput requirements. As a yet broader
rationale, the real-time orientation of relational model-base structures is an acknowledgement of
the general quickening of interest in moving away from traditional planning-based managerial
platforms in favor of more dynamic-adaptive regimes.
RELATIONAL MODEL-BASE BASICS
In echo of their management science origins, relational model-base structures conceive of
enterprises as assemblages of problems to be solved that, in their turn, give rise to decision
requirements. Because their focus is on administrative decision applications, they consider only
the routine problems, or recurrent vs. episodic decision requirements, that might arise within the
confines of some enterprise. Enterprises can appear in either of two guises. An enterprise may,
firstly, be an effort (project, line-of-business, mission) undertaken entirely within the confines of a
single organization. In their second guise, enterprises are collective endeavors involving two or
more otherwise autonomous organizations acting in concert (as with joint-ventures between
business firms or, towards the grander end, international undertakings). In either case, any
enterprise (Si) could be deconstructed as a simple mapping, Si ≡ D x E, where D is a set of
decision requirements distributed in some way over a collection of entities, E. An enterprise might
then be represented in tableau form as shown in Table 1.
Table 1: Decision Tasking Tableau (tasks K1…Kn down the rows; organizational entities E1…Em across the columns)

Si | E1                      | E2                      | … | Em
K1 | D11 = (d1,1,1…d1,1,i)   | D12 = (d1,2,1…d1,2,i)   | … | D1m = (d1,m,1…d1,m,i)
K2 | D21 = (d2,1,1…d2,1,i)   | D22 = (d2,2,1…d2,2,i)   | … | D2m = (d2,m,1…d2,m,i)
K3 | D31 = (d3,1,1…d3,1,i)   | D32 = (d3,2,1…d3,2,i)   | … | D3m = (d3,m,1…d3,m,i)
Kn | Dn1 = (dn,1,1…dn,1,i)   | Dn2 = (dn,2,1…dn,2,i)   | … | Dnm = (dn,m,1…dn,m,i)
The elementary entry in a decision tasking table is a depiction (in the form of what will shortly be
defined as a proper model) of a singular decision instance. In Table 1, a decision instance appears
as some dk,e,p, which identifies it as the pth member of the array of decision requirements, pertinent
to the kth task, consigned to the eth entity. Hereafter, for simplicity’s sake, decision instances will
be fixed with a single subscript, so that dk,e,p becomes dx. Dnm thus specifies the set of decision
instances falling to the mth entity pertaining to the nth task.
The entities (E) heading the columns of a decision tasking tableau need not have any
correspondence with the components that might show up on formal organization charts. Some
disconnect, in fact, may be desirable even when it’s not strictly necessary (Rogers & Blenko,
2006), if only to preclude the possibility of the prevailing organizational structure being imposed
as an implacable a priori constraint on those attempting to design management support systems.
But distinguishing entities from ordinary organizational components will usually be necessary. For
one thing, while the components covered by organizational charts constitute centers of
administrative authority, the entities employed in relational model-base structures are meant to
represent repositories of decision responsibilities.
Entities, then, can appear in a considerable variety of forms. They may, certainly, be ordinary
organizational units (the production department of a manufacturing firm, say) or conventional line
management functionaries (department heads). But the term entity may also extend to cover trans-institutional bodies (the U.N. Security Council, for instance), collective executives, steering
committees or contingency management cadres (technical specialists that displace ordinary
administrative officials under certain circumstances, as with the United States Forest Service
transfer of command from bureaucratic to technocratic functionaries in the face of major
conflagrations). But the entities of most relevance to relational model-base structures will be
neither institutional components nor people, but that novel class of computer-constructs spoken of
earlier: models designed to function as decision-makers rather than in their usual capacity of
decision aids or decision support devices.
As for tasks, they are used as conveyances for integral enterprise management activities,
‘integral’ in the sense that they encompass two or more identifiable elementary decision instances.
If, say, the task at hand was Inventory Management, it might include this quartet of subordinate
decisions: d1: What to reorder? d2: From whom to reorder? d3: How much to reorder? and d4:
When to reorder? This task could then be denoted as Kj ⊇ {dx = (d1 … d4)}, meaning that the
indicated task is taken to subsume (per ⊇) the set of decision instances included in {dx}.
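To make the tasking formalism concrete, the tableau of Table 1 can be rendered as a mapping from (task, entity) pairs to sets of decision instances. The sketch below is purely illustrative; the Python rendering, the class names and the "InventoryAgent" entity are assumptions introduced here, not part of the formalism.

```
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DecisionInstance:
    """A single decision requirement d_x (identifiers are hypothetical)."""
    ident: str          # e.g., "d1"
    description: str    # e.g., "What to reorder?"

@dataclass
class DecisionTaskingTableau:
    """S_i held as cells keyed by (task K_k, entity E_e) -> set of decision instances D_ke."""
    cells: dict = field(default_factory=dict)

    def assign(self, task: str, entity: str, decision: DecisionInstance) -> None:
        self.cells.setdefault((task, entity), set()).add(decision)

    def decisions_for(self, task: str, entity: str) -> set:
        """D_ke: the decision instances pertinent to task k that are consigned to entity e."""
        return self.cells.get((task, entity), set())

# Usage: the inventory-management quartet from the text, consigned to a
# hypothetical "InventoryAgent" entity.
tableau = DecisionTaskingTableau()
for ident, text in [("d1", "What to reorder?"), ("d2", "From whom to reorder?"),
                    ("d3", "How much to reorder?"), ("d4", "When to reorder?")]:
    tableau.assign("InventoryManagement", "InventoryAgent", DecisionInstance(ident, text))
```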
Operative Decision Models
Decision instances are themselves higher-order constructs. Each is expected to be
anchored on a fundamental decision function of the form F{V}dx or, more simply, F(Vx), where V
is a set of variables (decision factors) thought to have some bearing on the decision at hand, and F
prescribes a set of computational operations over the members of V in pursuit of some outcome
objective (goal, decision criterion). Thus, with respect to the first of the above-cited decision
instances subsumed under the inventory management task, F(d1) would most likely be a
multivariate, regression-type demand projection function, perhaps encoded as a Regression Tree
(Breiman, et. al., 1984). The usual approach to the second decision would be to have F(d2) turn out
to be a multicriteria supplier-selection function that would deliver optima-at-the-margin
resolutions over considerations such as cost, quality, responsiveness and reliability (Ostebee,
2000). As for d3, a common tactic would have F(d3) set up to deliver an EOQ solution that would,
at the same time, provide an answer for d4 by prescribing the intervals at which replenishment
orders should be placed (Greene, 1997).
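By way of illustration, F(d3) and F(d4) from the inventory example can be written as ordinary computational functions. The sketch below assumes the textbook EOQ inputs (annual demand, ordering cost, holding cost); the parameter values are hypothetical.

```
import math

def f_d3_eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """F(d3): economic order quantity, Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

def f_d4_reorder_interval(annual_demand: float, q_star: float, days_per_year: int = 365) -> float:
    """F(d4): the replenishment interval (in days) implied by the EOQ delivered for d3."""
    return days_per_year * q_star / annual_demand

q = f_d3_eoq(annual_demand=12_000, order_cost=75.0, holding_cost=2.5)
interval = f_d4_reorder_interval(12_000, q)   # d4 answered "at the same time", as noted in the text
```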
When its procedural properties are fully explicated, a function of the F(Vx) variety would serve
as a descriptive model of a decision process. Were then its instrumental provisions encoded for
computer execution, an F-type function would have been transformed into an operative decision
model that would take its place as a core working component of a relational model-base structure.
What’s meant here by an operative decision model is one whose specifications are complete on all
four of the dimensions over which formal models are defined, as per Table 2 (below).
Table 2: Specification Requirements for Proper Decision Models

Structural dimension
  Determinant [level of analysis]: Assembles an array of suspected decision factors qua state-variables (V), which may be culled to include only a subset of assumedly most significant factors.
    Notation: {V | ρ(V) > ρ*} → Vx, which restricts membership to variables whose impact/pertinence (ρ) is expected to exceed a minimal acceptance level (ρ*).
  Connective: Delineates intra-decision intersect (∩) conditions via a set of categorical connective operators (C), so indicating which variables are expectedly linked with which others, in what ways (influence patterns), under what conditions.
    Notation: {C | dx} = {c(vm ∩ vn) | vm, vn ∈ Vx}, where c-type operators define relationships between/among variables in terms of categorical (i.e., qualitative, logical) connectors.

Magnitudinal dimension
  Coefficient: Supplies a computed or imputed measure of the strength/intensity of the various inter-variable connections defined above.
    Notation: Coefficient specifications are of the form vn = f(vm) or f(vn, vm), where f is typically a computational function.
  Parametric: Assigns point-in-time numerical values (observation and/or function driven) to each of the determinants/decision variables.
    Notation: {vm,t}, ∀ vm ∈ Vx, where {vm,t} is a vector containing the current (time-t) values for all decision variables.
As is clear from the way in which coefficient specifications are set out in Table 2, any F(Vx) function will subsume as many lowest-order, coefficient-type (f-level) functions as there are unique, non-null intersects (pairings or otherwise) among the members of the state-variable array, so that F(Vx) ⊇ {f(vm : vn)} ∀ (vm ∩ vn) ≠ 0.
Lower-Order Relational Operators
The f- and F-level functions have rough counterparts on the relational side of relational
model-base structures. Corresponding in order to the former is an elementary relational operator
(r) used to depict the conditions in the intersect between pairs of variables as r(vn,vm). More
strictly speaking, it’s not usually r that’s an elementary relational operator, but one or another of
its two directional variants that are used to account for asymmetrical relational effects: vm = rnm(vn)
and vn = rmn(vm).
If it’s to be able to depict inter-variable intersect conditions, an elementary relational
operator would have to be able to recognize both categorical (connective) and computational
(coefficient) linkages, so that rnm = (c ∪ f)nm. A collection of categorical descriptors could include
the following:
if c(v1:v2) =
↑ : v2 is positively and proportionally related to v1
⇑ : v2 is more than proportionally positively related to v1
↓ : v2 is inversely related to (negatively correlated with) v1
⇓ : v2 is strongly negatively (inversely) related to v1
⊳ : v2 is dependent on (dominated by) v1
⊲ : v1 is subordinate to v2
↔ : v1 and v2 are co-determinate
⇌ : v1 and v2 are symmetrically (reciprocally) interrelated
Ф : v1 and v2 are to be kept independent, so that v1 ∩ v2 = 0.
Connectives may be compound, e.g., c(vm:vn) = (⊳, ⇑) would describe a case where causality is both unidirectional and ampliative, such that any change in the value of vm is expected to incur a more than proportional change in vn in the same direction. A corresponding computational function would then be of the form f(vn, vm) ≡ vn = vm², so that Δvn/Δvm > 1. This suggests there will
be cases, many probably in the decision situations dominating the day-to-day decision making of
most organizations, where qualitative connections can be expressed in mathematical terms. In
such cases, connective conditions can be enfolded into computational expression, so that vn =
r(vm) ≡ vn = ƒ(vm). That is, whenever a computational expression can be used to cover both
connective and coefficient level specifications, the former can be considered equivalent to, and
hence replace, an elementary relational operator.
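A minimal sketch of an elementary relational operator, carrying both a categorical connector and, where one is available, the computational function that can replace it, might look as follows; the class and attribute names are illustrative assumptions, not part of the notation.

```
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ElementaryRelationalOperator:
    """r_nm = (c ∪ f)_nm: a direction-specific link from a source to a target variable."""
    source: str                                   # v_n, the influencing variable
    target: str                                   # v_m, the influenced variable
    connector: str                                # categorical descriptor, e.g. "unidirectional-ampliative"
    f: Optional[Callable[[float], float]] = None  # coefficient-level function, if one is known

    def apply(self, v_source: float) -> Optional[float]:
        """Return the implied target value when the connective has been enfolded into f."""
        return self.f(v_source) if self.f else None

# The compound-connective example from the text, enfolded into v_n = v_m ** 2.
r_mn = ElementaryRelationalOperator("v_m", "v_n",
                                    connector="unidirectional-ampliative",
                                    f=lambda v_m: v_m ** 2)
```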
Relational Model-Base Substructures
The relational counterpart to an F-level function is R(Vx) = (r1 ∩ r2 ∩ …∩ rz), where R is
a first-order relational operator (in that it conjoins two or more elementary relational operators).
First-order relational operators, when deconstructed as shown in Table 3, are the basic building
blocks (substructures) of relational model-base structures.
Table 3: Relational Model-Base Substructure: R(dx)t

   | v̂1                     | v̂2     | v̂3     | v̂m
v1 | φ(v1)                   | f(1,2)  | f(1,3)  | f(1,m)
v2 | r(2,1)                  | φ(v2)   | f(2,3)  | f(2,m)
v3 | f(3,1) ⇐ r(3,1)         | f(3,2)  | φ(v3)   | f(3,m)
vm | rm1 = c(m,1) + f(m,1)   | f(m,2)  | f(m,3)  | φ(vm)
The cells of a relational matrix (excepting those lying along the major diagonal) can contain either
elementary relational operators (coupling and coefficient specifications) or coefficient-type
computational expressions. Thus, for example, r(m,1) is an elementary relational operator
explicating the categorical and magnitudinal impacts of vm on v1. When connective conditions can
be enfolded into a computational construct, a relational operator can be replaced by a coefficient-type function, as in Cell 3,1. When all the cells in a relational substructure contain computational
(f-type) expressions, the result is essentially a coefficient matrix. This has special significance for
accelerating the execution of decision models that employ linear algebra conventions, as well as
for ‘mechanizing’ interconnections between relational model-base substructures in cases where
F(dx) and another model/function, F(dy) for example, can both be encoded as systems of
linear equations.
The cells to the right of the major diagonal are of material significance only when r(i,j) ≠ r(j,i)
and ƒ(i,j) ≠ ƒ(j,i). The peripheral cells heading the columns hold current parameter values for the
indicated variable, e.g., v̂m = vm,t is the prevailing point value for vm ∈ Vx. It’s assumed that the
values assigned variables (and coefficients, also) will most often be observation-based. But
parameter/coefficient values may, however, be projective (extrapolative) or even putative
(notational) in provenance.
The φ-type functions occupying the cells lying along the major diagonal can be used to force
values for variables (or coefficients). This is a nod to the possibility that it may sometimes be
desirable to directly determine variable values rather than having them remain data-driven. There
might, for instance, be enough regularity to anticipate changes in variable values, or perhaps some
reason to distrust some segments of a data stream; allowing for forced values also provides an
opportunity to perform simulation-like operations over the components of relational model-base structures. These φ-functions could be of various types, e.g., Reflexive (φ = vm|t), Conditional (φ = vm|Xj, where Xj is a contingency event) or Complex (φ might, for instance, be constructed as/over a regression tree).
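To illustrate how the pieces of Table 3 work together, the sketch below holds a substructure whose off-diagonal cells have all been reduced to coefficients, with optional φ-functions on the diagonal forcing variable values. The update rule (a single linear propagation pass) is an assumption introduced for brevity, not a prescription of the paper.

```
import numpy as np

class RelationalSubstructure:
    """R(d_x)_t with purely computational (f-type) off-diagonal cells."""
    def __init__(self, variables):
        self.variables = list(variables)
        n = len(self.variables)
        self.coeff = np.zeros((n, n))   # off-diagonal coefficient entries f(i, j)
        self.values = np.zeros(n)       # peripheral cells: current (time-t) parameter values
        self.phi = [None] * n           # optional diagonal functions that force values

    def set_coefficient(self, i, j, value):
        self.coeff[i, j] = value        # strength of v_j's influence on v_i

    def step(self):
        """One propagation pass: phi-forced values where supplied, otherwise a linear update."""
        propagated = self.coeff @ self.values
        for k, phi in enumerate(self.phi):
            if phi is not None:
                propagated[k] = phi(self.values[k])   # reflexive/conditional forced value
        self.values = propagated
        return self.values
```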
A relational model-base substructure thus encodes all that is known, or thought to be known,
about a particular decision requirement at a particular point in time. A mature relational model-base structure should then house a separate substructure for each of the regular decision instances
an organization includes, with its composition changing as new decision requirements arise or old
ones are retired. The ‘regular’ qualifier indicates that not all decision situations can be readily (or
meaningfully, it might better be said) accommodated within a relational model-base structure, but
only those that: (a). are recurrent or otherwise routine, and (b). can be expected to admit to a
conventional technical solution along one or another of the lines illustrated in Table 4.
Table 4: Permissible Classes of Decision Situations

Deterministic
  Target applications: Situations involving simple (well-bounded and well-behaved) subjects amenable to Positivist apprehension.
  Primary predication: Clinical data, obtained directly from measuring devices (e.g., sensor data) or first-hand observations made by individuals of established objectivity and expertise.
  Analytical instruments: Algorithmic formulations based on mathematical methods for generating optimal or optimum-at-the-margin resolutions.

Probabilistic
  Target applications: Tasks associated with subjects that are in part empirically inaccessible and/or have some stochastic potentiality.
  Primary predication: Empirical data, of estimative or projective purport via a sampling-based scheme (experimental evidence; results of surveys or polling processes).
  Analytical instruments: Statistical inference techniques that can recognize maximum-likelihood or maximal-utility decision choices.
In echo of a previous point, the ‘regular’ stipulation restricts the effective operating range of
relational model-base structures to decision situations for which an objective approach is both
available and appropriate. It also acts to reserve relational model-base structures (in their
operational vs. informative roles) for applications that can be brought within the practical authority
of an administrative agent; and the implication of this harks back to the suggestion that the main
rationale for relational model-base structures is the movement in the direction of the automation of management.
RELATIONAL MODEL-BASE INSPIRED INTEGRATIVE FACILITIES
Conventional management process models, as derived from organization charts, tend to have a
strong vertical cast to them. Formal administrative authority is, after all, asserted to cluster at the
apex of the managerial hierarchy, and so is exercised dominantly downward along one or more
chains-of-command. The result is a natural emphasis on organizational integration in its vertical
orientation, which involves interconnections among organizational units or administrative
functionaries sitting at different levels of a managerial hierarchy.
Recall, though, the founding formulation from which relational model-base structures are
derived, Si = D x E, where D is an array of (regular) decision instances and the members of E are
entities. Again, entities are unlike ordinary organizational units (which serve as centers of
administrative authority) in that they are repositories of decision responsibilities, and so denotable
as Em ≡ {d} ⊆ D. Connections among entities could then be depicted in terms of decision intersects, i.e., Ei ∩ Ej ≡ {(dx ∈ Ei) ∩ (dy ∈ Ej)}. Thus, while it’s indeed necessary that a
relational model-base structure allow for decisions to be arrayed in different patterns/degrees of
influence, there is no need to make any assertions about one entity being more or less superior to
another. What’s assumed, rather, is that the entities involved in an enterprise will all be positioned
as peers; none is bureaucratically ascendant, nor does any enjoy anything resembling plenary
powers. In this way, relational model-base structures are tuned to lateral integrative requirements.
Lateral integration is concerned with the functional or task-related (vs. vertical, command-control type) linkages between entities sitting —actually or effectively— at the same level of the
enterprise hierarchy, be they organizational units, autonomous institutions or artifacts like
administrative agents (Sutherland, 1998). The attraction of higher levels of lateral integration is
then that they can mean more in the way of cooperative, complementary, synergistic or otherwise
constructive interconnections among the entities involved in an enterprise. In this way, more
highly integrated enterprises may come to be blessed with higher aggregate efficiency potentials
than their more loosely-coupled counterparts — albeit, as Perrow (1984) has pointed out, at the
price of greater fragility.
Recall also that one of the principal introductory points in defense of relational model-base
structures is that they can inspire a fundamentally new family of initiatives to which enterprise
administrators might turn in the attempt to obtain/maintain adequately high levels of lateral
integration. Because they can recognize decision interdependencies, relational model-base
structures open up another —a fourth, in fact— dimension on which lateral integration can be
addressed. A four-part taxonomy of lateral integrative initiatives might then be constructed along
the lines of Table 5a.
Table 5a: Categories of Lateral Integrative Initiatives

1. Configurational tactics intended to encourage positive interactions among institutional components: Establishment of interdepartmental liaison units or cross-agency task forces; recourse to collective executiveships; concurrent engineering arrangements; cultivation of keiretsu-type interorganizational alliances.

2. Communications-related techniques for enabling and/or expediting interchanges among people: Groupware to support intranet- or extranet-based collaborative efforts, tele- and videoconferencing provisions, Electronic Town Meeting technologies (interactive television, eVoting, etc.), and apparatus to implement applications like long-distance learning and remote medical diagnoses.

3. Facilities for interconnecting informational items: Provisions like those found in ERP (Enterprise Resource Planning) type products for establishing/maintaining extensive data sets (e.g., data warehouses), setting up information-sharing networks, and arranging for limited interoperability among computer programs connected via a spinal database structure.

4. Instruments targeted at decision intersects: In relational model-base structures, responsibility for inter-decision linkages falls to first-order relational operators.
The relational operators incorporated in relational model-base structures are all intelligible as
lateral integrative instruments, with successively higher-order relational operators distinguished by
their successively more extensive breadths of purview.
The respective roles and ranges of effect for relational operators are shown in Table 5b, with
R', R'' and R''' being higher-order variants.
Table 5b: Orders of Relational Operators

Elementary: Defines direction-specific linkages between variables/decision factors: r(vm : vn) ∈ {V}dX
Primary: Conjoins two or more elementary relational operators within a decision model: R = [ri,j ∩ … rm,n] → dx
1st Order [Inter-Decision]: Allows for task/entity independent (or ad hoc) links between decision models or relational model-base substructures: R' = [di ∩ dj … ∩ dm] or R' = [R(dx) ∩ R(dy) …]
2nd Order [Intra-Task]: Establishes intersects among the array of decision models entailed in a task: R'' = [d1 ∩ d2 ∩ … dz] → KX
3rd Order [Trans-Task]: Establishes interconnections between/among two or more tasks: R''' = R''(K) ∩ R''(K) …
With the introduction of higher-order relational operators, any enterprise could now be defined on
three dimensions as Si = Ř x D x E, where Ř is an array of higher-order relational operators and,
again, D is the array of regular decision requirements included in Si, and the members of E are
entities. The Ř x D component in this formulation is then the generative form for a Relational
Model-Base Structure.
First-Order Relational Operators
Thus, as depicted in Table 5b, a first-order operator (R') is, in effect, a model that establishes
connections between other models as R' = (di ∩ dj…∩ dm). Alternatively, the assignment for a
first-order relational operator can be written as R' ⊙ [RX ∩ RY …], where RX and RY are primary relational operators and the symbol ⊙ signifies that R' is superimposed on the primary relational operators rather than subsuming them (in which case the integrative expression would have appeared as R' ⊇ [RX ∩ RY …]). The key distinction between a subsumptive and a
superimpositional role for higher-order relational operators is the greater magnitude of the
instrumental challenge posed by the former relative to the latter. Were a higher-order relational
operator to subsume some number of lower-order relational operators, it would have to absorb all
the intra-decision integrative responsibilities once handled by the lower-order relational operators,
as well as being required to cover any inter-decision connections. Under a superimpositional
scheme, a higher-order relational operator is responsible only for the interconnections among
lower-order relational operators, so that the challenge for a higher-order relational operator is
confined to those integrative requirements that remain to be met after the lower-order relational
operators have done their work.
As with their elementary (r) and primary (R) counterparts, a first-order relational operator
would have both a connective/categorical (C') and computational/functional (F') component, so
that R' = (C' ∪ F'), where F' is a first-order computational function. The former would be used to
convey qualitative intersect conditions. These will tend to be of two broad types, decision
dependencies and interdependencies. Computational constructs will also tend to be of two broad
types, recursive and reciprocal.
Though decision dependencies are rooted in hierarchical (vertically oriented) relationships,
relational model-base structures would deal with them as if they were lateral. Several sorts of
‘lateralized’ decision dependencies are recognizable:
a). Serial dependency (dX → dY), where two or several decision models are to be
executed in a definite temporal sequence, with the results of the previously executed
passed as predicates/premises to the next in the sequence.
b). Directive dependency ({d}EX ► {d}EY), where the decision choices made by one
entity (agency) serve to bound the search-solution spaces over which a second is
allowed/expected to wander with respect to some application or area of endeavor.
c). Codependency (dZ▼(dX,dY … )), where two or more decision models are connected
in the sense that they are commonly subject to (but perhaps differentially so) the
influence of some other decision model/agency. Note that a lesser variant on
codependency would cover cases where the connection between two or more decision
models would consist of a shared variable(s), so that F'{V} = (dX ∩ dY).
The most obvious technical tactic for handling decision dependencies is to have first-order
relational operators employ recursive functions, which are themselves hierarchical (Ershov 1998).
Under a recursive protocol, situationally-superior elements —independent variables, decision
models, entities— are positioned upstream of, and so allowed to influence, subordinates
positioned somewhere downstream. Take, as an example, a recursive first-order function of the
sort that might be used to regulate a JIT (Just-in-Time) arrangement between a producer and
supplier. Were dx a model used by the manufacturer to determine a near-term, desired production
schedule (Qp), and dy the model used by the supplier firm to determine its output schedule (Qs),
then F' (dx ►dy) would prescribe a set of algorithmic procedures (each executed via an F-level
decision function) designed to keep Qs properly responsive to, or dynamically well synchronized
with, the producer's desired schedule, Qp. The latter are, however, beyond the direct influence of
the former, as recursive constructs allow for simple feedback loops and reflexivity, but not for
circular causality or mutuality.
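A skeletal rendering of the JIT example may help fix the recursive pattern: dx is executed upstream and its result Qp is passed downstream as a premise for dy. All function names, the safety factor and the capacity bound are assumptions introduced for illustration.

```
def producer_schedule_dx(forecast_demand: float, current_inventory: float) -> float:
    """d_x: the producer's near-term desired production schedule Q_p."""
    return max(forecast_demand - current_inventory, 0.0)

def supplier_schedule_dy(q_p: float, supplier_capacity: float, safety_factor: float = 1.05) -> float:
    """d_y: the supplier's output schedule Q_s, kept responsive to Q_p but bounded by capacity."""
    return min(q_p * safety_factor, supplier_capacity)

def first_order_jit(forecast_demand: float, current_inventory: float, supplier_capacity: float) -> tuple:
    """F'(d_x -> d_y): execute d_x upstream, then pass Q_p downstream as a premise for d_y."""
    q_p = producer_schedule_dx(forecast_demand, current_inventory)
    q_s = supplier_schedule_dy(q_p, supplier_capacity)
    return q_p, q_s
```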
Circularity, where decision models alternate being superior and subordinate, is one of the three
main instances of decision interdependency, the other two being Codeterminacy (where the
decision choices arrived at by any one model give due deference to the intents, purposes or
priorities of the others with which it’s interconnected) and Communality (which has two or more
decision models arrayed in cooperative/conciliatory arrangement whereby the choices of each are
determined dominantly by what’s taken to be in the best interests of the collectivity or
organization as a whole) (Zhang, Lesser & Wagner, 2006).
In place of a recursive connective construct, decision situations involving interdependency
would require non-recursive relational functions (Berry, 1984; Kline 2004) that can recognize and
regulate the reciprocal interchanges common among entities ‘positioned as peers’. Consider, for
instance, the OPEC cartel, comprised as it is of sovereign nations. OPEC’s administrative problem
is to establish an aggregate production level that’s low enough to preserve a favorable per-barrel price, yet at the same time high enough to meet the revenue requirements (or desires,
rather) of its several members. Hence the need for an F' function, formulated perhaps over a set of
difference equations (F-level expressions) or set up as a stochastic simulation construct, that can
mediate among the output decisions proposed by the various producer states in an attempt to
realize an allocation schedule that sets production quotas in a way calculated to best serve the
welfare of all, without unduly disserving any.
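For the reciprocal case, a first-order function has to mediate among proposals rather than simply pass premises downstream. The fragment below sketches one deliberately crude mediation rule, proportional scaling against an aggregate ceiling; it is an assumption for illustration, not the negotiation scheme the OPEC example would actually require.

```
def mediate_quotas(proposed_outputs: dict, aggregate_ceiling: float) -> dict:
    """Reconcile member-proposed outputs with an aggregate production ceiling."""
    total = sum(proposed_outputs.values())
    if total <= aggregate_ceiling:
        return dict(proposed_outputs)        # every proposal can be honored as submitted
    scale = aggregate_ceiling / total        # shared, proportional concession
    return {member: q * scale for member, q in proposed_outputs.items()}

# Hypothetical proposals (millions of barrels/day) against a 9.0 ceiling.
quotas = mediate_quotas({"A": 4.0, "B": 3.5, "C": 2.5}, aggregate_ceiling=9.0)
```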
Second Order Relational Operators
The core contribution of second-order relational operators is intra-task integration, per the
notation introduced in Table 5b as R'' = [d1 ∩ d2 ∩ … dz] → KX. Their normal charge, that is, is
the establishment and maintenance of interconnections among the complement of decision models
involved in a particular task. There is, however, a special significance to second-order operators
that derives from the earlier-noted restriction of relational model-base structures to regular
decision instances (in that they can be resolved by taking recourse to a readily programmable,
orthodox algorithmic approach). In addition to allowing the wholesale displacement of elementary
and primary relational operators by their computational correspondents (r → f and R → F), this stricture should also regularly allow first-order categorical connectives (C') to be folded into a companion computational expression, so that R' → F'. Second-order replacements, R'' → F'', should also be widely available, where F'' = (F'1 ∩ F'2 … ∩ F'n). The special significance of second-order functions of this form is that they can be used to generate metamodels that encompass the full complement of decision models entailed in one or another of the
administrative tasks from which enterprise management processes are ultimately assembled.
The more interesting prospect is, however, the construction of metamodels whose components
are not just decision models, but Administrative Agents. It’s these that can make automated task
management a practicable prospect in the context of relational model-base structures.
TOWARDS AUTOMATED TASK-MANAGEMENT
What immediately comes to mind as a medium for implementing metamodels is a species of
node-arc constructs that, for lack of any better term, will here be referred to as Manifold Network
Models. Reminiscent of conventional decision trees, the skeletal structure of manifold network
models can be made to reflect the procedural properties of an administrative task, specifying
which nodes are to be executed in which order under which conditions. But unlike ordinary
decision trees, whose nodes typically hold imported numerical (or sometimes qualitative) items, or
even their more capable cousins, Classification Trees (Breiman, et. al., 1984; Loh and Shih 1997),
the nodes of manifold network models will contain algorithmic formulations that constitute full-blown models —decision models of the F(dx) variety, in fact— which may perhaps be assigned
to an Administrative Agent for actual execution. Whatever the exact nature of the nodes, the arcs
of manifold network models would then serve as avenues of interconnection among the various
decision models positioned at the nodes. Pathways for task-management protocols could then be
made conditional by having the selection of a particular avenue (or arc), radiating from a particular
node, depend on the results of the current running of the pertinent nodal algorithms or decision
model. The basic characteristics of manifold network models, it may be recognized, are reflective
of the features of a class of constructs known in computer science circles as Cooperative
Distributed Problem Solving or CDPS systems (Durfee, Lesser & Corkill, 1995).
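The sketch below gives a minimal rendering of a manifold network model: nodes hold executable decision models, upstream results are passed downstream as premises, and the arc taken out of a node is selected conditionally on that node's current result. The node names and selection rules are hypothetical.

```
class ManifoldNetwork:
    def __init__(self):
        self.nodes = {}     # node name -> decision model: callable(state) -> result
        self.arcs = {}      # node name -> callable(result) -> name of next node (or None)

    def add_node(self, name, model, select_next=lambda result: None):
        self.nodes[name] = model
        self.arcs[name] = select_next

    def run(self, start, state):
        """Execute nodal models in sequence, letting each result pick the outgoing arc."""
        node, trace = start, []
        while node is not None:
            result = self.nodes[node](state)
            trace.append((node, result))
            state[node] = result            # downstream nodes may use upstream results as premises
            node = self.arcs[node](result)
        return trace

# Usage (hypothetical two-node replenishment fragment).
net = ManifoldNetwork()
net.add_node("reorder_point", lambda s: s["demand"] > s["on_hand"],
             select_next=lambda hit: "order_quantity" if hit else None)
net.add_node("order_quantity", lambda s: max(s["demand"] - s["on_hand"], 0))
trace = net.run("reorder_point", {"demand": 140, "on_hand": 60})
```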
In light of the earlier discussion of the two basic varieties of decision situations, it makes sense
to think of manifold network models as being dominantly either recursive or reciprocal in their
instrumental fittings. Recursive manifold network models would tend to carve out neatly linear
execution paths during any iteration, with the results of a superior (upstream) decision model not
only determining which downstream nodal constructs will subsequently be executed in what order,
but perhaps also passing down decision predicates (as exogenous arguments or constraints) that
direct the decision choices arrived at by the latter. The most promising targets for administration
via manifold network models will then be tasks punctuated by sequential-state processing
protocols or trajectory-type decision dynamics. One such area is conventional Supply-Chain
Management (Mahajan, Vakharia and Radas, 2002), marked as it is by an ordinarily neat
succession of procurement-related decisions covering materials requirements, standards and
specifications, pricing and delivery schedules and provider selection. It’s also marked, not
surprisingly, by a rising reliance on computer-based agents to make these determinations (Fox,
Barbuceano & Teigen, 2000; Rodriguez-Aguilar, et. al, 2003). Traffic Management is another
area where recursive constructs are common: in urban road systems during outbound rush hours, for example, sensor data regarding rates of passage and congestion are fed to computer models controlling the more inward intersections, which then go on to direct the timing and duration of traffic lights at outlying or downstream intersections. Agent-based traffic management
applications abound (Bazzan, Klügl and Ossowski, 2005), and extend well beyond road-based
systems to cover air and marine-related applications (Davidsson, et. al., 2005), and project still
further outward into areas as diverse as communications routing, workflow control, synchronizing
workstations in assembly lines and load-balancing in power grids.
This is not to say that there have not been attempts to formulate and effect non-hierarchical
approaches to supply chain management (Collins and Gini, 2001; Davidsson and Wernstedt, 2002;
Labarthe, et. al., 2005) and traffic management (Bazzan, de Oliveira & Lesser, 2005). But non-recursive manifold network models would be most at home in task-management contexts where a decentralized decision protocol is commanded by the nature of the enterprise's composition. This is clearly
the case with enterprises configured as cooperatives. Take shipping conferences, for example.
These are confederacies formed by ocean transport companies operating along a particular trading
corridor (e.g., New York-Amsterdam). They are formed in pursuit of cartel-type advantages, and
also to obviate mutually destructive competitive initiatives like price wars. Shipping conferences
might then welcome a computer-based collective executive if it could be relied on to manage all
matters in the ostensibly best interests of the general membership. A manifold network model
could conceivably play this role were it programmed to encourage/enforce decisions to comply
with communal welfare criteria throughout the operational management process (where determinations are
made regarding the composition of the fleet, allocations of transport jobs among vessels, access to
channel and port facilities, departure and arrival times, loading and offloading priorities and price
schedules, etc.).
Another candidate for management via a non-recursive manifold network model comes from
the military sector in terms of what’s known in Pentagon parlance as the Cooperative Engagement
Capability (Busch and Grant 2003; Korzyk 2003). In prior days, each ship was held individually
responsible for dealing with any threat directed at it. The CEC, however, transforms fleet security from a proprietary to a popular problem. What it promises is a computer-mediated, distributive-negotiative approach to the defensive decision process (which requires provisions for threat recognition and ranking as well as tactical response selection and asset/resource allocation). Its objective would be to manage things so that interdictive responsibilities would fall to whatever
element —ship, aircraft, submarine, shore battery or armed drone, etc.— is commonly determined
to be best equipped to defeat the threat most easily or economically. It would be expected, of
course, that current period defensive assignments in the CEC context would consider any
projected/prospective threats foreseen by the members of the battlegroup and other sources.
Finally, as a somewhat more pedestrian example, there is the proverbial smart elevator system.
Physically, of course, elevator systems are comprehensible as non-hierarchical node-arc
constructs. This makes them ideal subjects for probabilistic management by drawing on the
pattern-recognition capabilities of neural-network constructs (Freeman and Skapura 1991), as
these can be configured to replicate the morphological properties of the subject they are intended
to control. A more sophisticated variant would have the behavior of an elevator complex
determined with respect to dynamically-modified demand projections that combine historical
usage patterns with both real-time stimuli (people pushing call buttons) and near-term indicators
(probable destination floors for people entering a building, based on what their digital ID cards say about the location of their offices; cf. United States Patent 6988071 at www.freepatentsonline.com). When fed such inputs, a non-hierarchical elevator management
model would have all deployment decisions for any car introduced as a proposal to be examined in
concert with the current positioning and routing proposals of all other cars. Deployment
instructions would then be issued to the several elevators that ostensibly best serve the now and
near-future interests of the aggregate population of passengers (with the interests of regular or
recurrent users likely given relatively more weight than those of transients).
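A fragment along the following lines conveys the proposal-based flavor of such a scheme: each car's candidate response to a new call is scored against the commitments of the others, and the assignment goes to the car whose diversion least disturbs the aggregate. The cost rule here is an assumption introduced purely for illustration.

```
def assign_call(call_floor: int, car_positions: dict, pending_stops: dict) -> str:
    """Pick the car whose diversion least disturbs its already-committed routing."""
    def marginal_cost(car):
        position = car_positions[car]
        committed = pending_stops.get(car, [])
        return abs(position - call_floor) + 2 * len(committed)   # crude congestion penalty
    return min(car_positions, key=marginal_cost)

# Hypothetical state: three cars, one new hall call at floor 7.
best_car = assign_call(7, {"A": 3, "B": 9, "C": 1}, {"A": [5], "B": [], "C": [2, 4]})
```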
RELATIONAL MODEL-BASE MEDIATED REAL TIME OPERATIONS
If the increased reliance on decision models and computer-based decision agents is one of the
key attributes of the contemporary administrative arena, another is the burgeoning interest in
moving away from traditional planning-based (anticipatory) managerial postures towards more
real-time (adaptive-reactive) regimes. This movement owes much to the realization that it’s
improvident to opt for a probabilistic approach to a decision situation for which a non-inferior
deterministic solution is available, and to the companion realization that transitions towards real-time operation open up opportunities to transform previously probabilistic decisions into deterministic ones. More particularly, what real-time transits offer are opportunities for administrative decision-makers to ease their dependency on projections or expectations in favor of more definite data of
more recent vintage. To exploit these opportunities, administrative authorities can turn to an ever
expanding array of real-time data-capturing devices. These devices, along with regular
improvements in digital data communications capabilities, have shortened the intervals between
the time of the first emergence of informational items and the point at which they become
available as decision predicates. What’s also emerged are new-age conceptual referents, like the
“dashboard” model of business administration (Eckerson, 2005), that suggest how managers
should put accelerated predicates to proper use. So it is, also, that the main message of one of the
most influential of modern management treatises, The Virtual Corporation (Davidow and Malone
1993), is how manufacturers might make their production decisions —and retailers their
restocking decisions— less reliant on demand forecasts and more responsive to revealed demand
(as revealed by actual product purchases recorded and reported by point-of-sale terminals).
Decision choices that wind up owing less to expectations and more to actualities have a
correspondingly better chance of proving favorable or optimal in terms of whatever criterion
applies. This, of course, is the core rationale for today’s essays in the direction of dynamic
resource allocation. Hence the modern military’s embrace of remote sensor platforms and other
facilities that can deliver real-time reconnaissance data, which allows force disposition and other
combat management decisions to be based on authenticated vs. merely anticipated battlefield
conditions. Applications in the public sector are also not uncommon, one of the most impressive
being the United States Forest Service's elicitation of real-time fire location data from the MODIS
Rapid Response system in an effort to make the best possible use of high-impact, low-availability
assets like smoke jumpers and airplanes (Kaufman, et. al., 1998; Giglio, et. al., 2003).
Anyway, the focus for the remainder of this section is on how relational model-base structures
can support attempts at adaptive-reactive management, and thereby enable more extensive essays
in dynamic resource allocation.
Centripetal Data Acquisition
What’s proposed by way of a real-time information handling system is a centripetal scheme
like that illustrated in Figure 1.
Figure 1: A Real-Time Information Handling Protocol
Centripetal processing protocols acquire contributions from a multiplicity of monitoring/reporting sources and channel them into a single data stream directed at a single relational model-base substructure. A relational model-base structure may be the province of an Administrative Agent rather than merely a disembodied mathematical function (Davidsson, et. al., 2003). It’s also possible, probable even, that the decision function/agent to which a relational model-base substructure pertains will be incorporated in a manifold network model, and so be counted as an aspect of an administrative task.
An important implicit provision of the protocol shown in Figure 1 is that any data items that
are received will have their significance assessed more or less immediately, and can thereafter be
discarded. This pretty much obviates the need for archival databases, much less the huge data
warehouses that serve as the spinal substance for the management systems now being produced
under the ERP banner. For enterprises operating under a real-time regime, historical data is of
substantive significance only in so far as it’s necessary to satisfy externally-imposed reporting or
internal auditing requirements. Thus, enormously popular though they are these days among both
administrators and academics, data mining devices would be seen as having no practical role to
play in the context of real-time management systems. Retrospective pattern-recognition operations
would simply be deemed unnecessary, as whatever patterns might be practically pertinent to a
decision situation would presumably already have been routinely recognized in the normal course
of coefficient estimation exercises. This view of the fate of real-time information is certainly not
unprecedented; it’s not really all that different from the approach incorporated in Complex Event
Processing protocols that drive event-driven information systems (Luckham, 2002).
As for information acquisition requirements, these would expectedly first be established with
respect to the factors (determinants, state-variables) over which a decision model is defined.
Thereafter, items may be assigned different acquisition priorities depending on their apparent
importance to the quality of decision choices, or what remains to be achieved in terms of increases
in levels of precision or probity for a particular component (parametric or coefficient entry). The
informational requirements specific to a relational substructure/decision model might then be
denoted as {I}dx < τ, which sets a maximally-acceptable time interval (τ) between the emergence
of potentially interesting informational items and their presentation as decision inputs. This
phrasing is consistent with the concept of effective vs. strict real-time. Real-time, in its strict
interpretation, requires that information always (unconditionally) be delivered as immediately as
possible. Effective real-time, on the other hand, treats timeliness as relativistic, requiring only that
information be delivered prior to the point where it's required to inform a decision.
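The effective real-time stipulation can be stated as a simple predicate over the interval bound τ; the sketch below is illustrative only, with both timestamps and the bound expressed in seconds.

```
import time

def within_effective_real_time(emerged_at: float, needed_by: float, tau_seconds: float) -> bool:
    """True when an item meets its tau bound and arrives before the decision it informs must run."""
    now = time.time()
    return (now - emerged_at) <= tau_seconds and now <= needed_by
```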
Taking steps to reduce the input burdens on real-time systems will make it easier to meet
whatever timeliness requirements are in place. One way to ease data handling demands is to try to
increase levels of information compression. This, it may be clear, will occur as a natural
consequence of any exchange of conventional relational database structures for relational model-base
structures. Consider, for example, that the informational value of a coefficient sitting in a cell of a
relational model-base substructure would be no less than the sum of the multitude of raw data
items from which it was derived. The happy consequence of higher levels of information
compression is a corresponding reduction in the volume of inputs needed to drive decision
functions under relational model-base vs. database management conventions. Loadings on
communication facilities used to support intra- or inter-organizational information exchanges would also be diminished to the extent that these exchanges consist of compressed parameter or coefficient passing
rather than transfers of raw (unconditioned, undigested) data items. A second tactic for
moderating real-time data capturing and communication demands is Redundancy Filtering.
Equipping real-time systems with redundancy filtering facilities is done in response to the
assertion that information is of practicable value only to the extent that it's indicative of a change
of some sort (Dawkins, 1998). Redundancy filtering is thus aimed at the systematic recognition
and elimination of all items that are coincidental with expectations (e.g., in a true real-time Air
Traffic Control System, the only aircraft of which human flight controllers would be aware are
those that have departed in some way from previously-filed flight plans).
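A redundancy filter of the kind described reduces, in the simplest case, to a deviation test against expectation; the tolerance value below is an assumption for illustration.

```
def redundancy_filter(observed: float, expected: float, tolerance: float) -> bool:
    """Return True only if the observation departs from expectation, i.e., is non-redundant."""
    return abs(observed - expected) > tolerance

# Only the items that signal a change are forwarded to the decision model.
incoming = [(101.2, 100.0), (100.1, 100.0), (97.4, 100.0)]
significant = [obs for obs, exp in incoming if redundancy_filter(obs, exp, tolerance=1.0)]
```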
Another feature required in real-time contexts is some sort of Input Fusion facility. Such would
be needed whenever decision predicates might possibly be of different orders and/or origins.
Trying to integrate information of different orders, such as quantitative and categorical, apodictic and anecdotal, logical and axiological, textual and visual, Elint (electronic intelligence) and Humint (human-originated intelligence), is a daunting challenge, both conceptually and mechanically.
It’s presumed, however, that this challenge will be kept reasonably well contained in the context
of relational model-base structures because of the previously noted restriction to regular decision
situations. This means that the only inputs that need to be accommodated will all be neatly
numerical, such that input fusion may involve nothing more than elementary arithmetic
conditioning initiatives like standardization, normalization, compilation (development of simple statistical digests like means and modes), minor mathematical transformations as is done in GPS (Global Positioning System) based navigational systems, or the two-to-three dimensional conversions that produce topographical charts from aerial photographs.
Inputs of different origins means something more than merely inputs from multiple sources. It
means information collected on different dimensions, and so of disparate provenance rather than
just disparate formatting. Contemporary climate control systems now routinely employ a simple
fusion function to generate a composite variable, humiture (an amalgamation of temperature and
humidity), which is then used as a key determinant of heating and cooling decisions. Somewhat
more impressive are the fusion facilities that sit at the front-end of modern surveillance systems
that allow reconnaissance to be carried out simultaneously on several different dimensions: visual,
infrared, auditory, electrical (spectral), etc. Because the sensors in such platforms deliver data of
different origins, the input fusion task is to combine the several output sets' elements into a coherent multidimensional portrait of the object of interest (Hall and Llinas, 2001). In some cases it's the
mere mobility of sensors, not their variety, that raises the requirement for a data fusion facility.
The task in these cases is to converge on a composite real-time construct by collating observations
taken by a dispersed array of similarly-configured sensors operating at different scanning angles and/or distances. In these cases, readings taken from different perspectives are treated as
data of different origins, with the data fusion function usually falling to Multiresolution
Integration Algorithms like those designed to conjoin inputs from migratory agents embedded in
distributed sensor networks (Qi, Iyengar and Chakrabarty, 2001).
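The sketch below is meant only to suggest the flavor of such fusion functions; the humiture arithmetic is a simplified stand-in for published heat-index formulas, and the distance-weighted collation is an illustrative assumption, not the multiresolution integration algorithms cited above:

```python
# Two hedged sketches of input fusion across origins.

def humiture(temp_f, rel_humidity):
    """Fuse temperature and humidity into a single composite decision variable
    (a simplified amalgam, not an official heat-index formula)."""
    return temp_f + 0.1 * rel_humidity * max(temp_f - 70, 0) / 10

def collate(readings):
    """Fuse readings of one target taken by dispersed sensors at different
    distances, weighting each observation inversely by its distance."""
    weights = [1.0 / r["distance"] for r in readings]
    total = sum(weights)
    return sum(w * r["value"] for w, r in zip(weights, readings)) / total

print(humiture(90, 60))
sensor_reports = [
    {"value": 21.4, "distance": 2.0},
    {"value": 22.0, "distance": 5.0},
    {"value": 21.1, "distance": 1.0},
]
print(collate(sensor_reports))
```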
GIS Constructs and Templating Tactics
Administrative applications requiring real-time (dynamic) resource allocation run the gamut
from prosaic activities like just-in-time inventory replenishment to more dramatic functions like
disaster relief or the fielding of rapid deployment forces in the hopes of interdicting looming civil
crises. If relational model-base structures are to properly support such applications, they should be
set up to operate on GIS (Geographic Information Systems) type constructs. These are becoming
ever more widely recognized as a prime medium for portraying phenomena that are multifaceted,
protean and have a geospatial or textural-topological aspect to them (Arctur and Zeiler, 2004;
Kropla 2005). And indeed, in addition to their traditional command of cartographic, demographic
and ecological applications, GIS constructs now stand as primary sources of decision predicates in
a variety of other fields: Civil engineering, urban planning, transportation system architecture,
epidemiology, military targeting, disaster management (e.g., evacuation routing, containment of
oil spills, distribution of relief aid), as well as public and business administration (re: location of facilities, logistics) (ReVelle and Eiselt, 2005).
Of most pertinence to relational model-base aided adaptive-dynamic management are GIS
configured as composite mapping constructs (Malczewski, 1999). Composite mapping constructs
are intended to show how various properties of interest are distributed within some bounded
domain (model space), and also how their distributions might change in response to natural or
introduced initiatives. Towards this end, each of the properties of a composite mapping construct
would correspond to one of the facets of the subject phenomenon. Each of these properties is then
assigned to a separable layer, where it can be depicted in terms of a density distribution. For
suitably-configured GIS constructs, input integration (data fusion) thus consists of conjunctions
established among different sets of density distribution data (or, less abstractly, the interleaving,
infusion or superimposition of property-specific expository layers). As a further and particularly
technically appealing possibility, composite GIS might be constructed so that each of their
separable layers corresponds to a factor (determinant, state-variable) over which some decision
model is defined. These layers qua decision dimensions would then become the targets of real-time information acquisition efforts.
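To make the layering idea concrete, the sketch below (with grid size, layer names and weights chosen purely for illustration) treats each property as a density grid over a bounded model space and lets fusion amount to a weighted, cell-wise conjunction of layers:

```python
# A sketch of a composite mapping construct: each property of interest occupies
# a separable layer holding a density distribution over a gridded model space,
# and "fusion" is a cell-wise conjunction of weighted layers.
import numpy as np

GRID = (4, 4)  # a small bounded model space

class CompositeMap:
    def __init__(self):
        self.layers = {}  # property name -> density grid

    def set_layer(self, name, density):
        self.layers[name] = np.asarray(density, dtype=float).reshape(GRID)

    def conjoin(self, weights):
        """Superimpose the weighted layers into a single composite surface."""
        composite = np.zeros(GRID)
        for name, w in weights.items():
            composite += w * self.layers[name]
        return composite

cmap = CompositeMap()
cmap.set_layer("population", np.random.rand(*GRID))
cmap.set_layer("flood_risk", np.random.rand(*GRID))

# Cells scoring highest on the composite would become the targets of
# real-time information acquisition (or of allocation decisions).
print(cmap.conjoin({"population": 0.6, "flood_risk": 0.4}))
```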
This may, at its most menial, merely require GIS constructs to be reconfigurable, so that
density distributions can be altered and properties added, deleted or repositioned relative to one
another. So, for example, a GIS construct covering a wetland ecosystem could be tuned to use
real-time survey data to chart changes in the distribution of an index-species, along with any
alterations in ancillary factors like water chemistry, comings and goings of water birds, and the frequency and intensity of human intrusions. Thus, as is increasingly recognized by organizations or agencies with ecological interests or missions, emendable GIS constructs can provide a regularly refreshed complex of predicates to feed multicriteria-multiobjective conservancy management models or to inform local land-use decisions involving sets of at least partially competitive stakeholders (Evans, VanWey and Moran, 2005).
GIS constructs can also host a special class of real-time referents that function as templates. As
an adjunct to adaptive-reactive management support systems, a template is an only partially-elaborated model that's intended to reduce the volume of decision-related analytical requirements
that need to be met in real-time. It does so by trying to answer, in advance, for as many as possible
of the requirements associated with a decision, in as much detail as possible. Templates thus play
a role in adaptive/reactive management systems similar to that played by contingencies in
planning-based schemes. However, as appendages to GIS models, templates are formulated as
graphic constructs that are designed to be superimposed singly or in company with other
templates on a background map. Templates are perhaps of most obvious pragmatic import for
organizations having emergency or crisis management missions, where they can promise to both
increase responsiveness and preserve the rationality of decision commitments.
As an illustration of the utility of templates, consider a situation where there has been a serious
accidental discharge of toxic chemicals at an exurban industrial facility. Suppose also that an
array of templates had been developed that described the generic diffusion characteristics of each
of the different chemical compounds the plant produces, as derived from plume projections
developed using modern Atmospheric Dispersion Modeling methods (Barratt, 2001). That is, each
template is a graphic portrayal of the expected behavior of a particular chemical compound given
those properties that constitute constants (volatility, decomposability, particulate predispositions,
etc.). As they stand, these templates are incomplete. Missing are situation-specific details such as
the exact composition, location and magnitude of a release and, of course, details on contextual
variables like wind velocity and humidity. Thus, given a suitable set of templates, what mainly remains to be done to get a real-time portrayal of an actual release event is to interpose particulars about actual/emergent situational conditions as they become apparent. For plume
projection problems, then, templates can mean that the deep structural substance of diffusion
models should already be in place, so that the development of actionable managerial scripts for
evacuation, containment and damage control demands only the introduction of superficialities.
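A minimal sketch of the templating tactic follows; the compound constants and the spread arithmetic are illustrative placeholders, not an actual Atmospheric Dispersion Modeling routine (cf. Barratt, 2001):

```python
# A compound-specific dispersion template carries the constants worked out in
# advance; at run time only the situation-specific particulars are interposed.
from dataclasses import dataclass

@dataclass
class PlumeTemplate:
    compound: str
    volatility: float   # constant, known in advance
    decay_rate: float   # constant, known in advance

    def instantiate(self, release_kg, wind_mps, hours):
        """Complete the template with situation-specific particulars and return
        a rough downwind footprint radius (km); purely illustrative arithmetic."""
        surviving = release_kg * (1 - self.decay_rate) ** hours
        return self.volatility * surviving ** 0.5 * wind_mps * hours / 10.0

chlorine = PlumeTemplate("chlorine", volatility=0.8, decay_rate=0.05)
# Real-time particulars arrive only once the release event is observed.
print(chlorine.instantiate(release_kg=500, wind_mps=4.0, hours=2))
```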
Dynamic Updating of Decision Models
Finally, still in recollection of Figure 1, amendment activities involving relational model-base
substructures would be set in motion by the appearance of an input(s) that causes a change in the
value of a variable. There are several different ways of assigning point-in-time values within the
confines of relational model-base substructures.
To the extent that it's available (and it's only going to be available for decision situations
involving very simple subjects and where there's the luxury of some leisure), a Positivist-type
approach would be the generally preferred option because it offers both high reliability (probative
value) and precision. It has variable values determined by direct measurement; there is no
substantive mediation, and conditioning consists solely of simple arithmetic (counting-type)
activities. This results in parameter values meriting considerable confidence, requiring neither
qualification nor dilution. Variable values may also, secondly, be a product of conventional
statistical conditioning. Statistical mediation may mean little more than numerical compression
to yield artifices like means and standard deviations, or it may involve the use of perhaps quite
elaborate projective (longitudinal or cross-sectional) functions. These will yield either range
estimates (where the likelihood of including the true parameter value somewhere within its
interval can be purchased only at the expense of practicable precision), or an inferential value
whose veracity must itself be treated as a variable.
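For concreteness, a small sketch of such statistical conditioning follows, using a conventional normal-approximation interval; the 1.96 multiplier and the sample values are illustrative choices, not requirements of the approach:

```python
# Compressing raw observations into a mean plus a range (interval) estimate
# whose width trades practicable precision for the likelihood of containing
# the true parameter value.
from statistics import mean, stdev
from math import sqrt

def range_estimate(sample, z=1.96):
    m, s, n = mean(sample), stdev(sample), len(sample)
    half_width = z * s / sqrt(n)
    return (m - half_width, m + half_width)

observations = [101.2, 99.8, 100.5, 102.1, 98.9, 100.7]
print(range_estimate(observations))
```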
There is a third alternative that's probably going to be the option of most frequent recourse for
most decision situations or administrative tasks. This would have current variable values
computed via a Bayesian-type procedure similar to one that’s been used for fifty or more years to
provide decision predicates for Dynamic Programming exercises (Beckman, 1968; DeNardo,
2003). Data is derived from a sequence of measurements/observations on variables of interest.
These data are then taken to represent running samples, with sampling frequencies set in reflection
of the apparent volatility of the subject/object of interest. Inputs will usually be sought from multiple
sources, with the selection of sources respectful of the rules for devising properly random and/or
adequately representative samples. The resultant data could then drive a Bayesian-type updating
function of this familiar form, ϋm,t = f(Φ ∪ {I}t), where ϋm,t is the current (time-t) Bayesian-dominant valuation for Vm, Φ is the prior data set (an assemblage of all previously-handled historical values for Vm) and {I}t is a posterior data set housing the values acquired during the latest iteration of the measurement/observational exercises on Vm. The advantage of Bayesian-type
parameter-setting mechanics is that they can be easily tuned and also readily retuned to allow
relatively more recent data to regularly exert relatively more influence.
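As a rough sketch of such tunable updating mechanics, the exponentially-weighted blend below is one simple stand-in for an updating function of this kind; the weight value and the sample batches are assumptions made purely for illustration:

```python
# The time-t valuation for a variable is recomputed from the prior valuation
# and the latest batch of readings; the recency weight can be tuned (and
# retuned) so that more recent data regularly exert more influence.

def update_valuation(prior_valuation, latest_batch, recency_weight=0.3):
    """Blend the running (historical) valuation with the mean of the newest readings."""
    batch_mean = sum(latest_batch) / len(latest_batch)
    return (1 - recency_weight) * prior_valuation + recency_weight * batch_mean

valuation = 50.0  # running valuation for some variable Vm
for batch in [[51, 52, 50], [55, 56, 57], [54, 53, 55]]:
    valuation = update_valuation(valuation, batch)
    print(round(valuation, 2))
```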
Not to belabor the obvious, but what’s been said here about dynamic updating of parameter
values could just as well apply to coefficients. Though not expected to arise with any regularity,
there can be cases where coefficient-related data are more accessible than those pertaining to
parametric values for variables. In cases where a change in the value of a coefficient is the
triggering incident, the subsequent effect would be to induce changes in the prevailing values for
the immediately involved variables, which might then go on to excite secondary changes in still
other variables included in a decision substructure.
To return to what’s expected to be the normal course of events, should it happen that the
receipt of new data forces a non-trivial change in the value of a variable, the concatenation of
downstream effects would be determined by the character of relational operators, or coefficient
functions, sitting in any activated cells of the relational substructure in which the originally altered
variable is embedded. Alternatively, things may be arranged so that a change in a variable's value
could induce a change in the value of a coefficient tying it to another variable, or variables, and so
radiate from there throughout the relevant portions of a relational substructure.
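The sketch below suggests one way such radiating updates might be mechanized; the variable names, coefficient functions and dictionary structure are illustrative assumptions, not a prescription for how relational substructures must be encoded:

```python
# A change in one variable's value radiates through a relational substructure:
# each activated cell holds a coefficient function tying a source variable to
# a target, and updates propagate along those ties, possibly exciting
# secondary changes in still other variables.

def propagate(values, links, changed, visited=None):
    """links maps a source variable to a list of (target, coefficient_fn) pairs."""
    if visited is None:
        visited = set()
    for target, coeff_fn in links.get(changed, []):
        if target in visited:
            continue  # guard against cycles in the substructure
        visited.add(target)
        values[target] = coeff_fn(values[changed])
        propagate(values, links, target, visited)  # secondary changes

values = {"demand": 100.0, "inventory": 40.0, "reorder_qty": 0.0}
links = {
    "demand": [("inventory", lambda d: 200.0 - d)],
    "inventory": [("reorder_qty", lambda inv: max(0.0, 60.0 - inv))],
}
values["demand"] = 150.0          # the triggering change
propagate(values, links, "demand")
print(values)
```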
In any event, a change in the value of any component variable or coefficient should trigger a
rerunning of the affected decision model, and so possibly lead to an alteration of the ongoing course-of-action, or perhaps to the adoption of a different approach altogether. Beyond this, any
iteration of a decision model can generate impacts that will resonate outward, along channels
interconnecting relational model-base substructures, and perhaps excite changes in other models as
dictated by first-order relational operators/functions. And, finally, for decision models (or
administrative agents) that are embedded in manifold network constructs, second-order relational
operators make it possible for a change in the value of a single variable to resonate throughout the
reaches of some administrative task, and so perhaps come to affect the fortunes of an enterprise in
its entirety.
It’s this last feature that makes it practically possible to think of relational model-base
structures as being able to support not just more extensive, but more effective, essays in dynamic
resource allocation. If, in fact, there’s a signature application for relational model-base structures,
it’s emergency management, given the high analytical loadings it imposes and the premium on
timeliness. If there were ever any doubts about the pressing need for improvements in this area,
they were dramatically dispelled by the awful inadequacies of the institutional responses to
Hurricane Katrina.
CONCLUSION
The grounding proposition of this paper was, recall, that the requirement for relational model-base structures, or something like them, arises from the likelihood of an increasing number of
enterprises becoming increasingly more dependent on decision models, particularly in their more
active vs. passive incarnations. This accounts for the problem the previous pages attempted to
answer: how enterprises’ administrative apparatus might be brought within the comprehension of
computers. To be sure, what these pages put forward was just a single vision of how model-base
structures might be constructed; they illuminated only one avenue of approach to the automation
of management. Other options surely exist; but it remains for others to find them.
While making administrative processes apparent to computers is the main mission for
relational model-base structures, there’s an ancillary role they can play that’s at least worth a
quick mention before closing. This posits an instructional rationale, as relational model-base
structures should be able to offer executives an opportunity to get to know their enterprises in a
new way. What a relational model-base structure can provide, after all, is a finely-drawn portrait
of an enterprise’s actual administrative system (as distinct from the distributions of nominal
decision responsibilities available from organization charts). Equipped with such knowledge,
executives might then institute a systematic search for ways in which their enterprise’s
administrative system might be ‘reengineered’ to make it more streamlined and elegant…or
maybe just more economical.
REFERENCES
Arctur, David and Zeiler, Mike. Designing Geodatabases: Case Studies in GIS Data Modeling. ESRI Press
(2004).
Barratt, Rod. Atmospheric Dispersion Modelling: An Introduction to Practical Applications. Earthscan (2001).
Bazzan, Ana, Klügl, Franziska and Ossowski, Sascha (Eds.) Applications of Agent Technology in Traffic and
Transportation, Birkhäuser (2005).
Bazzan, Ana, de Oliveira, D. and Lesser, V. “Using Cooperative Mediation to Coordinate Traffic Lights: a
Case Study”. Proceedings of Fourth International Joint Conference on Autonomous Agents and Multiagent
Systems, pp. 463-469 (July 2005).
Beckman, M. J. Dynamic Programming of Economic Decisions. Springer-Verlag (1968).
Berry, William D. Nonrecursive Causal Models. Sage (1984).
Breiman, L., Friedman, J., Stone, C. J. and Olshen, R. A. Classification and Regression Trees. CRC (1984).
Busch, Daniel and Grant, Conrad J. Changing the Face of War - The Co-operative Engagement Capability.
United States Navy (March 2003).
Collins, John and Gini, Maria. “Exploring Decision Processes in Multi-Agent Automated Contracting”.
IEEE Internet Computing. (Mar-Apr 2001).
Davenport, Thomas H. and Harris, Jeanne G., “Automated Decision Making Comes of Age”, MIT Sloan
Management Review (Summer 2005).
Davidow, W. H. and Malone, M. S. The Virtual Corporation. HarperBusiness, (1992).
Davidsson, Paul and Wernstedt, Fredrik. “A Multi-Agent System Architecture for Coordination of Just-In-Time Production and Distribution”. Proceedings of the 17th ACM Symposium on Applied Computing, Special Track on Coordination Models, Languages and Applications (2002).
Davidsson, Paul, Johansson, Stefan J., Persson Jan A. and Wernstedt, Fredrik. “Agent-based approaches and
Classical Optimization Techniques for Dynamic Distributed Resource Allocation: A preliminary study.”
AAMAS'03 workshop on Representations and Approaches for Time-Critical Decentralized
Resource/Role/Task Allocation, Melbourne, Australia. (2003).
Davidsson, Paul, Henesey, Larry, Ramstedt, Linda, Törnquist, Johanna and Wernstedt, Fredrik. “An Analysis
of Agent-Based Approaches to Transport Logistics.” Transportation Research Part C: Emerging
Technologies, Vol. 13(4); Elsevier (2005).
Dawkins, Richard. Unweaving the Rainbow. Houghton Mifflin (1998).
DeNardo, Eric V. Dynamic Programming : Models and Applications. Dover (2003).
Durfee, E. H., Lesser, Victor R. and Corkill, Daniel D. “Trends in Cooperative Distributed Problem Solving”
IEEE Trans. on Knowledge and Data Engineering (1995).
Eckerson, Wayne W. Performance Dashboards. John Wiley & Sons Inc. (2005).
Evans, Tom P., Leah K. VanWey, and Emilio F. Moran. “Human-Environment Research, Spatially Explicit
Data Analysis and GIS.” in: Seeing the Forest and the Trees: Human-Environment Interactions in Forest
Ecosystems (Emilio F. Moran and Elinor Ostrom, eds). MIT Press (2005).
Ershov, Yu. L., et al. (eds). Handbook of Recursive Mathematics. North Holland (1998).
Fox, Mark S., Barbuceanu, Mihai and Teigen, Rune. “Agent-Oriented Supply-Chain Management”,
International Journal of Flexible Manufacturing Systems. 12 (2000).
Freeman, J. and Skapura, D. Neural Networks. Addison-Wesley (1991).
Giglio, L., Descloitres, J., Justice, C. O., and Kaufman, Y. “An enhanced contextual fire detection algorithm
for MODIS.” Remote Sensing of Environment, 47 (2003).
Gmytrasiewicz, Piotr and Parsons, Simon. Game Theoretic and Decision Theoretic Agents. American
Association for Artificial Intelligence (2002).
Greene, James H. Production and Inventory Control Handbook. McGraw-Hill (1997).
Hall, D. L. and Llinas, J. Handbook of Multisensor Data Fusion. CRC Press (2001).
Hess, T., Rees, L. and Rakes, T. “Using Autonomous Planning Agents to Provide Model-based Decision-making Support.” Journal of Decision Systems. 14; 3 (2005).
Kaufman, Y. J., Justice, C. O., Flynn, L. P., Kendall, J. D., Prins, E. M., Giglio, L., Ward, D. E., Menzel,
W.P., and Setzer, A. W. “Potential global fire monitoring from EOS-MODIS.” Journal of Geophysical
Research. 103:32 (1998).
Kline, Jeffrey D., Alig, Ralph J. and Garber-Yonts, Brian. “Forestland Social Values and Open Spaces
Preservation.” Journal of Forestry. (Dec. 2004).
Klusch, M., Bürckert, H.-J., Gerber, A., Russ, C., “Applications of Information Agents and Systems”, in:
Practical Applications of Intelligent Agents (Jain, ed.), Springer (2002).
Korzyk, Alex D. A. “Cooperative Engagement Capability (CEC) Approach For Homeland Security
Information Operations”. Proceedings of the SCCT/2004.
Kropla, Bill. MapServer: Open Source GIS Development. Apress (2005).
Labarthe, O. Espinasse, B., Ferrarini, A. and Montreuil, B. “A Methodological Approach for Agent Based
Simulation of Mass Customizing Supply Chains.” Journal of Decision Systems. 14; 4 (2005).
Loh, W. Y. and Shih, Y. S. “Split Selection Methods for Classification Trees”. Statistica Sinica. 7 (1997).
Luckham, David. The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Addison Wesley Professional (2002).
Mahajan, Jayashree, Vakharia, Asoo J. and Radas, Sonja. “Channel Strategies and Stocking Policies in
Uncapacitated and Capacitated Supply Chains.” Decision Sciences, (Spring 2002).
Malczewski, Jacek. GIS and Multicriteria Decision Analysis. Wiley (1999).
Noh, Sanguk and Gmytrasiewicz, Piotr. “Flexible Multi-Agent Decision-Making Under Time Pressure.”
IEEE Transactions on Systems, Man and Cybernetics, 35; 5. (Sept 2005).
Orman, Levent V. “Structural Analysis of Decision Models.” Proceedings of the 28th Hawaii International
Conference on System Sciences (1995).
Ostebee, Arnold. “Multicriteria decision making: advances in MCDM models, algorithms, theory, and
applications.” American Mathematical Monthly (May 2000).
Perrow, C. Normal Accidents. Basic Books (1984).
Qi, Hairong, Iyengar, S. and Chakrabarty, K. “Multiresolution data integration using mobile agents in
distributed sensor networks.” IEEE Transactions on Systems, Man and Cybernetics. 31; 3 (Aug 2001).
Phillips-Wren, G., Forgionne G. “Advanced Decision Making Support using Intelligent Agent Technology.”
Journal of Decision Systems. 11; 2 (2002).
ReVelle, C. and Eiselt, H. A. “Location Analysis: A Synthesis and Survey”. European Journal of Operational Research (2005).
Rodriguez-Aguilar, Juan A., Giovanucci, Andrea, Reyes-Moro, Antonio, Noria, Francesc X., and Cerquides,
Jesus. “An agent-based decision support for actual-world procurement scenarios”. Proceedings of the IEEE/WIC
International Conference on Intelligent Agent Technology, IAT'03 (2003).
Rogers, Paul and Blenko, Marcia. The Decision-Driven Organization. Bain & Co. (2005).
Sutherland, J. W. “Integrative Systems: Assessing requirements and capabilities for intra- and inter-organizational contexts.” IEEE Transactions on Systems, Man, and Cybernetics. 28; 2 (March 1998).
Witten, Ian H. and Frank, Eibe. Data Mining: Practical machine learning tools with Java implementations.
Morgan Kaufmann (2000).
Zhang, Xiaoqin; Lesser, V.; Wagner, Tom. “Integrative Negotiation Among Agents Situated in
Organizations.” IEEE Trans. on Systems, Man, and Cybernetics. 36; 1 (2006).