Tutorial Slot - Paradigm Shift International

UAST and Evolving Systems of Systems in the Age of the Black Swan
Part 2: On Detecting Aberrant Behavior
www.parshift.com/Files/PsiDocs/Pap090901IteaJ-PathsForPeerBehaviorMonitoringAmongUAS.pdf
www.parshift.com/Files/PsiDocs/Pap091201IteaJ-MethodsForPeerBehaviorMonitoringAmongUas.pdf
“There is no difficulty, in principle, in developing synthetic organisms as
complex and as intelligent as we please. But we must notice two
fundamental qualifications; first, their intelligence will be an adaptation
to, and a specialization towards, their particular environment, with no
implication of validity for any other environment such as ours; and
secondly, their intelligence will be directed towards keeping their own
essential variables within limits. They will be fundamentally selfish.”
Principles of the Self-Organizing System, W. Ross Ashby, 1962
Based on a presentation at
UAST Tutorial Session
ITEA LVC Conference,
12 Jan 2009, El Paso, TX.
UAST: Unmanned Autonomous Systems Test
also: L3 Art Brooks did Masters paper here
rick.dove@stevens.edu, attributed copies permitted
1
Domain Independent Principles Can Inform UAST ConOps
[Diagram: Systems in Context. A Class 2 testing-enterprise system (federated?) operates within a testing environment (an ecology) containing Class 1 testing system(s), the UAST Class 2 systems under test, and the UASoS systems. Environmental forces include politics, technology, government procedures, military procedures, military reality, competitors, and enemies.]
Problem and Observation
• Self Organizing Systems of Systems are too complex to test beyond
“minimal” functionality and “apparent” rationality.
• Autonomous self organizing entities have a willful mind of their own.
• Unpredictable emergent behavior will occur in unpredictable situations.
• Emergent behavior is necessary and desirable (when appropriate).
• Inevitable: sub-system failure, command failure, enemy possession.
• UAS will work together as flocks, swarms, packs, and teams.
• Even human social systems exhibit unintended “lethal” consequences.
In biological social systems, members monitor/enforce behavior bounds.
Could UAS have built-in socially attentive monitoring (SAM) on mission?
Could UAST employ SAM proxies for monitoring antisocial UAS?
Challenges:
1) “Learning” the behavior patterns to monitor.
2) Technology for monitoring complex dynamic patterns in real time.
3) Decisive counter-consequence action.
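A hedged sketch of what on-mission SAM could look like (our construction, not from the slides): peers compare a monitored UAS's observed behavior against shared bounds, and the group flags it only when a quorum of observers agree it is out of bounds. All names and values are hypothetical.

```python
# Toy quorum-based socially attentive monitoring (illustrative only).
def flag_aberrant(observations, bounds, quorum=0.5):
    """observations: {observer: {peer: observed_value}}; bounds: (lo, hi).
    Returns the set of peers a majority of observers consider out of bounds."""
    lo, hi = bounds
    votes = {}
    for peer_readings in observations.values():
        for peer, value in peer_readings.items():
            votes.setdefault(peer, []).append(not lo <= value <= hi)
    return {p for p, v in votes.items() if sum(v) / len(v) > quorum}
```

For example, if two of three observers report peer "B" outside a (0, 5) bound, B is flagged while peers seen in-bounds are not.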
www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf
Survey on Lethality and Autonomous Systems
[Chart: responsibility for lethal errors, by responsible party. The soldier was found to be the most responsible party, and robots the least.]
Lethality and Autonomous Systems:
Survey Design and Results,
Lilia Moshkina, Ronald C. Arkin,
Technical Report GIT-GVU-07-16, Mobile Robot Laboratory,
College of Computing, Georgia Institute of Technology, p. 30, 2007
www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf
Applicability of ethical categories is ranked from more concrete and specific to
more general and subjective.
Lilia Moshkina, Ronald C. Arkin, Lethality and Autonomous Systems: Survey Design and Results,
Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 29, 2007
The Three (now Four) Laws of Robotics
(Isaac Asimov)
0) A robot may not harm humanity, or, by
inaction, allow humanity to come to
harm (added later).
1) A robot may not injure a human being
or, through inaction, allow a human
being to come to harm.
2) A robot must obey orders given it by
human beings except where such
orders would conflict with the First Law.
3) A robot must protect its own existence
as long as such protection does not
conflict with the First or Second Law.
This cover of I, Robot illustrates the
story "Runaround", the first to list
all Three Laws of Robotics
(Asimov 1942)
Self Organizing Inevitability
Isaac Asimov's three laws of robotics were developed to allow UxVs to coexist
with humans, under values held dear by humans (imposed on robots).
These were not weapon systems.
Asimov’s robots existed in a peaceful social environment. Ours are birthing into a
community of warfighters, with enemies, cyber warfare, great destructive
capabilities, human confusion, and a code of war.
Ashby notes that a self-organizing system by definition behaves selfishly,
and warns that its behaviors may be at odds with those of its creators.
So – can we afford to build truly self-organizing systems?
A foolish question. We will do that regardless of the possible dangers, just as we
opened the door to atomic energy, biohazards, organism creation,
nanotechnology, and financial meltdown.
Can a cruise missile on a mission be hacked and turned to the enemy’s bidding?
Perhaps we can say that it hasn’t occurred yet. Can a cruise missile get sick or
confused, and hit something it shouldn’t? That’s already happened.
The issue is not “has it happened”. The issue is “can it happen”.
We cannot test away bad things from happening,
so we had better be vigilant for signs of imminence,
and have actionable options when the time comes.
Four Selfish (Potential) Guiding Principles
(for synthetics)
Protection of permission to exist (civilians, public assets)
Protection of mission
Protection of self
Protection of others of like kind
A safety mechanism based on principles,
for we can never itemize all of the situational patterns
and the appropriate response to each.
ARTURO MEDINA
… and here’s the
Cat’s Cradle
Aberrant behavior arising in a stable social system
is detected and opposed
Example: Female penguin attempting to steal a replacement egg
for the one she lost is prevented from doing so by others.
wip.warnerbros.com/marchofthepenguins/
Ganging Up on Aberrant Behavior
T. Monnin, F.L.W. Ratnieks, G.R. Jones, R. Beard, Pretender punishment induced by chemical signaling in a queenless ant, Nature, V. 419, 5Sep2002
http://lasi.group.shef.ac.uk/pdf/mrjbnature2002.pdf
Queenless ponerine ants have no queen caste. All
females are workers who can potentially mate and
reproduce. A single “gamergate” emerges, by
virtue of alpha rank in a near-linear dominance
hierarchy of about 3–5 high-ranking workers.
Usually the beta replaces the gamergate if she
dies. A high-ranker can enhance her inclusive
fitness by overthrowing the gamergate, rather than
waiting for her to die naturally.
(a) To end coup behavior, the gamergate (left)
approaches the pretender, usually from behind or
from the side, briefly rubs her sting against the
pretender depositing a chemical signal, then runs
away, leaving subsequent discipline to others.
(b) One to six low-ranking workers bite and hold
the appendages of the pretender for up to 3–4
days with workers taking turns. Immobilization
can last several days, and typically results in the
pretender losing her high rank. It is not clear why
punishment causes loss of rank, but it is probably
a combination of the stress caused by
immobilization and being prevented from
performing dominance behaviours. Occasionally
the immobilized individual is killed outright.
Promising Things to Leverage
Social pattern monitoring
• Relationships (Gal Kaminka, Ph.D. dissertation)
• Trajectories (Stephen Intille, Ph.D. dissertation)
• Emergence (Sviatoslav Braynov, repurposed algorithm concepts)
Technology and Knowledge
• Human expertise (Gary Klein, Philip Ross, Herb Simon)
• Biological feedforward hierarchies (Thomas Serre, Ph.D. dissertation)
• Parallel pattern processor (Curt Harris, VLSI architecture)
Accuracy: Decentralized Beats Centralized Monitoring
From: Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, p. 6.
www.isi.edu/soar/galk/Publications/diss-final.ps.gz.
“We explore socially-attentive algorithms for detecting teamwork failures under
various conditions of uncertainty, resulting from the necessity of selectivity.
We analytically show that despite the presence of uncertainty about the actual
state of monitored agents, a centralized active monitoring scheme can guarantee
failure detection that is either sound and incomplete, or complete and unsound.
[centralized: no false positives (sound) or no false negatives (complete), not both]
However, this requires monitoring all agents in a team, and reasoning about
multiple hypotheses as to their actual state.
We then show that active distributed teamwork monitoring results in sound and
complete detection capabilities, despite using a much simpler algorithm. By
exploring the agents’ local states, which are not available to the centralized
algorithm, the distributed algorithm: (a) uses only a single, possibly incorrect
hypothesis of the actual state of monitored agents, and (b) involves monitoring
only key agents in a team, not necessarily all team-members (thus allowing even
greater selectivity).”
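Kaminka's sound-versus-complete tradeoff for the centralized monitor can be made concrete with a toy illustration of our own: when the monitor holds multiple hypotheses about a monitored agent's state, flagging on any failure hypothesis catches every real failure but admits false alarms (complete, unsound), while flagging only when all hypotheses agree admits no false alarms but misses failures (sound, incomplete).

```python
# Toy illustration of the sound/complete dilemma under hypothesis uncertainty.
def detect_any(hypotheses):
    # Complete but unsound: never misses a failure, may raise false alarms.
    return any(h == "failed" for h in hypotheses)

def detect_all(hypotheses):
    # Sound but incomplete: never false-alarms, may miss real failures.
    return all(h == "failed" for h in hypotheses)

# The monitor cannot tell which hypothesis describes the agent's true state.
ambiguous = ["ok", "failed"]
```

With the ambiguous hypothesis set, `detect_any` flags (possibly falsely) and `detect_all` stays silent (possibly missing a real failure); neither scheme can be both sound and complete.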
Execution Monitoring in Multi-Agent Environments
Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, www.isi.edu/soar/galk/Publications/diss-final.ps.gz.
A key goal of monitoring other agents:
• Detect violations of the relationships that agent is involved in
• Compare expected relationships to those actually maintained
• Diagnose violations, leading to recovery
[Diagram: an attacker incorrectly flying with a scout that is looking for the enemy; an attacker correctly waiting for the scout report.]
Motivation for relationship failure-detection:
• Covers a large class of failures
• Critical for robust performance of the entire team
Relationship models specify how agents’ states are related:
• Formation model specifies relative velocities, distances
• Teamwork model specifies that team plans are jointly executed
• Many others: coordination, mutual exclusion, etc.
Agent modeling:
• Infer agents’ state from observed actions via plan-recognition
• Monitor agents’ attributes specified by relationship models
Identifying Football Play Patterns from Real Game Films
Visual Recognition of Multi-Agent Action
Stephen Sean Intille, Ph.D. Thesis, MIT, 1999
http://web.media.mit.edu/~intille/papers-files/thesis.pdf.
[Figures: chalkboard patterns a receiver can run; a p51curl play, which doesn’t happen like the chalkboard but is still recognizable.]
The task of recognizing American football plays was selected to investigate the
general problem of multi-agent action recognition.
This work indicates one method for monitoring
multi-agent performance according to plan
Maybe Even….Detecting Emergent Behaviors in Process
Sviatoslav Braynov, Murtuza Jadliwala, Detecting Malicious Groups of Agents.
The First IEEE Symposium on Multi-Agent Security and Survivability, 2004.
“In this paper, we studied coordinated attacks and the problem of detecting
malicious networks of attackers. The paper proposed a formal method and an
algorithm for detecting action interference between users. The output of the
algorithm is a coordination graph which includes the maximal malicious group of
attackers including not only the executers of an attack but also their assistants.
The paper also proposed a formal metric on coordination graphs that help
differentiate central from peripheral attackers.”
“Because the methods proposed in the paper allow for detecting interference
between perfectly legal actions, they can be used for detecting attacks at their
early stages of preparation. For example, coordination graphs can show all agents
and activities directly or indirectly related to suspicious users.”
------- conjecture begging investigation -------
This work focused on identifying the members of a group of “perpetrators”
among a group of “benigns”, based on their cooperative behaviors in causing an
event. It is applied in both forensic analysis and in predictive trend spotting.
It may be a methodology for identifying the conditions of specific emergent
behavior after the fact – for “learning” new patterns of future use.
It may also provide an early warning mechanism for
detecting emergent aberrant team behavior, rather than aberrant UAS behavior.
The RPD (Recognition Primed Decision)
model offers an account of situation
awareness. It presents several aspects of
situation awareness that emerge once a
person recognizes a situation. These are
the relevant cues that need to be
monitored, the plausible goals to pursue
and actions to consider, and the
expectancies. Another aspect of situation
awareness is the leverage points. When
an expert describes a situation to
someone else, he or she may highlight
these leverage points as the central
aspects of the dynamics of the situation.
Experts see inside events and objects.
They have mental models of how tasks
are supposed to be performed, teams are
supposed to coordinate, equipment is
supposed to function. This model lets
them know what to expect and lets them
notice when the expectancies are
violated. These two aspects of expertise
are based, in part, on the experts’ mental
models.
Gary Klein (1998) Sources of Power: How People Make Decisions, 2nd MIT
Press paperback edition, Cambridge, MA, p. 152.
‘Field Sense’ Gretzky-Style
Jennifer Kahn, Wayne Gretzky-Style 'Field Sense' May Be
Teachable, Wired Magazine, May 22, 2007.
www.wired.com/science/discoveries/magazine/15-06/ff_mindgames#
Five seconds of the 1984 hockey game
between the Edmonton Oilers and the
Minnesota North Stars:
The star of this sequence is Wayne
Gretzky, widely considered the greatest
hockey player of all time. In the footage,
Gretzky, barreling down the ice at full
speed, draws the attention of two
defenders. As they converge on what
everyone assumes will be a shot on
goal, Gretzky abruptly fires the puck
backward, without looking, to a
teammate racing up the opposite wing.
The pass is timed so perfectly that the
receiver doesn't even break stride.
"Magic," Vint says reverently. A
researcher with the US Olympic
Committee, he collects moments like
this. Vint is a connoisseur of what
coaches call field sense or "vision," and
he makes a habit of deconstructing
psychic plays: analyzing the steals of
Larry Bird and parsing Joe Montana's
uncanny ability to calculate the
movements of every person on the field.
The Stuff of Expertise
Research indicates that human expertise (extreme domain specific sense-making)
is primarily a matter of meaningful pattern quantity – not better genes.
According to an interview with Nobel Prize winner Herb Simon (Ross 1998),
people considered truly expert in a domain (e.g. chess masters, medical
diagnosticians) are thought unable to achieve that level until they’ve accumulated
some 200,000 to a million meaningful patterns, requiring some 20,000 hours of
purposeful focused pattern development.
The accuracy of their sense making is a function of the breadth and depth of their
pattern catalog.
In biological entities, the accumulation of large expert-level pattern quantities
does not manifest as slower recognition time.
All patterns seem to be considered simultaneously for decisive action. There is no
search and evaluation activity evident.
By contrast, automated systems, regardless of how they obtain and represent
learned reference patterns, execute time-consuming sequential steps to sort
through pattern libraries and perform statistical feature mathematics.
This is the nature of the computing mechanisms and recognition algorithms
typically employed in this role.
Philip Ross (1998), “Flash of Genius,” an interview with Herbert Simon,
Forbes, November 16, pp. 98–104, www.forbes.com//forbes/1998/1116/6211098a.html.
Also: Philip Ross, The Expert Mind, Scientific American, July 2006
Rapid visual
categorization
Reverse
Engineering
the Brain
Visual input can be classified
very rapidly…around 120 msec
following image onset…At this
speed, it is no surprise that
subjects often respond without
having consciously seen the
image; consciousness for the
image may come later or not at
all.
Dual-task and dual-presentation
paradigms support the idea that
such discriminations can occur in
the near-absence of focal, spatial
attention, implying that purely
feed-forward networks can
support complex visual decision-making in the absence of both
attention and consciousness.
This has now been formally
shown in the context of a purely
feed-forward computational
model of the primate’s ventral
visual system (Serre et al., 2007).
www.technologyreview.com/printer_friendly_article.aspx?id=17111
www.scholarpedia.org/article/Attention_and_consciousness/processing_without_attention_and_consciousness
Explaining Rapid Categorization.
Thomas Serre, Aude Oliva, Tomaso Poggio.
http://cbcl.mit.edu/seminars-workshops/workshops/serre-slides.pdf
The Monitoring Selectivity Problem:
Unacceptable Accuracy Compromise
From: Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, pp. 3-4.
www.isi.edu/soar/galk/Publications/diss-final.ps.gz.
“A key problem emerges when monitoring multiple agents: a monitoring agent
must be selective in its monitoring activities (both raw observations and
processing), since bandwidth and computational limitations prohibit the agent
from monitoring all other agents to full extent, all the time.
However, selectivity in monitoring activities leads to uncertainty about monitored
agent’s states, which can lead to degraded monitoring performance. We call this
challenging problem the Monitoring Selectivity Problem: Monitoring multiple
agents requires overhead that hurts performance; but at the same time,
minimization of the monitoring overhead can lead to monitoring uncertainty that
also hurts performance.
Key questions remain open:
• What are the bounds of selectivity that still facilitate effective monitoring?
• How can monitoring accuracy be maintained in the face of limited knowledge of
other agents’ states?
• How can monitoring be carried out efficiently for on-line deployment?
This dissertation begins to address the monitoring selectivity problem in teams
by investigating requirements for effective monitoring in two monitoring tasks:
detecting failures in maintaining relationships, and determining the state of a
distributed team (for both failure detection and visualization).”
Processor Recognition Speed Independent of
Pattern Quantity and Complexity
Snort chart source: Alok Tongaonkar, Sreenaath Vasudevan, R. Sekar, Fast Packet Classification for Snort by Native Compilation of Rules,
Proceedings of the 22nd Large Installation System Administration Conference (LISA '08), USENIX, Nov 9–14, 2008.
www.usenix.org/events/lisa08/tech/full_papers/tongaonkar/tongaonkar_html/index.html
Processor info source: Rick Dove, Pattern Recognition without Tradeoffs: Scalable Accuracy with No Impact on Speed,
To appear in Proceedings of Cybersecurity Applications & Technology Conference For Homeland Security, IEEE, April 2009.
www.kennentech.com/Pubs/2009-PatternRecognitionWithoutTradeoffs-6Page.pdf.
[Chart: nanoseconds per packet (0–4000) vs. number of rules employed (0–600), for 8 million real packets run on a 3.06 GHz Intel Xeon processor. The Snort 2.6 packet-header interpreter, and the same interpreter with native-code replacement, both slow as rules are added; the pattern processor’s comparative speed is flat (rule count unbounded).]
Comparison shows pattern processor’s flat constant speed recognition
vs typical computational alternative. Example chosen for ready availability.
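One way to see how a flat recognition curve is possible (a toy contrast of our own, not the cited processor's design): a sequential matcher tests every rule on every packet, so per-packet work grows with the rule count, while a content-addressable index keyed by a discriminating byte touches only the rules that byte selects, so per-packet work stays near-constant as rules are added.

```python
# Sequential matching: work is proportional to the total number of rules.
def sequential_match(rules, packet):
    return [r for r in rules if packet.startswith(r)]

# Content-addressable indexing: rules are keyed by their first byte, so
# only the rules selected by the packet's first byte are ever examined.
def build_cam(rules):
    cam = {}
    for r in rules:
        cam.setdefault(r[0], []).append(r)
    return cam

def cam_match(cam, packet):
    return [r for r in cam.get(packet[0], []) if packet.startswith(r)]
```

Both matchers return the same answers; only the amount of work per packet differs as the rule set grows.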
Reconfigurable Pattern Processor
Reusable Cells Reconfigurable in a Scalable Architecture
Independent detection cell: content-addressable by the current input byte.
If active, and satisfied with the current byte, it can activate other designated cells, including itself (cell-satisfaction activation pointers; cell-satisfaction output pointers report matches).
Up to 256 possible features can be “satisfied” by all so-designated byte values.
Individual detection cells are configured into feature-cell machines by linking activation pointers (adjacent-cell pointers not depicted here).
An unbounded number of feature cells configured as feature-cell machines can extend indefinitely across multiple processors.
All active cells have simultaneous access to current data-stream byte
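A minimal software sketch of the cell mechanics described above (our model of the text, not the VLSI design): each cell lists the byte values that satisfy it and the cells it activates; always-active cells anchor pattern starts, and a designated final cell reports a match.

```python
class Cell:
    def __init__(self, satisfied_by, activates):
        self.satisfied_by = set(satisfied_by)  # byte values that satisfy this cell
        self.activates = set(activates)        # cells activated when satisfied

def run(cells, always_active, final_cells, stream):
    """Feed the stream byte by byte; every active cell sees the current byte."""
    active = set(always_active)
    hits = []
    for i, b in enumerate(stream):
        nxt = set(always_active)               # anchor cells stay active
        for c in active:
            if b in cells[c].satisfied_by:     # cell satisfied by current byte
                if c in final_cells:
                    hits.append(i)             # a full pattern completed here
                nxt |= cells[c].activates
        active = nxt
    return hits

# Three chained cells recognize the byte sequence "abc" anywhere in a stream.
cells = [Cell("a", {1}), Cell("b", {2}), Cell("c", set())]
```

Running `run(cells, [0], {2}, "xabcabc")` reports both occurrences of "abc"; chaining longer pointer paths builds larger feature-cell machines the same way.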
Simple Example: Pattern Classification Method Suitable for Many
Syntactic, Attributed Grammar, and Statistical Approaches
[Diagram: partial conceptual architecture stack. A layered stack of ½ million detection cells feeds configured FCMs (FCM-1 … FCM-n). FCM activation pointers and output transform pointers route results through transform layers – reinitialization transforms (output register R), logical intersection transforms (output register S), logical union transforms (output register P), and threshold counter transforms (output register T) – into multiple threshold down counters; a classification output occurs for any down counter reaching zero. A very simple weighted-feature example shows FCMs wired via output pointers to Class-1 … Class-4 counters with weights (e.g. weight = 3, weight = 2).]
Additional transforms provide sub-pattern combination logic
Finite Cell Machines, as depicted, could represent sub-patterns or “chunked” features shared by multiple
pattern classes. Padded FCM-7 and FCM-n increase feature weight with multiple down counts.
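The weighted-feature classification above can be sketched as threshold down counters (hypothetical wiring; the weights and thresholds are invented for illustration): each firing FCM decrements the class counters it points to by its weight, and the first counter to reach zero emits its class.

```python
# Threshold down-counter classification sketch (illustrative wiring only).
def classify(fired_fcms, weights, wiring, thresholds):
    """fired_fcms: FCMs that detected their sub-pattern, in firing order.
    wiring: {fcm: [class, ...]} output pointers; thresholds: {class: count}.
    Returns the first class whose down counter reaches zero, else None."""
    counters = dict(thresholds)
    for fcm in fired_fcms:
        for cls in wiring.get(fcm, []):
            counters[cls] -= weights.get(fcm, 1)   # decrement by feature weight
            if counters[cls] <= 0:
                return cls                         # classification output
    return None
```

Sharing an FCM across several classes (as chunked sub-patterns) is just a matter of listing more than one class in its wiring entry.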
On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission,
www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
Value-Based Feature Example
A reference pattern example for behavior-verification of a mobile object.
Is it traveling within the planned space/time envelope?
Using GPS position data: Latitude, Longitude, Altitude.
[Diagram: an FCM configured to classify failure/success. Three fields – LAT, LON, ALT – are each checked against acceptable ranges of values (absolute and relative; linear, log, or other scale; 256 distance values; minimum separation). Output: F = failure, S = success.]
On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission,
www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
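A minimal sketch of the space/time envelope check (the field names and range values are ours, chosen for illustration): each GPS field must fall inside its acceptable range, else the pattern outputs F.

```python
# Acceptable ranges per field (illustrative values, not from the slides).
ENVELOPE = {"lat": (33.0, 34.0), "lon": (-107.0, -106.0), "alt": (1000.0, 5000.0)}

def verify(fix, envelope=ENVELOPE):
    """Return 'S' (success) if every field is in range, else 'F' (failure)."""
    for field, (lo, hi) in envelope.items():
        if not lo <= fix[field] <= hi:
            return "F"
    return "S"
```

A time dimension could be added the same way, with per-waypoint envelopes selected by mission clock.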
Example: Monitoring Complex Multi-Agent Behaviors
Packetized data can use multi-part headers
to activate appropriate reference pattern sets for different times
[Diagram: two reference pattern sets. UAS 1002 on task 3018 – header fields UAS ID 001.002, Task ID 003.018 – activates FCM-49, which checks LAT, LON, ALT and outputs F (failure) or S (success). The same UAS 1002 on task 3002 – header fields UAS ID 001.002, Task ID 003.002 – activates FCM-50 instead.]
On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission,
www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
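The header-based routing above reduces to a lookup (a sketch; the FCM names follow the slide, the function is ours): the (UAS ID, Task ID) pair in the packet header selects which reference pattern set checks the payload.

```python
# (UAS ID, Task ID) header fields select the active reference pattern set.
PATTERN_SETS = {
    ("001.002", "003.018"): "FCM-49",  # UAS 1002 on task 3018
    ("001.002", "003.002"): "FCM-50",  # UAS 1002 on task 3002
}

def route(uas_id, task_id):
    """Return the FCM activated for this header, or None if no set matches."""
    return PATTERN_SETS.get((uas_id, task_id))
```

New (UAS, task) pairings add pattern sets without touching the routing logic.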
Nature has sufficient, but not necessarily optimal, systems – One example:
Hybrid Adaptation Could Improve on Natural Systems
Applies to:
• Evolutionary – populations.
• Learning – individuals.
• Hybrid or augmented – either.
Outcome:
• Evolutionary – produces new design features.
• Learning – improves use of a fixed design.
• Hybrid – may be able to do both, or do either better.
Time for one loop to execute:
• Evolutionary – period between generations; generally slow compared to the timescale of actions.
• Learning – period for one action (sense-process-decide-act) loop, plus the associated learning (observe action consequences – process – make changes) loop.
• Hybrid – could be accelerated.
Parallelism of processing through interaction:
• Evolutionary – highly parallel; every member of the population is a simultaneous experiment ‘evaluating’ the fitness of one set of variations.
• Learning – serial; an individual system or organism experiments with one strategy at a time.
• Hybrid – could use the learning mechanism to create directed evolution, and evolutionary strategies to improve learning. Could also parallelize learning through either parallel processing in a single individual, or through networking a population of learning systems.
Context sensitivity:
• Evolutionary – in retrospect only, through some variations turning out to be fitter in the context than others.
• Learning – in anticipation, i.e. before choice of action or response, as well as in retrospect through feedback from consequences of action.
• Hybrid – could extend context sensitivity to influence design choices as well as action choices.
Alignment of fitness and selection mechanism:
• Evolutionary – 100%.
• Learning – highly variable.
• Hybrid – could improve alignment in learning systems by developing better proxies for fitness to drive selection.
Grisogono, A.M. “The Implications of Complex Adaptive Systems Theory for C2.” Proceedings of the 2006 Command
and Control Research and Technology Symposium, 2006, www.dodccrp.org/events/2006_CCRTS/html/papers/202.pdf
Related Implications and Points
T&E cannot be limited to pre-deployment – it must be an ongoing, never-ending
activity built into the SoS operating methods.
LVC – Put the tester into the environment – total VR immersion – as a player with
intervention capability (the ultimate driving machine). Humans will “see”
experientially and recognize things in real-time that forensics and remote data
analysis will not recognize.
These things we build are not children that we can watch and guide and correct.
They need to have a sense of ethics and principles that inform unforeseen
situational response.
The biological “expertise” pattern recognition capability needs to exist in both the
testing environment and on-board. We are building intelligent willful entities that
carry weapons.
Status Q1 2010
• Kaminka’s socially attentive monitoring examples are modeled.
• Intille’s trajectory recognition modeling was started; another approach is a work in progress.
• Serre’s feedforward-hierarchy image recognition, Level 1, is modeled.
These algorithm models reside with others in a wiki
investigating collaborative parallel-algorithm development.
A processor emulator/compiler exists for algorithm modeling.
• One defense contractor is already working on a classified project.
• VLSI availability: ETA Q1 2012.
• ~128,000 feature cells expected for first-generation modules.
• Chips can be combined for unbounded scalability.
Pursuit of interesting problems to attack with this new capability…
• x Inc: Collision avoidance in cluttered airspace.
• PSI Inc: Distributed anomaly detection, and hierarchical sensemaking
• OntoLogic LLC: Secure software code verification
This work was supported in part by the
U.S. Department of Homeland Security award NBCHC070016.
Aberrant behavior will not be tolerated!