Cognitive science

Brain, Mind, Consciousness
and the Ghost in the Machine
Włodzisław Duch
Department of Computer Science,
School of Computer Engineering,
Nanyang Technological University, Singapore
& Department of Informatics,
Nicolaus Copernicus University, Toruń, Poland
Google: Duch
Toruń
Nicolaus Copernicus: born in 1473
Singapore
3 most important questions
Where do we come from? Cosmology, evolutionary biology.
Who are we?
Religion, philosophy, cognitive science.
Where are we going?
What are our values?
William James, Pragmatism and Common Sense (1907):
“... our fundamental ways of thinking about things are
discoveries of exceedingly remote ancestors”.
Steven Pinker, The Blank Slate: The Modern Denial of Human Nature
(2002), talks about 3 dogmas deeply embedded in our thinking:
• The Blank Slate (Tabula Rasa) – no innate traits in human nature.
• Noble Savage – people are born good, society corrupts.
• The Ghost in the Machine – souls make free choices, brains follow.
Many fear that discovery of the true human nature will strip life of
meaning and purpose, dissolve personal responsibility, justify inequality.
Comeback of the soul?
Francis Crick, “The Astonishing Hypothesis: The Scientific Search for the
Soul” (1994): “You, your joys and your sorrows, your memories and your
ambitions, your sense of personal identity and free will, are in fact no
more than the behavior of a vast assembly of nerve cells and their
associated molecules”.
• Whatever Became of the Soul? Scientific and Theological Portraits of
Human Nature (Brown, Murphy and Malony, Eds, 1998).
• 1998 Ethics & Public Policy Center: “Neuroscience and the Human
Spirit” conference; AAAS Dialogue on Science, Ethics, and Religion.
• Steven Pinker (MIT) and Richard Dawkins (Oxford), public discussion
in London, 1999, “Is science killing the soul?”
• Brain, Mind, Consciousness and the Soul discussion (Warsaw, March
2006), between experts in quantum physics, neurophysiology,
psychology, neurocognitive science, philosophy and theology, will be
published in the “Kognitywistyka” (Polish Cognitive Science) journal.
Ancient view
Gilbert Ryle, The Concept of Mind, Univ. of Chicago Press (1949)
Is there a ghost in the machine? Or is mind a product of the brain?
Is there a horse inside the steam train?
Mind is not a thing; it is a process, a succession of brain states.
Bible: psychosomatic unity of human nature.
Duch W. (1999) Duch i dusza, czyli prehistoria kognitywistyki [Spirit and soul, or the prehistory of cognitive science].
Kognitywistyka i Media w Edukacji 1, pp. 7-38.
Soul, spirit: dozens of meanings!
Things do not move by themselves, bodies are animated by spirits/souls.
Egyptians: 7 immortal souls, including shadow and personal name!
Aristotle (De anima) and St Thomas (Summa Theologica):
3 souls: vegetative or plant soul (growth), an animal soul (response),
philosopher’s soul (mind) – but these concepts lost their reference.
Cognitive science: foundation for
understanding
How do we know anything?
Franz-Joseph Gall (1758-1828) discovered 26 “organs” on the
surface of the brain which affect the contour of the skull,
including a “murder organ” present in murderers.
Phrenology became a widely accepted theory and cranioscopy
was even more popular then than psychological testing is today;
psychograph machines were sold until 1937.
Thousands of observations supported phrenology!
Are we any wiser now?
Belief in miracle cures: is it only a placebo effect?
Is homeopathy effective? No, but it is profitable …
Is Traditional Chinese Medicine effective? Who knows?
Be open, but skeptical! Examine your cognition!
Cognitive math and physics
G. Lakoff & R. E. Núñez,
"Where Mathematics Comes From: How the Embodied
Mind Brings Mathematics into Being". Basic Books 2000
Check Wikipedia article on Cognitive Mathematics.
How can a number express a concept?
How can mathematical formulas and equations express general ideas that occur
outside of mathematics, ideas like recurrence, change, proportions, self-regulating processes, and so on?
How can "abstract" mathematics be understood?
What cognitive mechanisms are used in mathematical understanding?
Can we understand the meaning of Euler’s formula e^{iπ} + 1 = 0?
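The symbolic derivation itself is a one-liner from Euler's identity for the complex exponential; the cognitive question is why manipulating these symbols can feel like understanding:

```latex
e^{i\theta} = \cos\theta + i\sin\theta
\;\;\Rightarrow\;\;
e^{i\pi} = \cos\pi + i\sin\pi = -1
\;\;\Rightarrow\;\;
e^{i\pi} + 1 = 0 .
```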
There is no agreement on the meaning of quantum mechanics.
Can a cognitive approach solve the problems of understanding physics?
A few more CS applications
Animal social behavior is a result of specific brain organization, which forms
a basis of moral behavior.
Understanding origins of morality and ethics led to a conference
on neuroethics at Stanford University in 2002.
Empathy, the basis of compassion, is closely related to our ability to
imitate and understand other people using mirror neurons.
Neuroaesthetics tries to understand what seems beautiful to our brains and why,
why art and music are common in all human societies, and what we can
learn about the brain from artists’ experiments.
S. Zeki, Inner Vision: An Exploration of Art and the Brain (1999).
Neuroaesthetics Institute (UCL), UC Berkeley Department, Harvard’s Institute for
Music and Brain Science … journals, conferences, courses.
Cognitive everything
Has memory always worked as it does now?
Ancient texts suggest significant differences.
Dragon sightings in the 11th century, UFO sightings in the 20th.
Important for understanding history, ancient beliefs and
religions, but also witnesses’ reliability, and even one’s own
trustworthiness (false memory syndromes).
M. Persinger, "Neuropsychological Bases of God Beliefs" (1987).
ShaktiLight, a transcranial magnetic stimulation device, creates
epiphanies, out-of-body experiences, etc. in many people.
Blanke O, Landis T, Spinelli L, Seeck M, Out-of-body experience
and autoscopy of neurological origin. Brain 127: 243-258, 2004.
Fairly common experiences, although only a few people talk about them.
Extensive brain imaging studies show disintegration of personal/extrapersonal
space due to dysfunction of temporoparietal junction.
CS & Business
Neuromarketing: brain imaging tells us more than we know
ourselves about our minds …
Recent fMRI proof of the effectiveness of branding: people
like Pepsi but choose Coke.
Understanding human decision-making is of primary importance
for business and financial world.
Gerald Zaltman, How Customers Think: Essential Insights into
the Mind of the Market (Harvard Business School Press), 2003.
Cognitive approach to understanding how – and why – customers
buy; a great introduction to CS for business people.
J. Goldenberg & D. Mazursky, Creativity in Product Innovation.
Cognitive revolution
In many fields of science and art we are going through
a Cognitive Revolution!
Understanding of human nature requires serious re-thinking.
Ancient ideas of ghosts in the machine, a nice cover for our ignorance,
are no longer acceptable – we are not so ignorant any more!
Can we trust our senses? Our minds?
Harry McGurk and John MacDonald,
“Hearing lips and seeing voices”,
Nature 264, 746-748 (1976)
Visual illusions: dragon.
Reality is constructed from measurements
and signal processing done by the sensory
subsystems!
Noise and stochastic resonance:
vision, tactile & associative memory!
Cognitive illusions
All we are able to experience are our brain states!
We know much less about cognitive illusions.
N. Nicholls, Bulletin of the American Meteorological Society (1999).
“These illusions, and ways to avoid them impacting on decision making,
have been studied in the fields of law, medicine, and business. The
relevance of some of these illusions to climate prediction is discussed
here. The optimal use of climate predictions requires providers of
forecasts to understand these difficulties and to make adjustments for
them in the way forecasts are prepared and disseminated.”
It is all in your head, although feels “out there”!
Zen story (~800 years): “It’s not the flag that moves, it is not wind,
it is your mind that moves”.
But ... can the brain understand itself?
If the brain were so simple, we would be too stupid to understand it.
Complexity of the brain
Simple computational models inspired by neural networks
show many characteristics of real associative memories:
1. Memory is distributed: many neurons participate in encoding of each memory trace.
2. Damage to the network leads to graceful degradation of performance instead of forgetting specific items.
3. Memory is content-addressable, recalled from partial cues.
4. Recall time does not depend on the number of memorized patterns.
5. Interference (seen in mistakes) and associations between different memory patterns depend on their similarity.
6. Attempts to memorize too many patterns in a short time lead to chaotic behavior.
• Models explaining most neuropsychological syndromes exist; computational psychiatry has been developing rapidly since 1995.
• Brain-like computing models provide real brain-like functions.
Brain-like computing models provide real brain-like functions.
=> Complexity of the brain is not the main problem! Whole brain sim!
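A minimal sketch of such a model (a Hopfield-style network in Python; sizes and the damage level are arbitrary) illustrates several of the properties above: distributed storage, recall from partial cues, and graceful degradation when connections are removed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random bipolar (+1/-1) patterns with the Hebbian outer-product rule.
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)                      # no self-connections

def recall(cue, steps=20):
    """Iterate the network dynamics until it settles into an attractor."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Content-addressable recall: corrupt 20% of a stored pattern and recover it.
cue = patterns[0].copy()
flipped = rng.choice(n_units, size=20, replace=False)
cue[flipped] *= -1
print("overlap after recall:", recall(cue) @ patterns[0] / n_units)      # close to 1.0

# Graceful degradation: remove 30% of the synapses; recall usually still works.
W *= rng.random(W.shape) > 0.3
print("overlap with damaged net:", recall(cue) @ patterns[0] / n_units)
```

Trying to store many more patterns than roughly 0.14 times the number of units makes recall with this learning rule unreliable, echoing point 6.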
Cognitive robotics & complex devices
Robots need artificial minds, cognitive and affective control.
In fact all complex devices need artificial minds to
communicate with us effectively. Smart phones
will soon have hundreds of functions, but the
complexity of their use should be hidden from us.
Human-Computer Interaction is becoming a central
engineering problem.
Humanoid robotics
Robots need artificial minds, cognitive and affective control.
Toys – the AIBO family is quite advanced: over 100 words,
face/voice recognition, about 6 weeks to “grow up”, self-charging.
Most advanced humanoid robots:
Sony Qrio: standing up, dancing,
running, conducting an orchestra …
Honda P3
Honda Asimo
Mitsubishi Heavy Industries’ Wakamaru, the first commercially
sold household robot (Sept 2005)!
Qrio: predicts its next movement in real time and shifts its center of gravity in
anticipation; very complex motor control, but few cognitive functions.
Wakamaru: recognizes faces, orients itself towards people and greets
them, recognizes 10,000 words but does not understand much.
Artificial minds in robots and complex devices are still a dream …
Brain-inspired architectures
G. Edelman (Neurosciences Institute) & collaborators, created a series
of Darwin automata, brain-based devices, “physical devices whose
behavior is controlled by a simulated nervous system”.
(i) The device must engage in a behavioral task.
(ii) The device’s behavior must be controlled by a simulated
nervous system having a design that reflects the brain’s
architecture and dynamics.
(iii) The device’s behavior is modified by a reward or value system that
signals the salience of environmental cues to its nervous system.
(iv) The device must be situated in the real world.
Darwin VII consists of: a mobile base equipped with a CCD camera and
IR sensor for vision, microphones for hearing, conductivity sensors for
taste, and effectors for movement of its base, of its head, and of a
gripping manipulator having one degree of freedom; 53K mean-firing
+ phase neurons, 1.7 M synapses, 28 brain areas.
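A rough sketch, not Edelman's actual implementation, of the kind of value-modulated (reward-gated) Hebbian update such a simulated nervous system can use; all sizes, names and stimuli here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensory, n_motor = 20, 4
W = rng.normal(0, 0.1, size=(n_motor, n_sensory))    # sensory -> motor connection strengths

def step(stimulus, value, lr=0.05):
    """One behavioral step: respond, then let the value system gate plasticity."""
    motor = np.tanh(W @ stimulus)                     # motor-area activation
    W[:] += lr * value * np.outer(motor, stimulus)    # Hebbian change gated by the value signal
    return motor

# An appetitive stimulus is rewarded (value +1), an aversive one punished (value -1);
# after some experience the responses to the two stimuli diverge.
tasty, bitter = rng.random(n_sensory), rng.random(n_sensory)
for _ in range(50):
    step(tasty, value=+1.0)
    step(bitter, value=-1.0)
print("response to tasty :", np.round(step(tasty, value=0.0), 2))
print("response to bitter:", np.round(step(bitter, value=0.0), 2))
```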
Where does it go?
Does this process converge to the real thing or to a smart calculator?
Is simulated thinking equivalent to real thinking, or is it
like rain in weather simulations?
Will the future AIBO have a dog-like mind,
and a future Kismet be like David from the movie A.I.?
Preposterous? Then what is missing?
Alan Turing: consciousness is an ill-defined concept;
just pass the conversation test and “you” are really thinking.
But is this “you” an intelligent person, conscious of its inner world,
or a zombie, a mindless system following its program?
Many philosophers of mind (Jackson, Nagel, Searle, Chalmers ... )
tried hard to show that human mind cannot be simulated.
Chinese room objection
Systems that pass the Turing test still do not understand meaning!
The man inside follows the rules but does not understand a word –
syntactic relations are not sufficient for semantics (J. Searle, 1980).
Called “arguably the 20th century's greatest philosophical polarizer”,
this thought experiment has led to hundreds of articles and books!
Solution to the Chinese room
This is a trap! Once you treat it seriously it is hard to get out.
• It is not a test – the outcome is always negative!
If I went inside your head I would not understand anything either.
• Conditions under which human observer could recognize that a
system understands should be discussed – a “resonance” of minds.
• A feeling “I understand” is confused here with real operational
understanding. Some drugs or mental practices induce the illusion of
understanding everything; sometimes we have no feeling of
understanding, but can answer correctly and in fact do understand.
• Searle concludes (wrongly): we know that humans understand,
therefore their neurons must have some mysterious causal powers
that computer elements do not have.
• Correct conclusion: the Turing test is still important; the Chinese room argument fails.
Hard problem of consciousness
Old mind-body problem in new disguise, presented in the
Journal of Consciousness Studies in 1995, and in a book
D.J. Chalmers, The Conscious Mind: In Search of a
Fundamental Theory, Oxford University Press 1996 (it got over 50 reviews!)
• Easy problems: directing attention, recognizing, commenting, etc.
• Hard problem of consciousness: qualitative character of phenomenal
experience (PE), or qualia – why are we not zombies?
Theoretically all information processing could go on without any
experience – sweetness of chocolate, or redness of sunset.
Qualia = |Conscious perception – Information processing|
Inner experience cannot be explained in words, robots can work without it.
How to program something that does not make a difference?
Hard problem solution
A lot of nonsense has been written on qualia.
Some solutions: there is no problem; we will
never solve it; information processing has dual
aspects, physical and phenomenal;
panpsychism; protophenomena; quantum consciousness ...
• 10 years of discussions led nowhere.
A fruitful way proposed by Thomas Reid (1785), and Indian philosophers
2000 years before him, distinguishes clearly between sensation (feeling)
and perception (judgment, discrimination).
'I feel pain' makes the impression that some 'I' has an object, 'pain'.
Reification of the process into an object creates a mystery.
It is just 'pain', sensation, a process, activity, system response.
Red color has a particular feeling to it: sure!
It corresponds to real, specific brain states/processes that differ from
brain states associated with other perceptions.
But why do qualia exist?
Imagine a rat smelling food.
In a fraction of a second the rat has to decide: eat or spit it out?
• Smell and taste a bit.
• A request for comments is sent to memory from the gustatory cortex.
• Memory is distributed: the whole brain has to be searched for associations.
• The request appears as a working memory (WM) pattern at the global brain dynamics level.
• WM is small, just a few patterns fit in (about 7 in humans).
• Resonant states are formed, activating relevant memory traces.
• The answer appears: bad associations! probably poison! spit!
• A strong physiological reaction starts – perception serves action.
• The WM episodic state is stored for future reference in LTM.
• The rat has different "feelings" for different tastes.
• If the rat could comment on such an episode, what would it say?
Results of this non-symbolic, continuous taste discrimination have to
be remembered and associated with some reactions: qualia!
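A deliberately cartoonish sketch of this sequence (the smells, values and memory contents are all invented) just makes the flow explicit: a cue retrieves associations from long-term memory, a small working memory holds the resulting pattern, a decision is taken, and the episode is stored for future reference.

```python
from collections import deque

# Toy long-term memory: smell -> stored associations (values are invented).
LTM = {
    "bitter-almond": {"valence": -0.9, "note": "past sickness"},
    "grain":         {"valence": +0.7, "note": "safe food"},
}
WM = deque(maxlen=7)                       # working memory holds only a few patterns

def taste_episode(smell):
    assoc = LTM.get(smell, {"valence": 0.0, "note": "unknown"})  # search LTM for associations
    WM.append((smell, assoc))                                    # resonant pattern enters WM
    decision = "spit" if assoc["valence"] < 0 else "eat"
    LTM["episode:" + smell] = {"valence": assoc["valence"], "note": decision}  # store episode
    return decision

print(taste_episode("bitter-almond"))   # -> spit
print(taste_episode("grain"))           # -> eat
```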
More on qualia
Long Term Memory (LTM) is huge, stored by 100T synapses.
Working Memory (WM) is probably based on dynamical brain
states (actualization of LTM potential possibilities).
• Adaptive resonant states: the up-going (sensory=>conceptual) and
the down-going (conceptual=>sensory) streams of information
self-organize to form reverberations, transitory brain/mind states.
• Resonant states are “dressed”: they contain associations, memories,
motor or action components, in one dynamical flow – this is quite
different from abstract states of the Turing machine registers.
What happens to the taste of a large ice-cream?
The taste buds provide all the information; the brain processes
it, but the qualia are gone after a short time.
Why? WM is filled with other objects, no resonances with
gustatory cortex are formed, no reference to taste memories.
Brain-like computing
Brain states are physical, spatio-temporal states of neural tissue.
• I can see, hear and feel only my brain states!
• Cognitive processes operate on highly processed sensory data.
• Redness, sweetness, itching, pain ... are all physical states of brain
tissue.
In contrast to computer registers,
brain states are dynamical, and
thus contain in themselves many
associations, relations.
Inner world is real! Mind is based
on relations of brain’s states.
Computers and robots do not
have an equivalent of such WM.
Automatization of actions
Learning: what initially requires conscious involvement (large
brain areas active) in the end becomes automatic,
subconscious, intuitive (well-localized activity).
Formation of new resonant states - attractors in
brain dynamics during learning => neural models.
Reinforcement learning requires observing and evaluating how
successful the actions that the brain has planned and is executing turn out to be.
Relating current performance to memorized episodes of performance
requires evaluation + comparison (Gray – subiculum), followed by
emotional reactions that provide reinforcement via dopamine release,
facilitating rapid learning of specialized neural modules.
Working memory is essential to perform such a complex task.
Errors are painfully conscious, and should be remembered.
Conscious experiences provide reinforcement; there is no transfer
from conscious to subconscious.
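A minimal sketch of the reward-prediction-error signal that such dopamine-based reinforcement is usually modelled with (temporal-difference / Q-learning); the learning rates and the interface below are illustrative, not taken from the slides.

```python
import random

values = {}                      # estimated value of each (state, action) pair
alpha, gamma, eps = 0.2, 0.9, 0.1

def choose(state, actions):
    """Mostly exploit the best known action, sometimes explore."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: values.get((state, a), 0.0))

def update(state, action, reward, next_state, next_actions):
    """The prediction error delta plays the role of the dopamine signal."""
    best_next = max((values.get((next_state, a), 0.0) for a in next_actions), default=0.0)
    delta = reward + gamma * best_next - values.get((state, action), 0.0)
    values[(state, action)] = values.get((state, action), 0.0) + alpha * delta
    return delta      # large |delta|: surprising outcome, strongly reinforced (and noticed)
```

Errors show up as large prediction errors, which is one way to read the remark that errors are painfully conscious and should be remembered.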
Why do we feel the way we do?
Qualia must exist in brain-like computing systems:
• Qualia depend on cognitive mechanisms; habituation, intensive concentration or attention may remove qualia.
• Qualia require correct interpretation, e.g. segmentation of visual stimuli from the background; no interpretation = no qualia.
• Secondary sensory cortex is responsible for interpretation; lesions will lead to changes in qualia (asymbolia).
• Visual qualia: clear separation between higher visual areas (concepts, object recognition) and lower visual areas; activity of the lower areas only should lead to qualia (e.g. freezing V4 – no color qualia).
• Memory is involved in cognitive interpretation: qualia are altered by drugs modifying memory access.
• Cognitive training enhances all sensory qualia; memorization of new sounds/tastes/visual objects changes our qualia.
• How does it feel to tie shoe laces? Episodic memory (resonant states) leads to qualia; procedural memory (maps) – no qualia.
• Phenomenology of pain: no pain without cognitive interpretation.
• Wrong interpretation of brain states – unilateral neglect, body dysmorphia, phantom limbs controlled by visual stimulation with mirrors.
• Blindsight, synesthesia, absorption states ... many others.
Requirements for qualia
Systems capable of evaluating their WM states must claim to have
phenomenal experiences and be conscious of these experiences!
Minimal conditions for an artilect to claim qualia and be conscious:
• Working Memory (WM), a recurrent dynamic model of the current global system (brain) state, containing enough information to re-instate the dynamical states of all the subsystems.
• Permanent memory for storing pointers that re-instate WM states.
• Ability to discriminate between continuously changing states of WM; "discrimination" implies association with different types of responses or subsequent states.
• Mechanism for activation of associations stored in permanent memory and for updating WM states.
• Ability to act on, or report, the actual state of WM.
• Representation of 'the self', categorizing the value of different states from the point of view of the goals of the system, which are implemented as drives, giving a general orientation to the system.
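A schematic sketch of how these conditions map onto an agent (every class, method and value below is invented for illustration): a bounded working memory over the global state, a permanent store of pointers that can re-instate WM states, discrimination of WM states tied to responses, and a self-model that values states relative to its drives.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Artilect:
    drives: dict                                   # goals implemented as drives, e.g. {"sweet": 1.0}
    wm: deque = field(default_factory=lambda: deque(maxlen=7))   # working memory: a few global states
    ltm: dict = field(default_factory=dict)        # permanent memory: pointer -> stored WM state

    def perceive(self, global_state):
        self.wm.append(global_state)               # current global system state enters WM

    def discriminate(self):
        """Map the current WM state onto a response class (here: by its dominant feature)."""
        state = self.wm[-1]
        return max(state, key=state.get)

    def evaluate(self):
        """Value of the current state from the point of view of the drives (the 'self')."""
        state = self.wm[-1]
        return sum(self.drives.get(k, 0.0) * v for k, v in state.items())

    def store(self, pointer):
        self.ltm[pointer] = dict(self.wm[-1])      # pointer that can later re-instate this WM state

    def recall(self, pointer):
        self.wm.append(self.ltm[pointer])          # re-instate a past WM state

    def report(self):
        return f"state={self.wm[-1]}, class={self.discriminate()}, value={self.evaluate():.2f}"

agent = Artilect(drives={"sweet": 1.0, "bitter": -1.0})
agent.perceive({"sweet": 0.9, "bitter": 0.1})
agent.store("chocolate")
print(agent.report())
```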
Artificial Minds
AM: software and robotic agents that humans can talk to
& relate to in a similar way as they relate to other humans.
AMs need: some perception, inner representation of the world, language
abilities, situated cognition, behavioral control.
Cognitive architecture: the highest-level controller, responsible for executive
functions (corresponding to frontal lobes in the brain).
Functions: recognition (patterns, situations, events) and
categorization, different types of memory, use of associative recall,
decision making, conflict resolution, improve with experience (learn),
select information (pay attention to relevant inputs), anticipate, predict
and monitor events, plan and solve problems, reason and maintain beliefs,
search for additional knowledge and communicate with specialized agents
that may find it ...
Only a few such large-scale architectures exist.
Engineering vision
1. What would be the biggest engineering achievement?
To see results of your research used by almost everyone on Earth.
2. What is it that everyone is using?
Mobile phones and remote controls for TVs and other devices, getting
smarter but also more complex and difficult to use every year.
3. What is the stumbling block in developing the new generation of even
more useful smart phones that can do all we ask for?
Their complexity, the difficulty of using all their functions fully, the need for
tedious programming – in short, human-machine communication.
4. What is the solution?
Humanized InTerfaces (HITs) that would communicate with us in a
natural way, ask a minimum number of questions when commands are
ambiguous, and do what we ask for: control household devices, help us
remember, communicate, access information and services, advise,
educate, play word games ...
Cognitive informatics again
• Creation of such HITs with Artificial Mind interfaces is the greatest challenge for computer engineering.
• All other layers of software have become more or less standardized, from the BIOS hardware level to the graphics API for Windows user interfaces.
• Creation of an extensible platform for natural perception, language processing and behavioral modeling is the single most important subject left in computer engineering.
• This requires a concentrated effort of many people in an area that is best called "cognitive informatics": understanding how humans perceive, create their inner world, communicate and act, and creating artifacts that behave as "artificial minds", understanding and interacting with us in a similar way as people do.
• Cognitive architectures created so far are a good beginning, but they are not sufficiently flexible to model many tasks.
HIT related areas
[Diagram: HIT projects draw on speech recognition, T-T-S synthesis, talking heads, affective computing, learning, brain models, behavioral models, cognitive architectures, AI, robotics, graphics, lingu-bots, artificial minds (A-Minds), VR avatars, info-retrieval, cognitive science, knowledge modeling, and semantic, episodic and working memory.]
Brains and understanding
General idea: as the text is read and analyzed, activation spreads through the
semantic subnetwork; new words automatically assume meanings that
increase the overall activation, or the consistency of interpretation.
Many variants exist; all depend on the quality of the semantic network, and some include
explicit competition among network nodes.
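A minimal sketch of this spreading-activation idea over a toy semantic network (the nodes, links and weights are invented): words read in the text seed activation, which spreads along weighted links, and the interpretation that accumulates most activation wins.

```python
# Toy semantic network: node -> {neighbour: link weight}
network = {
    "cold":        {"temperature": 0.8, "illness": 0.7},
    "illness":     {"virus": 0.9, "doctor": 0.6},
    "virus":       {"illness": 0.9, "computer": 0.5},
    "doctor":      {"illness": 0.6},
    "temperature": {"cold": 0.8, "weather": 0.7},
    "weather":     {"temperature": 0.7},
    "computer":    {"virus": 0.5},
}

def spread(seeds, steps=3, decay=0.5):
    activation = {node: 0.0 for node in network}
    for s in seeds:
        activation[s] = 1.0
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for nbr, w in network[node].items():
                new[nbr] += decay * w * act        # pass a decayed share of activation to neighbours
        activation = new
    return activation

# Words read in a text activate their nodes; related senses gain activation.
act = spread(["cold", "doctor"])
print(sorted(act.items(), key=lambda kv: -kv[1])[:4])   # the "illness" reading beats "weather"
```

In the medical-text systems mentioned here the network would come from ontologies and corpora rather than a hand-written dictionary.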
1. How to approximate this process in computer models?
2. How to use it for medical text understanding, correlate information from
texts and genomic research?
3. How to build a practical system?
4. How to improve the training of MDs and understand their learning processes?
Work at CCHMC, with John Pestian and Pawel Matykiewicz.
Word games
Word games were popular before computer games took over.
Word games are essential to the development of analytical thinking skills.
Until recently computer technology was not sufficient to play such games.
The 20 question game may be the next great challenge for AI, much
easier for computers than the Turing test; a World Championship with
human and software players?
Finding most informative questions requires understanding of the world.
Performance of various models of semantic memory and episodic
memory may be tested in this game in a realistic, difficult application.
Asking questions to understand precisely what the user has in mind is
critical for search engines and many other applications.
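A minimal sketch (with an invented, four-concept "semantic memory") of how the most informative yes/no question can be chosen: ask about the feature whose answer splits the remaining candidates as close to half-and-half as possible, i.e. maximizes the entropy of the split.

```python
import math

# Toy candidate concepts and their features (invented for illustration).
concepts = {
    "salamander": {"animal", "amphibian", "has_spots"},
    "frog":       {"animal", "amphibian", "jumps"},
    "sparrow":    {"animal", "bird", "flies"},
    "oak":        {"plant", "tree"},
}

def entropy(n_yes, n_no):
    out = 0.0
    for n in (n_yes, n_no):
        if n:
            p = n / (n_yes + n_no)
            out -= p * math.log2(p)
    return out

def best_question(candidates):
    features = set().union(*candidates.values())
    def gain(f):
        yes = sum(f in feats for feats in candidates.values())
        return entropy(yes, len(candidates) - yes)
    return max(features, key=gain)

print(best_question(concepts))   # "amphibian": splits the four candidates 2/2
```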
Creating large-scale semantic memory is a great challenge:
ontologies, dictionaries (WordNet), encyclopedias,
collaborative projects (ConceptNet) …
Words in the brain
The cell assembly model of language has strong experimental support;
F. Pulvermuller (2003) The Neuroscience of Language. On Brain Circuits of
Words and Serial Order. Cambridge University Press.
Acoustic signal => phonemes => words => semantic concepts.
Semantic activations are seen about 90 ms after phonological activations, in N200 ERPs.
Perception/action networks: results from ERP & fMRI.
Phonological density of words = # words that sound similar to a given word,
that is create similar activations in phonological areas.
Semantic density of words = # words that have similar meaning, or similar
extended activation network.
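A crude computational proxy for these densities (not the neural definition above): counting a word's close neighbours in a word list, here using string similarity on spellings as a stand-in for "creates similar activations in phonological areas". The word list and threshold are arbitrary.

```python
import difflib

lexicon = ["cat", "hat", "bat", "rat", "car", "cast", "dog", "fog", "log"]

def neighbours(word, vocab, cutoff=0.6):
    """Words whose written form is close to `word` (a rough stand-in for 'sounds similar')."""
    return [w for w in vocab if w != word
            and difflib.SequenceMatcher(None, word, w).ratio() >= cutoff]

for w in ("cat", "dog", "cast"):
    print(w, "phonological-density proxy:", len(neighbours(w, lexicon)))
```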
Semantic memory
[Diagram: a parser with a part-of-speech tagger and phrase extractor, together with on-line dictionaries and manual verification, fills the semantic memory store; queries from applications (e.g. the 20 questions game, humanized interfaces) are answered from the store.]
DREAM architecture
[Diagram: DREAM architecture – natural input modules and a web/text/databases interface feed NLP functions; cognitive and affective functions plus behavior control drive text-to-speech, a talking head, control of devices, and specialized agents.]
DREAM is concentrated on the cognitive functions + real time control, we plan to
adopt software from the HIT project for perception, NLP, and other functions.
Puzzle generator
Semantic memory may be used to invent a large number of word puzzles that the
avatar presents.
The application selects a random concept from all concepts in the
memory and searches for a minimal set of features necessary to uniquely
define it. If many subsets are sufficient for unique definition one of them is
selected randomly.
It is an amphibian, it is orange and has black spots.
What do you call this animal?
A salamander.
It has charm, it has spin, and it has charge.
What is it?
If you do not know, ask Google!
The quark page comes up at the top …
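A minimal sketch of the selection step described above, with an invented four-concept knowledge base: pick a concept, then search for a smallest subset of its features that no other concept shares completely; if several such subsets exist, one is chosen at random.

```python
import random
from itertools import combinations

knowledge = {                       # toy semantic memory: concept -> features
    "salamander": {"amphibian", "orange", "black_spots"},
    "frog":       {"amphibian", "green", "jumps"},
    "tiger":      {"mammal", "orange", "black_stripes"},
    "leopard":    {"mammal", "yellow", "black_spots"},
}

def minimal_definition(concept):
    feats = knowledge[concept]
    others = [f for c, f in knowledge.items() if c != concept]
    for size in range(1, len(feats) + 1):
        subsets = [set(s) for s in combinations(sorted(feats), size)
                   if not any(set(s) <= other for other in others)]
        if subsets:
            return random.choice(subsets)      # several minimal subsets: pick one at random
    return feats

concept = random.choice(list(knowledge))
print(concept, "->", minimal_definition(concept))
# e.g. salamander -> {"amphibian", "black_spots"}: "It is an amphibian and has black spots."
```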
Creating new words
A real letter from a friend:
I am looking for a word that would capture the following qualities: portal to
new worlds of imagination and creativity, a place where visitors embark on
a journey discovering their inner selves, awakening the Peter Pan within.
A place where we can travel through time and space (from the origin to the
future and back), so, its about time, about space, infinite possibilities.
FAST!!! I need it sooooooooooooooooooooooon.
creativital, creatival (creativity, portal), used in creatival.com
creativery (creativity, discovery), creativery.com (strategy+creativity)
discoverity = {disc, disco, discover, verity} (discovery, creativity, verity)
digventure ={dig, digital, venture, adventure} still new!
imativity (imagination, creativity); infinitime (infinitive, time)
infinition (infinitive, imagination), already a company name
journativity (journey, creativity)
learnativity (taken, see http://www.learnativity.com)
portravel (portal, travel); sportal (space, sport, portal), taken
timagination (time, imagination); timativity (time, creativity)
tivery (time, discovery); trime (travel, time)
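A toy sketch of one way to generate such blends automatically: glue a prefix of the first word to the second word wherever a chunk of the first word matches the beginning of the second. The rule and the word pairs below are chosen only to reproduce two of the examples above.

```python
def blends(w1, w2, min_overlap=1):
    """Fuse w1 and w2 at a point where a chunk inside w1 matches the start of w2."""
    out = set()
    for i in range(2, len(w1)):                 # keep at least two letters of w1
        for k in range(min_overlap, i + 1):
            if w1[i - k:i] == w2[:k]:
                out.add(w1[:i] + w2[k:])
    return sorted(out)

for a, b in [("time", "imagination"), ("portal", "travel")]:
    print(a, "+", b, "->", blends(a, b))
# time + imagination -> ['timagination'] ; portal + travel -> ['portravel']
```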
Affect-based Cognitive Skill Instruction in
an Intelligent Tutoring System
• Intelligent Tutoring Systems (ITS)
– Integrating characteristics of human tutoring into ITS
performance.
– Providing the student with a more personalized and friendly environment
for learning according to his/her needs and progress.
– A platform to extend the emotional modeling to real life experiments with
affect-driven instruction.
– Will provide a reference for the use of affect in intelligent tutoring
systems.
Towards conscious robots
Do we want to have conscious robots? Perhaps yes.
Few explicit attempts to build them so far.
Stan Franklin, "Conscious" Software Research Group, Institute of
Intelligent Systems, University of Memphis, CMattie project: an attempt to
design and implement an intelligent agent under the framework of Bernard
Baars' Global Workspace Theory.
Owen Holland, University of Essex: consciousness via increasingly
intelligent behavior, robots with internal models, development of complex
control systems, looking for “signs of consciousness”, 0.5 M£ grant.
Pentti Haikonen (Nokia, Helsinki), The cognitive approach to conscious
machines (Imprint Academic 2003). Simulations + microchips coming.
WCCI Panel: Roadmap to human level intelligence, 17 July 2006
IDoCare: Infant Development and Care
for development of perfect babies!
W. Duch, D.L. Maskell, M.B. Pasquier, B. Schmidt, A. Wahab
School of Computer Engineering, Nanyang Technological University
Problem: about 5-10% of all children have a developmental disability
that causes problems in their speech and language development.
Identification of congenital hearing loss in the USA happens at 2½ years of age!
Solution: permanent monitoring of babies in the crib, stimulation,
recording and analysis of their responses, providing guideline for their
perceptual and cognitive development, calling an expert help if needed.
Key sensors: suction response (basic method in developmental
psychology), motion detectors, auditory and visual monitoring.
Potential: the market for baby monitors (Sony, BT...) is worth billions of $; so far
they only let parents hear or see the baby and play ambient music.
IDoCare intelligent crib
Revolutionary enhancement of baby monitors: intelligent crib with
wireless suction, motion detector and audio/visual monitoring, plus
software for early diagnostics of developmental problems.
Hardware: embedding pressure and temperature sensors in telemetric
pacifier, for monitoring and feedback of baby's reactions to stimuli.
Software: signal analysis and blind source separation; interpretation of
baby’s responses, selection of stimuli and comments for parents.
Home applications: monitoring, diagnostics, preventive actions enhancing
perceptual discrimination by giving rewards for solving perceptual problems.
[Diagram: IDoCare system – a telemetric pacifier with an A/D converter communicates wirelessly with a receiver; a control unit with RAM and non-volatile memory draws on databases of speech sounds and sound sequences ("la-la … la-ra-ra…") and of reward patterns, and drives a D/A converter, a speaker and an audiovisual reward device.]
Children love to be stimulated, and IDoCare will be the first active
environment that allows them to influence what they see and hear.
Active learning may gently pressure baby’s brain to develop perceptual
and cognitive skills to their full potential achieved now by very few.
Conclusions
Robots and avatars will make a steady progress towards realistic
human-like behavior – think about progress in computer graphics.
• Artificial minds of brain-like systems will claim qualia;
they will be as real in artificial systems as they are in our brains.
• There are no good arguments against convergence of the neural
modeling process to conscious artifacts.
• Achieving human-level competence in perception,
language and problem-solving may take longer than
creation of basic consciousness.
Creation of conscious artilects will open Pandora’s box
What should be their status?
Will it degrade our own dignity?
Is switching off a conscious robot a form of killing?
...
Will they ever turn against us ...
Thank you for lending your ears ...
Google: Duch => Papers
Download