
Movements of the Mind: A Theory of Attention, Intention and Action, 1st Edition, by Wayne Wu

Movements of the Mind
A Theory of Attention, Intention
and Action
WAYNE WU
Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries
The moral rights of the author have been asserted
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by licence or under terms agreed with the appropriate reprographics
rights organization. Enquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above
You must not circulate this work in any other form
and you must impose this same condition on any acquirer
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2023930905
ISBN 978–0–19–286689–9
DOI: 10.1093/oso/9780192866899.001.0001
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY
Links to third party websites are provided by Oxford in good faith and
for information only. Oxford disclaims any responsibility for the materials
contained in any third party website referenced in this work.
Contents

Introduction
Claims by Section

PART I. THE STRUCTURE OF ACTION AND ATTENTION

1. The Structure of Acting
   Appendix 1.1
2. Attention and Attending

PART II. INTENTION AS PRACTICAL MEMORY AND REMEMBERING

3. Intention as Practical Memory
4. Intending as Practical Remembering

PART III. MOVEMENTS OF THE MIND AS DEPLOYMENTS OF ATTENTION

5. Automatic Bias, Experts and Amateurs
6. Deducing, Skill and Knowledge
7. Introspecting Perceptual Experience

Epilogue
Bibliography
Index
Introduction
This work would not be possible without my wife, Alison Barth, so at its beginning, I want to thank her. In our youth, we had a “public” debate on a train from
Oxford to London, neurobiologist versus philosopher. As we pulled into our stop,
an older English gentleman sitting across from us leaned over and said, “I agree
with her.” That sums up a lot.
There was a difficult period after I left science, when I was lost and unsure of what to do.
Alison patiently weathered the storm with me. Since then, we have shared the ups
and downs of a rich, wonderful life together, raising two daughters who remind
me every time I am with them how their strength, intelligence, and beauty reflect
their mother’s.
So, Alison, thank you for your companionship and your love. This book is
inadequate to all that, but it is the best I can produce. I dedicate it to you with all
my love.
0.1 A Biologist’s Perspective
The title, Movements of the Mind, plays on the default conception in science and
philosophy of action as bodily movement. On that view, there are no mental
actions. This leaves out much. Pointedly, I focus in what follows on mental
movements such as attending, remembering, reasoning, introspecting, and thinking. There are general features of agency seen more sharply by avoiding the
complexities of motor control. Focusing on mental actions facilitates explanation.
That said, my arguments apply to movements in their basic form, that presupposed in discussions of free, moral, and skilled action. To understand these, we
must understand basic agency, an agent’s doing things, intentionally or not, with
the body or not.
I aspire to a biology of agency writ large, one in which philosophy plays a part. Such a
broad theory aims to integrate different levels of analysis: a priori argument,
computational theories of mental processes, psychophysics, imaging and electrophysiology of neurons, and, though not here, the genetic and molecular. To
systematically understand agency as it is lived we must understand it from
multiple levels. The link to biology is necessitated when philosophical theories
posit psychological processes and causally potent capacities that in us are
organically realized. Such theories are enjoined to impose empirical friction, to show
that claims generated from the armchair about the living world make contact with
the actual world as we live it.

Movements of the Mind: A Theory of Attention, Intention and Action. Wayne Wu, Oxford University Press. © Wayne Wu 2023. DOI: 10.1093/oso/9780192866899.003.0001
Philosophical psychology is replete with causal claims about subjects and
their minds derived from thought experiments, dissected by introspection, or
informed by folk psychology and ordinary language. Yet rigorous inquiry into
specific causal claims is the province of empirical science. Philosophical
psychology should not theorize about what happens mentally in the complete
absence of empirical engagement. The requirement is not that philosophers
should do experiments. Rather, where philosophical inquiry postulates causal
features of mind, we philosophers should delve into what is known about
relevant biology. Well, I have felt obligated to do so. I see engaging in empirical
work as a way of keeping my own philosophical reflections honest to the way
the world is as we empirically understand it. This is not to say that the
engagement is only in one direction. Ultimately, science and philosophy of
mind should work together as part of biology, for they share a goal: to
understand the world.
I hope to provide a detailed outline of what agency as a biological phenomenon
is. This is a deeply personal project. I began academic life as an experimental
biologist. In college, I gravitated to organic chemistry, which describes the movement of molecules that join and alter in principled ways to form other molecules.
A course on genetics introduced me to DNA and the central dogma, a chemical
transition: DNA to RNA to proteins. Biology conjoined with chemistry promised
to explain life through principles of organic interaction and recombination.
Inspired, I took every biology and chemistry course I could.
Graduate school followed. A professor waxed nostalgic about the old days when he
and his colleagues would argue about the mechanisms of life over coffee.
Occasionally, someone would leave to start a centrifuge for an experiment, then
return to continue debating. That sounded like the good life, but the reality of
biological research felt different. Centrifuging samples once illuminated important
principles (see Meselson and Stahl), but for me, it was one more tedious part of life
at the lab bench. It was theory that grabbed me, not bench work. After two
unhappy years of experimental tinkering, I dropped out. It was a devastating
loss of an identity I had cultivated. Skipping over the lost years, I simply note that
I found my way to philosophy.
So here we are. This book reflects the distant aspirations of my younger self
though I have inverted my prior explanatory orientation. While my younger self
believed in working from the bottom up, from molecules to life, this book works
initially from the top down, from philosophical reflection on an agent’s doings
toward the activity of neurons. Still, the same goal remains: the systematic illumination of living things. Accordingly, readers will find many empirical details in
what follows. They are essential. How else can we achieve a systematic understanding of lived agency? Indeed, ignoring empirical work closes off opportunities
for new insights as I hope to show by intersecting working memory with intention
(Part II, Chapters 3 and 4). I have focused on research at the center of current
empirical work and have worked hard to make the details clear and accessible.
Please don’t skip them.
That said, the empirical work should not obscure the fact that the central
level of philosophical analysis concerns an agent who is a subject who perceives,
thinks, is consciously aware, loves and hates, is bored or engaged, aims for good
and ill. Most of the empirical work I draw on focuses on the algorithmic,
psychological, and neural processes that constitute subject-level phenomena and so
lie at levels below subject-level descriptions. The difficult challenge facing
cognitive science is how to bridge these “lower” levels of analyses with the
subject level we care about in deciding how to live. We should not kid ourselves
that these bridges are simple to construct. The overreliance on folk-psychological
vocabulary, and the corresponding lack of a technical vocabulary
(here’s looking at you, attention), makes building such bridges seem deceptively
simple. Yet agency as a subject’s doing things is not explained just because
cognitive science sometimes uses subject-level vocabulary in describing basic
processes (consider the concept of decision making).
Scientists, who I hope will read this book, might respond to Part I by noting
there are already detailed theories about action and attention in the empirical
literature. Yet despite the subject-level terms that literature deploys, the related
empirical studies I adduce explicate the mechanisms underlying the subject
attending and acting. The challenge that remains is to deploy empirical accounts
of the brain’s doing things to inform understanding an agent’s doing things
intentionally or not, skillfully or not, reasonably or not, angrily or not, automatically or not, freely or not, and so on. To do so requires that we properly
characterize the ultimate target of explanation: an agent’s acting in the basic
sense. This book aims to explicate that subject-level phenomenon, and if successful, to sharply delineate a shared explanandum for cognitive science and philosophy. A model for the bridging project I have in mind is David Marr’s (1982)
emphasis on a computational theory which provides a unifying principle to link
other empirical levels of analysis. The analysis of action, attention, and intention
that I aim for is of that ilk. I hope scientists and philosophers will, through this
book, find common ground.
0.2 Central Themes
I argue for four central themes about action. The first is this:
Action has a specific psychological structure that results from solving a Selection
Problem.
Parts I and II delineate and detail this structure while Part III shows how the
structure illuminates three philosophically significant forms of mental agency:
mental bias, reasoning, and introspection. These topics are often investigated
without drawing substantively on philosophy of action, yet drawing on the right
theory of action advances understanding. Specifically, the structure of action
unifies the three phenomena as forms of attention.
The structure of action allows us to provide an analysis of automaticity and
control motivated by solving a paradox (Section 1.4). These notions are crucial
because
intentional action is characterized by automaticity and control.
Control is at the heart of intentional agency, but automaticity is a feature of all
action, indeed necessarily so (Section 1.4). Crucially, we must not infer the
absence of agency from the presence of automaticity (cf. talk of reflexes as a
contrast to action; Section 1.2). This fallacy results from overly casual, nontechnical use of these notions in the philosophical literature. To understand
agency, we must use the notions of control and automaticity technically, notions
crucial for understanding learning, skill, and bias (Chapters 5 and 6). Here is a
challenge to my friends and colleagues: If clear technical notions of central
theoretical concepts are given, why not use them? Why persist in drawing on
mere folk-psychological conceptions in a philosophical psychology that aims to be
serious psychology?
Philosophers have doubted that we have got action right. Philosophical discussion has focused on a causal theory of action. Yet the persistent problem of deviant
causal chains shows that we have not adequately explained agency in causal terms.
I draw a specific lesson from the failure: crucial parts of the psychological picture
are missing from the causal theory. Specifically,
you can’t get action right if you leave out an essential component: attention.
Action theorists have largely ignored attention (check the index of any book on
action). Sometimes they mention it, but it cannot be merely mentioned. That
yields an incomplete psychology of action. You can’t act in respect of X if you are
not attending to it. Attention guides. Lack of attention promotes failed action. So,
ignoring an agent’s attention is akin to ignoring her belief or desire in the
traditional causal theory. If one fails to discuss central psychological components
of action, one will fail to explain action. Attention illuminates action. It is not a
mere appendage in action but an essential part.
Finally, even for those aspects of agency that we have discussed since the
beginning, the engagement with biology opens up new avenues for illumination. Drawing on the biology shifts our thinking about intention in the
following way:
Intention is a type of active memory: practical memory for action.
This is, perhaps, the most substantial shift in the theory of agency that I argue
for in this book (Part II, Chapters 3 and 4). It is motivated by the biology,
specifically by research on working memory, along with a philosophical argument that the coherence and intelligibility of intentional action from the agent’s
perspective depends on memory in intention (Section 4.2). Intention reflects an
agent’s activeness that regulates the agent’s solving what I call the Selection
Problem, the need to settle on a course of action among many. In action,
intention constitutes the agent’s remembering (thinking about) what to do as
she does it, and in such remembering, the agent’s intending dynamically biases
the exercise of her action capabilities as she acts. Indeed, her thinking about
what she is doing in her intending to act keeps time with her action through
continued practical reasoning. It provides her a distinctive access to her intentional doings.
0.3 The Book’s Parts
The book is divided into three parts. Part I establishes the structure of action,
explicating its components with emphasis on attention and its interaction with a
bias. Here’s a mantra:
An agent’s action is her responding in light of how she is taking things, given her
biases.
Taking things, a term of art, picks out myriad mental phenomena that serve
as action inputs such as her perceptually taking things to be a certain way.
Accordingly, an action’s geometry is characterized by three aspects: (1) the agent’s
taking things such as her perception or memory of things, (2) the agent’s responding such as her body’s moving or her mental response in applying a concept or
encoding a memory, and (3) a bias, a factor that explains the specific coupling
which is the causal link between (1) and (2). A bias is the psychological factor that
explains the expression of action. This yields the basic structure:
Figure 0.1 The structure depicts an agent’s acting, each node standing in for a feature
of the agent qua subject, say a state, event, process, capacity, etc. Each solid arrow
indicates an actual causal link between nodes. Such depictions of action structure will
be used throughout the book. In acting in the world, the agent is responding (the
output), guided by how she takes things (the input) given her being biased in a certain
way. Action’s structure is given in the tripartite form, each node a constituent of action.
Here, the agent responds to a stimulus S1. The input’s guiding response is a process
that takes time, so the structure depicts a dynamic phenomenon.
Note that the structure is a blunt way of representing a complicated, dynamic
phenomenon characteristic of a subject as agent. It is not a depiction of parts
within the subject. Rather, it is a structural description that isolates different
aspects of the agent’s being active, each analytically pulled out from an amalgam
of her exercised capacities that normally blend into her acting. It sketches,
coarsely, a dynamical integration of the subject’s different perspectives, say in
her intending and perceiving, and her exercised abilities to respond.
When the agent acts intentionally—in this book that means acting with an
intention—the structure involves intention as a specific type of bias:
Figure 0.2 As in Figure 0.1 but here the agent’s responding to S1 is guided by how she
takes S1 given her intending to act in a certain way.
The amalgam of the three nodes is the agent’s intentionally acting. Crucially,
intention and the input are in action, not numerically distinct from it. Other
actions, intentional actions without intentions as when one is driven by emotion
or needs, slips and flops as well as habitual, skilled, expert, incompetent, moral,
immoral, passive, and pathological agency (among others) are explained through
and by building on this structure. In particular, the identity of the bias provides a
crucial differentiating node, individuating different types of action. Accordingly,
the geometry provides a unifying explanatory frame. This book shows how
applying it illuminates disparate agentive phenomena.
Part I summarizes, elaborates, and integrates many of my published articles,
with a greater emphasis on the notion of a bias as well as (hopefully) clearer
presentation of my views which (I believe) have remained mainly unchanged
in essentials (of course, I might be wrong). This first part identifies action as
the solution to a Selection Problem, a problem that must be solved in every
action. The Problem arises in light of an action space that identifies the different
actions available to an agent at a time and in a context. To act, an action among
possible actions must be selected. We can embed the geometry of intentional
action in the agent’s action space constituted by action possibilities, each
possibility constituted by an input linkable to an output, a possible causal
coupling (see Figure 0.3).
The structure depicted in Figure 0.3 explains the agent’s guidance and control in
intentional action. Control is explicated in terms of intention’s role in solving the
Selection Problem. The intention represents an action that is one of the paths in
the action space and brings about that action. Agentive control is constituted by
the agent’s intention biasing solutions to the Selection Problem, specifically
through biasing the input and output capacities to facilitate their coupling. The
concept of automaticity is precisely defined in contrast to control to resolve a
conceptual paradox in the theory of automaticity. The resolution sets down a
technical analysis of these crucial notions, crucial because intentional agency exemplifies a pervasive tug of war between automaticity and control (Section 1.4).
Guidance is explained as the function of the input state set by the agent’s bias.
The input state informs the output response and in doing so constitutes the agent’s
attention in action. While attention in intentional action is set by intention, I argue
that attention is a constituent of every action, intentional or not. Attention is always
biased (Chapter 5).
Part II explores intention as an active memory. It is a practical memory, the
agent’s remembering to act (Chapter 3). Intention’s mnemonic activity is partly
expressed in how it regulates attentional phenomena in light of the agent’s
conception of what is to be done. I argue that empirical work on working
memory probes the activeness of intention. As I will put it, where the agent is
imminently about to act or is acting, in intending to so act, the agent is being
active. While acting, the subject continues to actively remember, that is to think
Figure 0.3 Intention solves the Selection Problem, given an action space that
presents multiple possible actions. The intention solves the Problem by engendering
the action it represents, here the action Φ which is constituted by responding
(R1) to how the subject takes the stimulus S1. Solid arrows indicate actual causal
connections, dotted arrows identify possible ones. The downward solid arrows from
intention directed at both input and output nodes identify relations of biasing,
explicated in the text as cognitive integration (Section 1.7). Intentional action is an
amalgam of (1) the intention, (2) the input taking, and (3) the response guided by
the input. This is the triangular structure in darker lines, top portion of the figure.
Both input states (indicated as active by black circles) are activated by stimuli in the
world (S1 and S2), but only one guides a response. Response R2 is inactive (lighter gray
circle), but it could have been coupled to the subject’s input states. Downward gray
arrows indicate additional inputs and outputs.
about, what she is doing. Intending is the action of practical remembering,
exercised in keeping track of action (Chapter 4). This active remembering
involves sustained practical reasoning as the agent acts (Section 4.5) and is the
grounds of the agent’s distinctive and privileged non-observational access to
what she is intentionally doing (Section 4.6) and how she can keep time with her
action (Section 4.7).
Let me enter a special plea. The approach to intention will, I think, jar many
readers since it will clash with certain philosophical intuitions and frameworks, with how we ordinarily speak about intention, with folk psychology,
and perhaps with introspection. The plea is that missing from all this has been
a biological perspective that should be given at least equal weight, indeed
I think more. I hope to show that cognitive science has been doing detailed
investigation of intention though not always with use of that concept/term.
What that work reveals is the dynamics of an agent’s intending, and when the
work is bridged to philosophical concerns, there is remarkable cohesion and
illumination.
Having established action’s psychological structure, Part III draws on the
theory to investigate three specific movements of mind much discussed in the
philosophical literature: (a) implicit (better: automatic) bias in actions of ethical
and epistemic concern, (b) deductive reasoning, and (c) introspecting perceptual
experience. While my theory applies to any movement, I choose these three
because they identify central topics of philosophical investigation as well as salient
features in philosophical practice itself. Notably, each is a distinctive way of
attending. I urge readers to work through each of these chapters even if they do
not work on the topics covered. Many of the basic themes in Parts I and II are
further developed in Part III.
First, automatic biases reflect a complex diachronic and synchronic modulation of attention in agency. Experience and learning are common sources of bias
critical to understanding the many positive and negative biases of epistemic
and ethical concern. What drives much biased behavior is biased attention.
Bias often reflects a more or less skilled deployment of attention. This engages
normative assessment of attention: when the agent acts in a negatively biased
way, they are often attending amateurishly or, worse yet, viciously and incompetently. In isolating historical influences on attention, I provide a new way
of understanding the causal structure of automatic biases, including many
implicit biases, and this structure provides a map of precise targets for normative
assessment (see Figure 5.1).
In deductive reasoning, the subject sharpens cognitive attention in moving from
premises to conclusion where premises serve as attentional cues for logically
relevant contents, leading to increased cognitive focus in drawing logically entailed
conclusions. In symbolic logic, a capacity to construct proofs depends on attention
to logical form, this inculcated on the basis of developing attentional skills through
joint attention with an instructor in light of the norms of reasoning. Hence,
deductive action is regulated by rules of inference that, through intention, bias
cognitive attention in reasoning. Importantly, rules are not targets of cognitive
attention, as premises are. Instead, they regulate reasoning by setting attention as a
bias. We can thereby avoid the regress of rules (Carroll 1895) while providing rules
an explicit role in action (Section 6.4).
Finally, I close with introspection, a crucial source of data for many philosophical and empirical theories. While the use of introspection is central to philosophy
and in arenas that appeal to how things subjectively seem such as the science of
consciousness or medicine, we have no adequate theory of introspection as a
psychological phenomenon. For all we know, introspective deliverances are typically and systematically inaccurate. Whether this is so is an empirical question.
Claims about introspective accuracy or inaccuracy should be informed by understanding introspection as action, hence by the biology. There is a philosophical
consensus that introspection involves attention but with few details regarding
attention’s role. Philosophers often postulate a distinctive type of “internal attention” for which we have no good empirical evidence. The final chapter draws on the
theory of attention to explain introspective action. This provides a concrete basis for
justifying introspection’s use in specific contexts and for rejecting its deliverances in
others, some surprising. There is much work to do to improve our introspective
practices, and this begins with understanding intentional introspection as attention.
0.4 Chapter Summaries
Let me summarize each chapter. A list of propositions argued for in each section is
presented at the end of this introduction and can be read as a detailed summary of
the book.
Chapter 1 establishes action’s psychological structure as an input guiding an
output in solving a necessary challenge facing all agents, the Selection Problem.
Where the agent acts on an intention, intention solves the Problem, establishing
agentive control. The automaticity of action is defined by resolving a paradox of
automaticity and control.
Chapter 2 establishes that attention constitutes guidance in action and that
every action involves attention. Three basic attentional phenomena are identified:
vigilance as a readiness to attend, attention as guiding action, and attending as
action. Attention as the activity of guiding output response has explanatory
priority. It is guidance in action.
Chapter 3 establishes that intention is a type of memory for work and that the
literature on working memory reveals the dynamics of intention as the source of
agentive control. Drawing on the biology, intention is construed as an agent’s
being active, an active memory that works to establish vigilance and maintains
steadfastness in action, preventing distraction and slips.
Chapter 4 identifies intending as an action of thinking about what one is doing
in active remembering. Intending-in-action keeps time with action by updating its
content through continued practical reasoning: fine-tuning of intention’s content.
This explains the agent’s distinctive, privileged, non-observational access to her
action.
Chapter 5 explains that many biased behaviors of epistemic and ethical concern
are rooted in biased attention set by experience and learning. That negative and
positive biases in attention are learned places biased attention within a context of
normative assessment such as the standards for skill and expertise in a given practice.
Negative biases reflect an undesirable amateurism, incompetence, or viciousness.
Chapter 6 explains deductive reasoning as the development of the agent’s
cognitive attention where premises serve as cues for logically relevant guiding
features. As capacities for reasoning are learned, the development of abilities to
attend to logically relevant properties is an acquired skill and type of attentional
expertise. The exercise of these abilities can be explicitly controlled by the agent’s
grasp of inferential rules.
Chapter 7 explains introspection of perceptual experience as the distinctive
deployment of attention in accessing the conscious mind. Conditions for reliable
and unreliable uses of introspective attention in accessing perceptual consciousness are detailed. Salient cases of introspection in philosophy and psychology are
shown to be problematic. Principles for improving introspective practice are presented.
0.5 Acknowledgments
I have many intellectual debts. I am grateful to Steve Lippard, Amy Rosenzweig,
and the late Vernon Ingram for teaching me, years ago, to be a scientist. My work
bears the imprint of their mentorship. Mike Botchan, Barbara Meyer, Don Rio,
and Robert Tjian were among my teachers in graduate school and, at the end, tried
to help me find my way before my exit from science. I appreciate their efforts. I am
grateful to the Howard Hughes Medical Institute for a predoctoral fellowship. My
career didn’t pan out the way I expected, but I hope that this book shows the fruits
of that investment in a budding biologist.
The transition from science to philosophy was rough. One of my first philosophy courses was Martin Thomson-Jones’s graduate seminar at Berkeley, taken
right after I dropped out of science. Having never studied philosophy as an
undergraduate, I was in over my head. Martin read one of the worst seminar
papers, written by yours truly, but kindly gave feedback and encouragement over
coffee. Edward Cushman, whom I only knew at the time as one of the philosophy
grad students, stopped by while I was working in the departmental library to
encourage me after I had given an amateurish presentation in Martin’s class. It
was a random, deeply appreciated act of kindness. I am sure many of us have felt
imposter syndrome or uncertainty about whether we belong. Moments of encouragement can make a difference, so I want to thank Martin and Eddie in print for those
moments. They aren’t the only ones who helped over the years, but they did so at a
sensitive time.
There are too many people to list, conversations with whom have shaped the
ideas in this book. Many of you are perhaps reading this now. Though you are
unnamed, I hope you’ll know that I’ve learned from all those conversations and
that I look forward to more in the future.
There are many relevant works that I do not discuss in detail. To write a shorter
book (I know, this isn’t that short . . . ), I focus on selective points of clash and
contrast. Regretfully, much is left unsaid. To pick just two topics: on mental action
and cognition, there is important work by Peter Carruthers, Chris Peacocke, Lucy
O’Brien, Joelle Proust, and Matt Soteriou among others (see also a recent book
edited by Michael Brent and Lisa Miracchi Titus 2023) and on attention, work by
Imogen Dickie, Carolyn Dicey Jennings, Jonardon Ganeri, Abrol Fairweather and
Carlos Montemayor, Chris Mole, Declan Smithies, and Sebastian Watzl among
others. I apologize for the lack of sustained engagement and aspire to do so in
print in the future.
John Searle and Jay Wallace advised my dissertation where many of these ideas
began. Hong Yu Wong’s group at many points engaged with the ideas expounded
in the following pages, so thanks to him, Chiara Brozzo, Gregor Hochstetter, Krisz
Orban, and Katja Samoilova for making Tübingen an intellectual focal point for
me. A reading group at the Center for the Philosophy of Science (University of
Pittsburgh) provided helpful feedback. Thanks to Juan Pablo Bermúdez, Arnon
Cahen, Philipp Haueis, Paola Hernández-Chávez, Edouard Machery, and Adina
Roskies. In London, I worked through the manuscript with Zijian Zhu, Matthew
Sweeney, Jonathan Gingerich, Eileen Pfeiffer Flores, Chengying Guan, and Seth
Goldwasser in an on-line class during the lockdown. Thanks also to Bill Brewer,
David Papineau, Barry Smith, and Matt Soteriou for their help in making London
a great place to write a book, and for discussions.
I presented the material in various places in the US, UK, and Europe during the
pandemic. Thanks to Anita Avramides, Will Davies, Chris Frith, Anil Gomes,
Alex Grzankowski, Patrick Haggard, Zoe Jenkins, Mike Martin, Matthew Parrott,
Chris Peacocke, Harold Robinson, Jake Quilty-Dunn, Nick Shea, Sebastian Watzl,
and Keith Wilson for feedback. Francesca Secco worked through the material with
me and organized a class in the University of Oslo that I taught on the book. I am
grateful to her and the students for comments.
I have benefited greatly from philosophers at the Human Abilities Project,
Berlin. Barbara Vetter and Carlotta Pavese had their reading group dissect
Chapter 5 and Sanja Dembić and Vanessa Carr and their group worked through
an earlier paper on which Chapter 2 is based. Sanja and Vanessa organized an online workshop on my manuscript. I am grateful to the commentators: David
Heering, Vanessa Carr, Helen Steward, Sarah Paul, Chandra Sripada, Carlotta
Pavese, and Christopher Hill. Thanks to Denis Buehler, Steve Butterfill, Kim Frost,
Thor Grünbaum, Aaron Henry, Liz Irvine, Matthias Michel, Myrto Mylopoulos,
and Josh Shepherd for comments. Dan Burnston and Mike Roche later weighed
in. Aaron Glasser and Malte Hendrickx organized a reading group at Michigan to
work through the book, and Gabe Mendlow, Catherine Saint-Croix, Jonathan
Sarnoff, and Laura Soter gave helpful feedback. Recent discussions with Denis
Buehler, Liz Camp, Piera Maurizio, Tom McClelland, Jesse Munton, Susanna
Siegel, Sebastian Watzl, and Ella Whiteley on salience helped me bring Chapter 5
into shape.
I thank two referees for helpful feedback, especially reader “X” for detailed and
generous comments that provided timely encouragement. Years ago, Peter
Momtchiloff asked some questions of a young philosopher wandering around
the APA, jotted down a few things in his notebook, and would ask about my proposals when we later crossed paths. This book is tenuously related to those grandiose plans. My thanks to Peter for following up and for supporting this project,
to Tara Werger who helped me prepare the manuscript and deal with pesky
permissions with an occasional tidbit about the London theater scene, and to
Rio Ruskin-Tompkins for a fantastic cover (more on that in a moment).
0.6 Family
My wife and I, with our youngest daughter, travelled to London, U.K. for a sabbatical in February of 2020.
It was not the sabbatical we planned for.
Still, there were blessings. The U.K. lockdown had unexpected benefits in
providing space and time to write a book in a quiet, subdued London. Our oldest
daughter, forced out from college due to the pandemic, came to stay as well. We
endured the lockdown together as a family.
In thinking about family, let me complete the circle at last but most assuredly
not least: to my beloved daughters, Madeleine and Eleanor (Pei and Mei), thank
you for making this actual timeline the best possible one. I am also grateful to
Madeleine for the image that graces the cover. Providing a glimpse of a quiet
London Underground station during the pandemic as a train slips by, her
photograph perfectly captures the book’s title and the mood of London during
the time in which much of this work was written.
Ok, let’s begin.
Claims by Section
1.1 An agent’s acting intentionally has a psychological structure: an agent responds guided
by how she takes things given her intending to act.
1.2 Action as a structured phenomenon arises from a Selection Problem, a necessary
challenge facing agents, one set by an action space constituted by paths that link
inputs, the agent’s taking things, to outputs, the agent’s capacities for response, where
a path implemented is the agent’s acting.
1.3 The agent’s intentionally acting is a solution to the Selection Problem due to her
intending to act in the relevant way serving as a bias that explains why the Problem is
solved in the way that it is.
1.4 Automaticity and control pervade intentional action and can be rigorously defined:
features are controlled by being intended, and those that are not intended are
automatic.
1.5 Bias is a necessary feature of action and can be tied to control or automaticity.
1.6 Control in intention is revealed behaviorally, biologically, and computationally in
biasing relevant input and output states in accord with the content of the intention.
1.7 Biasing by intention involves the intention cognitively integrating with input states to
solve the Selection Problem consistent with the intention’s content.
1.8 In learning to act, acquiring the appropriate biases, a subject comes to directly intend
to act in shifting the balance between automaticity and control.
1.9 As theories that identify actions in terms of their causes make the subject qua agent a
target of control, not in control, the agent’s causal powers, tied to her intending and to
her taking things, must be internal, not external, to her action.
1.10 The mental elements of the agent’s action identify the agent’s being active though
short of acting, for her being active partly constitutes her acting.
2.1 There are three salient modes of attention: vigilance, attentional guidance, and
attending as action.
2.2 Attention is mental guidance in action, the agent’s taking things informing response.
2.3 Attention as guidance, a necessary part of a solution to the Selection Problem, is
present in every action.
2.4 Attention is not a mechanism modulating neural activity; rather it, as a subject-level
activity of guiding response, is constituted by specific neural modulations.
2.5 Attending as an action is guided by attention, often with improved access to its
target.
2.6 Attention can be both automatic and goal-directed by being sensitive to the agent’s
many biases.
2.7 Attentional capture is a form of passive agency but is distinct from the passivity of
perceiving, a behavior that is never an action.
2.8 Attention is everywhere, largely automatic, mostly unnoticed.
2.9 When a subject attends to a target, she is acting on it.
2.10 Central cases of causal deviance involve the disruption of attention, hence the absence
of appropriate agentive guidance, a necessary feature of action.
2.11 Intention sets the standard for appropriate attentional guidance in intentional
action.
3.1 Intention is practical memory for work, actively regulating and maintaining action-relevant
capacities.
3.2 The agent’s remembering is her cognitively attending to mnemonic contents.
3.3 Working memory is memory for action, specifically for the control of attention and its
central executive component is the basis of the agent’s intention.
3.4 Empirical investigation of working memory as an executive capacity explicates the
activity of intention in setting attention.
3.5 Intention proximal to action is an active memory for work that modulates vigilance, a
propensity to attend.
3.6 Intention-in-action keeps the agent steadfast, sustaining attention against distraction
and preventing slips.
4.1 Acting on an intention involves the agent’s intending, a simultaneous action of
practical reasoning as she acts.
4.2 Practical memory is the basis of the agent’s conception of her action that renders it
intelligible to her.
4.3 Intending-in-action is constituted by fine-tuning of practical memory in practical
reasoning.
4.4 Fine-tuning is practical memory at work, its dynamics revealed in the dynamics of
working memory.
4.5 As the agent acts, keeping track of what she is doing involves the exercise of practical
reasoning as part of her developing her intending to act, as she acts.
4.6 Intending-in-action maintains distinctive, authoritative, and non-perceptual access to
action through practical reasoning.
4.7 By practical reasoning, the agent keeps time with her action in intending.
5.1 Bias is a critical factor explaining acting well or poorly, and accordingly, attention is a
critical factor in such explanations.
5.2 A central source of bias on attention is revealed in the setting of priority, including that
set by historical influences.
5.3 Epistemic bias often begins with biased attention.
5.4 Every movement involves a mental guide in attention, so no action is “purely bodily”
including overt visual attending.
5.5 Virtuous automatic bias in visual attention can be learned through practice and
training as demonstrated in epistemic skill in medicine.
5.6 Gaze is a good whose distribution is automatically biased in ways that can have negative
consequences in academic and social settings.
5.7 Perception and cognition operate over a biased field, the set of inputs in an action space,
its structure revealed by automatic perceptual and cognitive attention.
5.8 Attention is a target of normative assessment, and the panoply of biases on attention
provides a map for such assessments.
6.1 Reasoning is the deployment of skilled cognitive attention.
6.2 Deducing, on semantical accounts, is constituted by sharpening cognitive attention in
moving from premises to conclusion where said premises provide cognitive cues for
attention.
6.3 Rules typically contribute to control, rather than to guidance, and in this way a reasoner
can explicitly invoke rules.
6.4 Taking premises to support a conclusion is grounded in the acquisition of recognitional
capacities through rule-based control during learning that avoids Carroll’s regress of
rules.
6.5 Knowing how, understood as what we acquire in learning and practice, involves the
acquisition of schemas.
6.6 Learning shapes the agent’s knowledge of how to act, and this knowledge provides a
developmentally based bias on the agent’s action, one often coordinated with the
agent’s intention to act in the way learned.
7.1 Introspecting is like any action: there are contexts in which it is reliably successful and
contexts in which it reliably is not.
7.2 As an action, introspection’s reliability is sensitive to task instructions.
7.3 Introspecting perceptual consciousness is guided by perceptual attention.
7.4 Simple introspection draws solely on perceptual experience as constituting introspective attention, and its reliability is a function of the reliability of the components of
perceptual judgment.
7.5 Complex introspection, typically used in philosophy, can be reliable, but it is challenged
by multiple sources of noise.
7.6 It is not clear that introspection can adjudicate metaphysical debates about perceptual
consciousness.
PART I
THE STRUCTURE OF ACTION
AND ATTENTION
Action has a psychological structure with attention as a necessary part.
1
The Structure of Acting
1.1 Introduction
An agent’s acting intentionally has a psychological structure: an agent
responds guided by how she takes things given her intending to act.
In this book, an agent’s acting intentionally means an agent’s doing things with an
intention. Although I focus on intentional mental agency, my theory applies to all
actions: intentional action writ large, unintentional and automatic actions, actions
done from emotions, implicitly biased actions, pathologies of agency, passivity
behavior, and so on.
Acting with an intention has a psychological structure:
An agent’s acting intentionally is the agent’s responding in a specific way,
guided by how she takes things, given her intending to act.
Given her intending to act on a drink, the agent’s visually taking in the glass guides
her reach for it (bodily action) or her encoding its location in memory (mental
action). Generalizing from intention to bias yields the basic structure of action:
An agent’s acting is the agent’s responding in a specific way, guided by how she
takes things given her biases.
Actions are movements of body and mind, transitions within an action space
constituted by the possible actions available to an agent in a context and time. In
intentional action, that structure has two salient components: guidance in the
agent’s taking things informing her response, and control in the agent’s intending
to act. Control sets guidance. This chapter explains these ideas.
1.2 The Selection Problem and the Structure of Acting
Action as a structured phenomenon arises from a Selection Problem, a
necessary challenge facing agents, one set by an action space constituted
by paths that link inputs, the agent’s taking things, to outputs, the agent’s
capacities for response, where a path implemented is the agent’s acting.
A behavior space describes behaviors available to a system at a time and context.
The space is constituted by potential couplings of inputs to outputs, a causal
linking of both. A system’s behavior is the instantiation of a specific coupling, a
path in the space. Behavior spaces describe physical systems whose behavior is
decomposable into input and output mappings where the input explains the output.
Action verbs describe such behaviors: plants turn to the light, machines sort
widgets, a fax transcribes a message. Since these systems lack mentality, they
exhibit mere behaviors. They are not acting in the sense at issue where action
can be rational, reasonable, appropriate, skillful, clever, clumsy, moral, or freely
done. Behavior spaces of minded systems have input takings that are mental
phenomena with intentionality. Taking things is a term of art, referring to
subject-level states, intentional mental phenomena such as perception, memory,
imagination, and thought.
The behavior spaces within which action emerges are thus psychological behavior spaces, specifically action spaces. To be an action space, input-output couplings must meet a certain profile: the intentionality or content of the subject’s
taking guides her response. To take a basic form of guidance, one’s visual
experience identifies a location to respond to. The experience guides by providing
spatial content that informs a movement. Guiding content explains why the
response occurs as it does. How content precisely sets response will be relative
to the action kind in question, say setting parameters for movement or encoding
content in memory.
Action spaces are a subset of psychological behavior spaces which are a subset
of behavior spaces. In the psychological behavior space, the basic structure of
behavior is:

    psychological input → response

For example, the psychological input might be a perceptual experience of something X in the world, so a typical case will be:

    perceptual experience of X → response

As perceptual experiences are responses to the world, here stimulus X, we have:

    stimulus X → perceptual experience of X → response

There need not be a perceived stimulus, for some behaviors are driven
by memory or hallucination. All behaviors begin with a mental input, the subject’s
taking things: a subject’s perceiving an object’s having a property is the subject’s
perceptually taking the object to have a property; a subject’s perceiving an object is
the subject’s perceptually taking in the object and so on.
To uncover action’s structure, contrast action with reflex. An agent undergoing
a reflex evinces behavior but not action. Action admits of the qualifications
noted earlier. Reflexes, however, do not express rationality, grasp of reasons,
skill, or expertise. They are not targets of normative assessment, are never
intentional or done freely. They lie outside the set of actions. I will focus on a
class of reflex where a mental input guarantees a response. This excludes
spinal cord mediated reflexes since these are not guaranteed in the sense
at issue.
What is it about reflexes that rules out action? I suggest that it is the
necessitation of response. Consider an engineer who programs a reflex in a
robot, a system-preserving response to a dastardly danger. The engineer aims
to ensure that the response necessarily occurs if danger is detected, yet the
system can fail to escape. In response, the engineer rejiggers the system to
eliminate such failures. Although a physical system can never be so modally
robust as to rule out all possible failures, the engineer’s ideal is to asymptote to a
limit where one has a reflex which could not possibly fail, a necessitation that
the engineer aspires to but can never attain. It is not attained by biological
reflexes.
To render salient the necessitation at the core of the contrast to action, focus on
reflex at the idealized limit, a pure reflex which eliminates all other behavioral
possibilities. The corresponding behavior space is a single path:

    input → response

Pure reflexes rule out agency by eliminating alternative behavioral possibilities.
The subject is purely a patient suffering a change. So, consider any world in which
a subject generates a behavior.
1. If a subject undergoes a pure reflex, this behavior is not her acting.¹
Equivalently:
2. If a subject’s behavior is her acting, then her behavior is not a pure reflex.
If a behavior is not a pure reflex, it lacks the individuating feature of necessity. If
so, another mapping is available, another possibility, most simply (though other
mappings are possible):
Figure 1.1 In a branched behavior space, the input is mapped to two possible
responses to it. The input is darker because it is an active response to a stimulus, the
responses lighter because they are not yet engaged. Dotted lines indicate possible
couplings.
A behavior space that excludes pure reflexes involves additional behavioral
possibilities, a branched space. So
3. If a behavior is not a pure reflex, it occurs within a branched behavior space.
Thus,
4. If a subject’s behavior is her acting, then it occurs within a branched behavior space.
Every instance of the agent’s acting is an actual input-output coupling among
possible couplings.
In real life, behavior spaces are highly branched. As shown in Figure 1.2,
a common space is a many-many mapping (in cognitive science, see Desimone
and Duncan 1995; Miller and Cohen 2001, 167–8; and Appendix 1.1). Branching
raises a Selection Problem. To act, the agent must respond to how she takes things.
By hypothesis, that link is not a pure reflex so it is one among other possibilities.
Thus, branches delineate a space of possible behaviors. The Selection Problem
requires selection among possibilities where the solution, the path taken, just is the
agent’s acting, the coupling of an input to an output. The Problem underscores
that we cannot instantiate all behavioral combinations. For example, an agent can
multi-task, say perform two actions at a time, but also perform each action singly.
The agent cannot both multi-task and singly-task. One of these options, among all
options, must be instantiated to solve the Selection Problem. Otherwise, no action
is performed (cf. Buridan Action Spaces and death, Section 1.5). For exposition,
I focus on a single path.
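The structure just described admits a toy computational gloss. The following sketch is my own illustration, not the author’s formalism: it models an action space as the set of input-output couplings and treats “selection” as the instantiation of one path fixed by a bias (here, an intention).

```python
# A toy model (illustrative only) of a branched action space: inputs are the
# agent's takings, outputs her response capacities, a path is a coupling.
from itertools import product

def action_space(takings, responses):
    """All possible input-output couplings (paths) available at a time."""
    return set(product(takings, responses))

def is_branched(space):
    """Excluding pure reflex requires more than one possible path."""
    return len(space) > 1

def solve_selection_problem(space, bias):
    """The path actually instantiated; 'bias' ranks the possible paths."""
    return max(space, key=bias)

# Two takings (seeing the glass, recalling its location), two responses.
space = action_space({"see glass", "recall location"}, {"reach", "encode"})
assert is_branched(space)  # a pure reflex space would be a single path

# An intention to drink prioritizes the visually guided reach.
intend_to_drink = lambda path: path == ("see glass", "reach")
acting = solve_selection_problem(space, intend_to_drink)
```

On this gloss, the bias does not perform a further action of selecting; it simply determines which of the already-possible couplings is instantiated.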
Figure 1.2 The Selection Problem in its many-many version, linking many inputs to
many response outputs, these possible couplings defining the action space. In the
figure, only two inputs and two outputs are depicted, but typically, there are many
more inputs and responses (imagine two very long vertical columns and a mass of
connections linking them). The inputs are darker because they are active responses to
the stimuli, the responses lighter because they are yet to be activated.
5. For a behavior to occur within a branched behavior space, that behavior must involve a coupling that is one among others.
If we understand the instantiation of a coupling to be what is meant by “selection”
of one among many behavioral paths—where such talk does not suggest an
additional action of selecting—then
6. For a behavior to occur within a branched behavior space, that behavior is a selected path, one that is actualized among other possible paths not taken.
Selection is just the mapping from a Problem to its solution in an action
performed. Then,
7. If a subject’s behavior is her acting, then that behavior is a selected path, one selected among others.
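The chain of conditionals can be compactly checked; the following is a propositional sketch of my own devising, not a formalization the author offers:

```lean
-- Premises (2), (3), and (6) as conditionals; (7) follows by chaining them.
variable (Acting PureReflex Branched Selected : Prop)

example (h2 : Acting → ¬PureReflex)      -- (2)
        (h3 : ¬PureReflex → Branched)    -- (3)
        (h6 : Branched → Selected) :     -- (6)
    Acting → Selected :=                 -- (7)
  fun ha => h6 (h3 (h2 ha))
```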
This provides the structure of an agent’s acting in the worlds we are considering,
every possible world where there is non-reflex behavior by subjects. This behavior
is constituted by coupling the agent’s taking some X to the agent’s response to X
against the background of a branched action space. We have arrived at a necessary
structure of action, mental or bodily.
Figure 1.3 An action is a solution to the Selection Problem. The particular act is
categorized as of the type Φ. Φ-ing is the response R1 mapped to how the agent takes
the stimulus S1. The input guides the output in that its content explains the production
of R1. The action is a process that takes time, the input’s guiding response (dark solid
arrow). Both inputs are active but only one guides behavior. Dotted arrows indicate
possible paths not taken. Dark circles indicate active nodes, lighter circles inactive
nodes.
Let me make a few general comments. First, applied to our world, the Selection
Problem captures a structural challenge for all agents. It presents a structure of
causal possibility that an agent must navigate to act. Second, a causal action space
identifies what the agent can objectively do at a time yet is detached from the
agent’s conception of her options. To capture her perspective, we focus on a
doxastic action space largely delineated by her beliefs about her options.
A human agent typically only knows about a proper subset of the actual causal
possibilities. If some of her beliefs are false, the space, or portion of it, based on
what she falsely believes differs from the space anchored on what she knows.
Reasoning and learning can expand and alter the bounds of doxastic action spaces
relative to the causal action space.
In addition, among the actions the agent believes available, only some are
advisable or obligated. A normative action space delineates the actions an agent
ought to do, in the relevant sense of “ought.” Such spaces, defined by relevant
normative requirements, might contain only one action as when Martin Luther
protested, “Here I stand. I can do no other” (these paragraphs owe much to David
Heering). We can gloss the contrast in action spaces by saying that what one
ought, or has reason, to do is less than what one can in actuality do. Indeed, the
agent might not know that she has such obligations. Decision making occurs
within doxastic and normative action spaces, but the nature of action in the basic
sense is explicated within the causal action space.
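The relations among these spaces can be pictured with simple sets. The membership examples below are my own and merely echo the cases in the text (flying an airplane before learning to, falsely believing one can fly by flapping):

```python
# A set-theoretic sketch (my illustration) of the taxonomy of spaces.
behavior_space = {"reflex blink", "reach for glass", "fly airplane"}
causal_action_space = {"reach for glass"}             # doable intentionally now
doxastic_action_space = {"reach for glass", "fly airplane", "fly by flapping"}
normative_action_space = {"reach for glass", "fly airplane"}  # ought to do

# What the agent can intentionally do lies within what the system can do.
assert causal_action_space <= behavior_space
# Doxastic space can outrun causal possibility (false beliefs)...
assert not doxastic_action_space <= behavior_space
# ...and what one ought to do can exceed what one has learned to do.
assert not normative_action_space <= causal_action_space
```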
Figure 1.4 A map of different types of behavior spaces and their relations. The map is
coarsely rendered since a single action space might have paths that occupy one region,
say a required action (normative space), and other paths that occupy another space,
say an action one believes falsely that one can do (doxastic space). Caveats noted, here
are key points. Behavior spaces present the things that an agent can do in the broadest
sense, so include reflexes. Action spaces present things that the agent can actually do
intentionally, so effectively are causal action spaces. Doxastic action spaces present
what the agent thinks (or knows) she can do intentionally, some of which might lie
outside of behavior space, so she cannot actually do them (e.g. fly by flapping arms) or
within the behavior space, but outside of the causal action space because she must learn
to do them (e.g. fly an airplane). An action space lying outside of behavior space is an
“action” space by courtesy (e.g. some of its paths might be within the causal action
space). Some doxastic spaces identify actions that are within behavior space but not
within the causal action space, things the agent can do but has not yet learned how to
do (Section 1.8). Learning brings a behavior into action space. Normative action spaces
identify things the agent ought to or must do, relative to a normative system. This
includes things that she has not yet learned how to do (outside of her causal action
space) and things that she might not realize that she has to do (outside of her doxastic
action space).
There are ways of bypassing specific premises or weakening the conclusion that
are sufficient for my purposes in this book. For example, one can enter the
argument at premise (4) which identifies a branched structure for action. That
premise recapitulates a common idea that, in general, an agent could have done
otherwise than she did. On my picture, this requirement is not a condition on free
action but on action in the basic sense (cf. Steward 2012a). Alternatively, one can
endorse a weaker conclusion: the action kinds of interest to philosophy and
science are solutions to Selection Problems even if not all actions are. Of course,
I think the argument goes through! Necessarily, all actions are solutions to the
Selection Problem.
1.3 Intentions and Intentional Action
The agent’s intentionally acting is a solution to the Selection Problem
due to her intending to act in the relevant way serving as a bias that
explains why the Problem is solved in the way that it is.
“Intention” as I use the term refers to the agent’s representing an action as to be
done so as to bring about that action. In experiments, intentions are set by task
instructions. Philosophers also treat intentions as bound up with practical reasoning. I focus on a basic form of practical reasoning, what I shall call fine-tuning,
the breaking down of the intended action to enable an agent to perform the action
directly (e.g. instructions in teaching; Section 1.6 and Chapter 4). This allows that,
in humans, intentions are also linked to a more substantial form of practical
reasoning, namely planning (Bratman 1987).
Intentions explain why the agent acts as she does: a specific path is taken
because it is what the agent intends to do. The content of the agent’s intention
specifies a solution to the Selection Problem. Intention prioritizes one path over
others in action space, specifically, the path corresponding to the kind of action
intended. Consider a basic case: for visually guided acting, the content of intention
sets what visual target, hence visual taking, guides response. The agent’s intention
biases input processing and required output response. “Bias” refers to the sources
that influence solving the Selection Problem while “biasing” refers to that source’s
shifting priorities in action space (Appendix 1.1 describes a classic connectionist
implementation of similar ideas for the Stroop effect).
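The biasing idea admits a toy numerical gloss in the spirit of the connectionist Stroop models just mentioned; the numbers below are illustrative assumptions, not the model of Appendix 1.1.

```python
# Toy Stroop-style sketch: two habitual pathways compete; intention adds
# top-down bias so the task-relevant (weaker) pathway wins the competition.
def response_strength(pathway_strength, intention_bias=0.0):
    """Output activation = habitual pathway strength plus top-down bias."""
    return pathway_strength + intention_bias

word_reading = response_strength(0.8)   # strong, overlearned pathway
color_naming = response_strength(0.4)   # weaker pathway
assert word_reading > color_naming      # unbiased, habit dominates

# Intending "name the ink color" biases the relevant input-output coupling,
# shifting priorities so the intended path solves the Selection Problem.
color_naming = response_strength(0.4, intention_bias=0.6)
assert color_naming > word_reading      # the intended path is now selected
```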
Intention is one kind of bias. In visually guided intentional action, intention
biases appropriate visual selection, the agent’s visual attunement to a guiding
feature of the action’s target. Guiding features inform response, say the spatial
contour of an object that guides a grasping movement or the object’s individuating
feature that informs categorization (“That’s a bald eagle!”). In light of a broad
sense of “why,” guiding features explain why the response occurs in the way that it
does. Attunement to guiding features is attention (Chapter 2).
Causal theorists of action have long worried about deviant causal chains,
counterexamples to proposed sufficient causal conditions for intentional action
(Section 2.10). Thus, the murderous nephew driving to kill his uncle is so
unnerved by his intention that he drives recklessly and kills a pedestrian who
happens to be his uncle (Chisholm 1966). His intention caused the death, but
philosophers agree that the killing was not intentional. If so, the standard analysis
fails to explain agentive control and guidance.
What is it for an agent to be in control and to guide her behavior? Control is
constituted by how one’s intention biases the inputs and outputs that are coupled
when the agent acts as intended while guidance is expressed in the subject’s taking
things informing response. The agent’s intention imposes a form on action by
representing a solution to the Selection Problem and biasing a relevant coupling
between input and output. This coupling instantiates the action intended. The
structure gives a geometry of intentional agency, a “triangular” form embedded in
a network of behavioral possibilities that defines an action space for an agent:
Figure 1.5 Intention provides a bias in solving the Selection Problem. The action path
is chosen because the agent intends to do just that. In that way, intentions “solve the
Problem.” The downward arrows from intention directed at both input and output
identify relations of biasing, explicated in the text as cognitive integration (Section 1.7).
The intentional action is an amalgam of (1) the intention, (2) the input taking of S1,
and (3) the response R1 guided by the input (dark solid horizontal arrow). Action is
represented in the triangular structure at the top in darker lines. The input taking
responding to S2 is active but does not guide behavior (e.g. the subject sees S2 but does
not respond). Dotted lines indicate couplings not taken, darker arrows indicate causal
processes.
This structure will be used to explain specific types of intentional action: If
perception provides the input (S1), we have perceptually guided action; if memory, then mnemonically guided action; if cognition, then cognitively guided
action, and so on. Chapter 2 argues that the input that guides action constitutes
attention, say perceptual, mnemonic, or cognitive attention. Further, different
mental phenomena can bias which input and output are coupled including
emotion (Section 1.5), memory (Chapters 3 and 4), experience and learning
(Chapter 5), and knowledge of rules (Chapter 6) in addition to intention. In the
next section, I argue that bias from intention identifies agentive control.
1.4 Control and Automaticity
Automaticity and Control pervade intentional action and can be
rigorously defined: features are controlled by being intended, and
those that are not intended are automatic.
In intentionally acting, the agent expresses control. She can intentionally move
objects, come to conclusions, and, in general, change the world. At the same time,
action exhibits substantial automaticity. Consider recalling events from the previous evening’s soiree. You express control in remembering that specific event
rather than last week’s party. Yet much of memory is automatic as when recalling
a cutting remark overheard or the foul flavor of the appetizers. These memories
just spring to mind (Strawson 2003; Mele 2009).
An adequate specification of skilled action and learning requires technical
notions of automaticity and control for which no adequate analysis in philosophy
or psychology exists (see Moors and de Houwer 2006). Psychologists have abandoned a systematic analysis, opting instead for rough-and-ready lists of attributes
of each (Palmeri 2006). Philosophers of action have aimed to explain control, and
recent work on skill has highlighted automaticity in action (e.g. Fridland 2017).
Yet the concepts as used lead to a paradox. Begin with two truths about action:
1. Acting intentionally exemplifies agentive control.
2. Acting intentionally is imbued with automaticity.
At the same time, cognitive science affirms a Simple Connection.
3. Control implies the absence of automaticity and vice versa.
If intentionally acting involves control yet control implies the absence of automaticity, then the second proposition is false. The theory of automaticity and control
in agency is inconsistent.
Shouldn’t we reject (3)? The claim is central, fixing a specific conception of
control by tethering it to automaticity’s absence. Jonathan Cohen notes: “the
distinction between controlled and automatic processing is one of the most
fundamental and long-standing principles of cognitive psychology” (2017, 3).
In foundational papers, Walter Schneider and Richard Shiffrin (1977) proposed
that automaticity involves the absence of control and that control should be
understood in terms of the direction of attention. Yet attention can be automatic,
as when it is captured by a salient stimulus (e.g. a flash of light or loud bang;
Section 2.5). As time went on, psychologists failed in their attempts to specify
sufficient or necessary conditions for control and automaticity. Where one psychologist explicated control (or automaticity) by appeal to feature F, say consciousness or task interference, another would empirically demonstrate that F was
also exemplified by automatic (or controlled) processes (see Appendix 1.1).
Psychologists consequently shifted to a gradualist rather than a categorial
characterization, affirming that automaticity or control can be more or less. This
idea led to proliferating lists of features correlated with automaticity and control
(Wu 2013a; Moors 2016; Fridland 2017). Yet even gradualists retain the Simple
Connection, for they organize correlated features in terms of automatic and
controlled processes. Thomas Palmeri (2006) in an encyclopedia entry on automaticity lists 13 correlated pairs of features along the dimensions of automatic or
controlled processes. Similarly, the much discussed Type 1 / Type 2 and System 1 /
System 2 distinction draws on such a division.² We cannot reject the Simple
Connection without rejecting substantial psychological work.
An advance in the theory of agentive control can be secured if the three claims
can be rendered consistent. They can. The problem is that (3) is generally read
as distinguishing kinds of processes, yet given (1) and (2), intentional actions are
a counterexample to this interpretation. The solution is then simple: reject dividing processes as automatic or controlled. Rather, automaticity and control
in agency are to be explicated in terms of features of processes. Accordingly, a
single process can exemplify both automaticity and control. Intentional action
certainly does.
(1) is explained through the role of intention as providing the agent’s conception of her action. Elizabeth Anscombe’s (1957) conception of an action being
intentional under a description highlights how certain features of an agent’s doing
something are tied to what she intends to do, to her conception of her action. The
features that are the basis of such descriptions are represented by the agent’s
intention. In intending to Φ, the agent does something that has the property of
being a Φ-ing, say her intentionally pumping water, poisoning the inhabitants of a
house, or saving the world. This yields the following implementation of the first
proposition, for S’s Φ-ing at a time t (I suppress the temporal variable):
S’s Φ-ing exemplifies S’s control in action in respect of Φ-ing iff S is Φ-ing
because she intends to Φ.
The agent’s doing something describable as Φ is controlled in respect of Φ when
she does it because she intended to. Controlled features are intended features.
We now affirm the Simple Connection without paradox: control implies the
absence of automaticity and vice versa. This entails the second proposition in a
specific form:
S’s Φ-ing exemplifies automaticity in action in respect of Φ-ing iff S’s Φ-ing is
not controlled in respect of Φ-ing.
All unintended features of action are automatic. Given the finite representational
capacity of human minds, most properties of action will be automatic because we
cannot represent all these features in intention. Where the feature is automatic at a
time (or temporal range), then it is not controlled at that time (or temporal range),
and vice versa. This temporal indexing allows us to track transitions in automaticity and control during extended learning: at an earlier time, Φ-ing might be
controlled as the agent intends to practice Φ-ing, but with acquired skill, her
Φ-ing later becomes automatic as she need not explicitly think about it. Similarly,
when the going gets tough, the agent can transition from Φ-ing automatically to
doing it deliberately, bringing action under control precisely because she has to
think about what to do, fine-tuning her intention. Thus, on a straight road, my
driving is automatic as I converse with you, but noticing a car weaving erratically,
I update my intention to focus explicitly on my driving relative to it, so driving
now becomes controlled (on “fine-tuning” intention, see Chapter 4; on direct
intentions and learning, see Section 1.8 and Chapter 6; on acquired skills, see
Chapters 5 and 6).
Finally, with an eye toward capturing habits, environmentally triggered actions,
and pathologies:
S’s Φ-ing is passive when every feature is automatic.
Passively acting is completely independent of an agent’s intending to do so. Such
actions are fully automatic expressions of specific capacities for action. Since the
term “passive action” will strike readers as paradoxical, note that it is a technical
term in the theory to capture fully automatic action.
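The three technical definitions just given can be transcribed almost directly. The sketch below is a simplification for illustration: the “because she intends to Φ” causal condition is collapsed into mere membership of a feature in the set of intended features, and the feature names are invented, not drawn from the text.

```python
# Transcription of the technical definitions: a feature of an action is
# controlled iff it is intended (simplifying the causal "because" clause
# to set membership); automatic iff not controlled; the action is passive
# iff every feature is automatic. Feature names are illustrative.

def controlled(feature, intended_features):
    return feature in intended_features

def automatic(feature, intended_features):
    return not controlled(feature, intended_features)

def passive(action_features, intended_features):
    return all(automatic(f, intended_features) for f in action_features)

# Reaching for a mug: the agent intends the grasping, but the precise
# grip aperture and trajectory are not represented in her intention.
features = {"grasping-the-mug", "grip-aperture-7cm", "curved-trajectory"}
intended = {"grasping-the-mug"}

assert controlled("grasping-the-mug", intended)
assert automatic("grip-aperture-7cm", intended)
assert not passive(features, intended)   # one feature is controlled
assert passive(features, set())          # no intention: fully automatic, hence passive
```

Note how the model captures the finitude point: however many features the action has, only those few represented in intention come out controlled; everything else defaults to automatic.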
There are different notions of control used in cognitive science, philosophy, and
other domains that are conceptually disconnected from automaticity, and, as such,
they are distinct from the agentive notion tied to the three truths noted earlier. Of
these, some are psychological or neural notions that might be relevant to a full
biological account of human agency and so are compatible with the analysis I have
provided (e.g. forward models in motor control theory, Wolpert and
Ghahramani 2000), which have been influential in theories of schizophrenia
(Frith, Blakemore, and Wolpert 2000; Cho and Wu 2013). Those concepts are
not my primary concern since a philosophy of action that disregards the role of
automaticity in agency, as a contrast to control, is incomplete. Automaticity
pervades agency and is tied to a form of control that must be understood whatever
additional notion of control one invokes. The three claims I have rendered
consistent allow a truth about intentional agency expressed by Anscombe to
intersect a foundational line of thought in empirical psychology.
Automaticity emphasizes that even once we have formed a decision, there are
further problems to be solved. Consider two actions, one bodily, one mental.
When one intends to drink a cup of coffee, this biases an input, one’s visual
experience of the drink, and an output, a “reach-and-grasp” response to be
coupled to the experience. One’s subsequent reaching and grasping is guided by
one’s seeing the target. Still, there are many ways one’s experience of the drink can
inform a movement that brings it to one’s mouth. For example, there are many
ways to grasp the target, say with one’s whole hand or with thumb and index
finger. Each requires different visual information. Even with the same type of
grasp, no two instances need be qualitatively the same (see Wu 2008 for a
discussion). Similarly, if one wants to figure out how to get to Timbuktu, there
are different ways of doing so: recalling a common route, visually imagining
oneself taking various options, deducing the best option given a set of constraints,
and so forth. Rarely are two deliberations on the same topic the same.
That there are one–many relations between intention and execution establishes
the necessary automaticity of action. The challenge of action-relevant selection is
not discharged just by intending to act. If one intends to drink the coffee, the
targeted mug presents many properties only some of which are relevant to guiding
an appropriate reach-and-grasp movement. Thus, the color of the mug is not
relevant but the location and contours of the handle are. Even then, given different
ways to pick up the mug, action-relevant properties can be further subdivided.
Intention cannot represent all these properties.
The same points can be made regarding deliberation that draws on imagining,
deducing, or recalling. The results of deliberation must be at a finer grain than
what we intend. For example, in recollection, we intend to figure out how to get to
Timbuktu, but the result is remembering a specific way to Timbuktu, perhaps
one among many possible routes. If the intention represented the requisite
route, there would be no need to deliberate since the result would already be in
mind (Strawson 2003). It is because intention begins with an abstract representation of action that further Selection Problems must be solved to engender a
concrete action.
These further Selection Problems are not to be solved by deliberation or
through conceptualization, at least not completely (I called such problems non-deliberative problems; Wu 2008). A distance necessarily remains between the
abstractness of the action representation in intention and the concrete details of
the action performed. This distance entails the necessity of automaticity in the
actions of finite minds, for not every property of the action done can be controlled
since not every property can be represented in intention. The distance can be
narrowed by further practical reasoning that fine-tunes the content of the intention to render it more specific. Still, a gap will always remain given limitations on
what we can hold in mind. The intention can never specify all relevant details of
action at the precise level of determinacy.
It is easy to conflate automaticity with reflex where the latter suggests the
absence of agency. Yet while reflexes are by definition automatic, indeed passive
in the technical sense, automaticity does not imply reflex so does not imply the
absence of action. Automatic activities that contribute to action also involve the
agent. Consider the bodily action of intentionally reaching for a cup. Reaching
requires not just visually experiencing the target, but also selecting its action-relevant properties among irrelevant properties. These guiding features, here
spatial properties of the cup, need not be represented in intention, but the agent
must be sensitive to them to guide her bodily response. Is this sensitivity at the
subject level attributed to the agent rather than to some part of her?
Assume that the subject’s involvement is only at the level of the visual state that
reflects the abstractness of the intention’s content, say that of seeing the mug. The
agent’s involvement stops at her intention and its abstract content. Yet seeing the
mug does not suffice for guiding a precise reach-and-grasp response for one must
also be sensitive to—take in—the action-relevant properties such as the mug’s
precise location, its shape, and its orientation. If the subject’s involvement is
restricted to those features of action in the scope of her intention, then the
automatic visual attunement to specific action-relevant properties is not subject
involving. The subject would be like the factory manager who never does any work
except give orders. Others make things happen. Similarly, detailed guidance in action
would be instituted by something that does not involve the agent. The agent can
only wait and see if things unfold as she intends. She contributes nothing beyond
having an intention. This picture abolishes the agent acting. Rather, the subject
acts only if guidance is due to her take on things, even at the level of fine-grained
attunement. Not only is the agent in control in intentionally acting, she is also
the guide.
Let me emphasize that what has been introduced is a technical characterization
of automaticity and control, two concepts necessary for an adequate characterization of action. We should stop using central notions non-technically. For
empirically oriented philosophers of mind who (should) accept the Simple
Connection, the concepts are incoherent given the paradox. My resolution has
several advantages beyond securing coherence. It explicates automaticity and
control in light of the Selection Problem and stays true to the Simple
Connection. Further, it defines control and automaticity sharply at a time and
allows for gradualism across time in giving a precise sense to talk of more or less
automaticity/control. Gradualism emerges because the analysis allows for the
extremes of passivity and of full control and everything in between (I discuss
gradualism further when examining learning; Chapters 5 and 6). The analysis
conceptually unifies philosophical and empirical concerns, so the definitions
should be our working account of automaticity and control. If I may: I urge
readers to take the analysis on board or do better.³
1.5 The Necessity of Bias for Action
Bias is a necessary feature of action and can be tied to control or
automaticity.
Intentional agency involves a tug of war between automaticity and control rooted
in different sources of bias. Action, intentional or not, necessarily involves bias.
Consider the Selection Problem in a Buridan Space:
Figure 1.6 A Buridan Action Space where two objects, S1 and S2, are qualitatively
identical such that the agent has no basis on object features alone to choose one to
act on.
A donkey sees two qualitatively identical bales of hay equidistant from it, one to
the left (S1), one to the right (S2). The animal’s action space consists of two inputs
and one output, an eating response. So, the donkey can eat the bale on the left or
on the right. If the donkey has no intention to act, there is no bias from intention.
Absent another bias, the Selection Problem remains unsolved leading to death.
Action requires bias.
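The Buridan argument can be made vivid with a small simulation. This sketch is purely illustrative (the priority values and the tiny noise magnitude are invented): with qualitatively identical inputs and no bias, the competition is a dead tie and the Selection Problem goes unsolved; any bias, even spontaneous neural noise, breaks it.

```python
import random

# Buridan space: two qualitatively identical inputs, one eating response.
# With no bias the competition ties; a bias (intention, emotion, or mere
# noise) breaks it. All numbers are illustrative.

def solve_selection(priorities):
    """Return the selected input, or None if the Problem is unsolved (a tie)."""
    best = max(priorities.values())
    winners = [s for s, p in priorities.items() if p == best]
    return winners[0] if len(winners) == 1 else None

identical = {"left-bale": 1.0, "right-bale": 1.0}
assert solve_selection(identical) is None   # no bias: the donkey starves

# Spontaneous activity perturbs the perceptual priorities and decides.
random.seed(0)
noisy = {s: p + random.uniform(-0.01, 0.01) for s, p in identical.items()}
assert solve_selection(noisy) is not None
```

The anticipation here is deliberate: the same tie-breaking role played by noise in this toy is played by intention, emotion, or salience in the cases that follow.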
Bias can emanate from other mental phenomena. Consider Rosalind Hursthouse’s
discussion of arational actions, actions driven by emotion (Hursthouse 1991). When
emotions motivate action, they must solve the Selection Problem. Like intention, they
bias an action. That Alex is angry at Sam rather than Chris explains why Alex is
focused on Sam in lashing out. Absent an intention to lash out, Alex’s action being an
outburst is by definition automatic. Emotion leads to action by occupying the biasing
role first identified by appeal to intention:
Figure 1.7 Emotion such as anger can provide a bias in solving the Selection Problem.
Note that emotion and intention can work together or against each other in
biasing solutions to the Selection Problem. Indeed, in typical action, there are a
variety of biases, congruent and incongruent (see Chapter 5 on implicit, automatic
biases and Figure 5.1).
The features of an action generated by emotion are automatic, and a purely
emotion-driven action is technically passive. Consider:
S’s Φ-ing at t is emotionally responsive in respect of T iff S is Φ-ing at t
because she is in an emotional state directed at T.
A person can be said to be controlled by his emotion, speaking loosely, when an
emotion is the bias in action. Someone lashing out in blind rage is a slave to
passion even if emotionally responsive. The struggle between control and automaticity is highlighted when we try to get our emotions under control, say by
doing something intentionally, hence with control, to disrupt the force of emotion.
Here, one bias attempts to cancel out another (cf. reward bias in Section 5.2).
Automatic biases will figure throughout the book as they figure pervasively in
lived action.⁴ As noted in the Introduction (Figure 0.1), the most general structure
of action is as follows:
Figure 1.8
An input can also be biased by random activation sufficient to guide a
response, say due to noise in the neural basis of the input. This can lead to
thoughts that randomly pop into one’s head or the feeling of a cell phone buzzing
in one’s pocket when there is no cell phone. Spontaneous activity of the neural
basis of thought or perception drives a response. Buridan’s donkey might find
itself munching on the left bale precisely because spontaneous neural activity
altered the perceptual representation of that bale to drive a response, automatically. Perhaps that bale suddenly seems closer, larger, more delectable, and now
the donkey moves. Similarly, I suddenly reach for my cell phone when
I hallucinate a tactile buzzing in my pocket. In veridical perception, we speak of
a “bottom-up” bias tied to the capture of attention by a salient stimulus. Salience as
conceptualized in cognitive science provides a basic bias (Section 2.6). A sudden
flash or loud sound grabs one’s attention. Here, attention is pulled automatically,
without intention, emotion, or other mental attitudes as bias. The capture of
attention by salience is a basic passive movement of mind. We began with bias
in control due to intention, but most bias is tied to automaticity and provides the
lion’s share of what shapes actions (Chapter 5).
Human action spaces are constituted by action capacities that can be expressed
in light of an appropriate bias. The agent’s ability to act in a certain way, to Φ, is
depicted in an action space articulated by specific input-output paths. Each path
identifies what an agent can do. Path expression is the agent’s acting. An action
capacity is constituted by action-relevant capacities: psychological inputs, say
perceptual or mnemonic capacities, and outputs such as capacities to move the
body or to encode a content. The expression of these action-relevant capacities
constitutes the agent’s being active as part of her action. Each action-relevant
capacity is the target of biasing. If the bias is an intention, these capacities are
the target of cognitive integration (Section 1.7). Action capacities are individuated
as such in light of being potential targets of intention, and so as potential parts of
intentional action.
There might be agents who do not form intentions and in whom other mental
phenomena play a functionally similar role, say those with rudimentary beliefs
and desires. There might also be creatures driven largely by emotions or perceptual salience. In relation to human intentional action, their behavior is fully
automatic hence passive, driven by passion or the environment. The Selection
Problem and the concept of bias delineate many kinds of agency, of which the
human form, planning agency with its functionally rich form of intention, is
just one. The notion of control is conceptually tied to a specific form of
intentional action that humans exemplify, but an expansive theory of agency
including the agency of non-human animals will eventually opt for a broader
conception of control tied to varieties of executive function. That project is a
comparative biology of agency I do not take up here.
We can explain passive action, understood technically, as constituted by action
capacities that are activated fully independently of intention. Passive actions, like
reflexes, are fully automatic, yet are distinct from pure reflexes in being solutions
to the Selection Problem. Such passivity includes certain forms of mind wandering
and daydreaming but also pathologies of action, such as passivity phenomena in
schizophrenia, the hearing of voices or the experience of inserted thoughts (see
Cho and Wu 2013 for an overview of mechanisms of auditory hallucination).
1.6 The Biology of Intention-Based Biasing
Control in intention is revealed behaviorally, biologically, and computationally in biasing relevant input and output states in accord with the
content of the intention.
This section provides an empirical check on my a priori argument about the
structure of agency. Let us begin with two questions:
Control: How does the agent’s intention to act prioritize some inputs relative
to others and some outputs relative to others to yield appropriate coupling?
Guidance: How does the input inform the production of the response?
Guidance can be independent of control since automatic (passive) actions are
guided. It is instantiated whenever the subject’s taking things informs her
responding to a guiding feature. Guiding features explain why the response is
generated as it is. Control implicates a distinctive form of agency where the agent’s
intention biases a path by prioritizing specific input and output constituents. In
general, biasing prioritizes certain capacities for action. Psychology and neuroscience identify realizers of the action structure identified a priori.
Agents are inundated with information. Empirical theories of perception have long emphasized the need for selection in order to generate coherent
behavior in light of information overload (Broadbent 1958). For example, the
visual system faces information overload under normal viewing conditions and
selection is required to avoid informational meltdown. Yet even without excess
information, a Selection Problem remains. Action, not information overload, is
the fundamental constraint necessitating selection (Neumann 1987). Buridan’s ass
doesn’t face information overload but certainly must overcome a Selection
Problem, else death.
Consider a common case of intentionally investigating the world: looking
around. An exemplary study was conducted by Alfred Yarbus (1967; cf. Greene,
Liu, and Wolfe 2012). Yarbus presented his subjects with the same stimulus,
I. E. Repin’s painting of a homecoming scene, a man returning after a long
absence, surprising his family at meal. He assigned his subjects different tasks:
to remember aspects of the painting or make judgments about the story. To do so,
Figure 1.9 The stimulus in Yarbus’ experiment is I. E. Repin’s “Unexpected Visitor”
(A). Lines show scan paths of a subject’s saccadic eye movements. Task instructions are
as follows: (B) remember the clothes worn by the people; (C) remember the position of
people and objects in the room and (D) estimate how long the visitor has been away
from the family. Reprinted by permission from Springer Nature: Springer, Eye
Movements and Vision by Alfred Yarbus (1967, 174, fig. 109). This figure has been
adapted and reprinted from M. F. Land. 2006. “Eye Movements and the Control of
Action in Everyday Life.” Progress in Retinal and Eye Research 25, 296–324, with
permission from Elsevier.
subjects had to visually select relevant information, moving their eyes to fixate
relevant targets. We have visual inquiry.
When these movements were tracked over time, intelligible patterns emerged
given the task. When subjects were asked to remember the clothing worn by
people in the painting, their fixations centered around those figures; when asked to
remember the location of objects in the room, fixations ranged widely to various
objects; and when asked to estimate how long the father had been away, fixations
focused on faces to estimate emotional response. The eye moved to items needed
to guide task performance.
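The logic of Yarbus’ result — one stimulus, different tasks, different fixation patterns — can be sketched as a task-dependent re-weighting of the same stimulus regions. The regions and weights below are invented for illustration and do not reproduce Yarbus’ data; the sketch only shows how the content of the task (the intention) determines which region wins priority for fixation.

```python
# Same stimulus, different tasks: the task (intention) supplies weights
# that re-rank regions of the painting for fixation. All regions and
# weights are invented for illustration.

stimulus = {
    "faces":     {"emotion": 0.9, "clothing": 0.2, "layout": 0.1},
    "figures":   {"emotion": 0.3, "clothing": 0.9, "layout": 0.4},
    "furniture": {"emotion": 0.0, "clothing": 0.0, "layout": 0.9},
}

def fixation_ranking(task):
    """Rank regions by their relevance to the current task."""
    return sorted(stimulus, key=lambda region: stimulus[region][task], reverse=True)

assert fixation_ranking("clothing")[0] == "figures"   # remember the clothes worn
assert fixation_ranking("layout")[0] == "furniture"   # remember object positions
assert fixation_ranking("emotion")[0] == "faces"      # estimate time away
```

Toggling the task string while holding `stimulus` fixed mirrors toggling the instruction while holding the painting fixed.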
Scientists set the intentions of cooperative experimental subjects through
task instructions. Yarbus’ different instructions modulate intention, leading
to altered movements. When intention is set, response (eye movement) shifts
given a constant stimulus. This manipulation suggests that the content of the
intention—the task instruction that informs the subject’s plan—plays a causal
role in generating the observed response. Toggling intention by instruction
leads to task-relevant changes in behavior needed to appropriately solve the
Selection Problem. Similarly, in mundane life, we are instructed by others or in
our own case, “self-instruct” by deciding to act. This sets an intention that leads
to action.
Such experiments identify a behavioral correlate of the biasing role of intention
that we have postulated. Action requires solving the Selection Problem, and in
intentional action, the solution must be sensitive to the subject’s intention. Thus,
I postulated a causal dependence between the path selected and what the agent
intends. Yarbus’ experiment, indeed any behavioral experiment involving task
instructions, confirms this, showing how action couplings, here specific eye movements to visible targets, change over time to serve the agent’s intention (for further
work on task-relevant eye movement, see especially work from Michael Land,
Mary Hayhoe, and co-workers, e.g. Land, Mennie, and Rusted (1999); Hayhoe and
Rothkopf (2011); Hayhoe and Ballard (2014)).
Recall the question concerning control: given the agent’s intention, how are
inputs prioritized relative to other inputs and how are outputs prioritized relative
to other outputs to explain why a specific coupling arises? The behavioral work
shows that intentions yield behavior in conformity to their content. I have explicated this functional role in terms of intentions providing a bias to solve the
Selection Problem.
Neural biasing in the brain implements intentional biasing by the subject.
Consider an illustrative case: We can monitor visual system activity directly
during intentional performance of tasks. Leonardo Chelazzi et al. (1998, 2001)
examined visual processing in awake behaving monkeys. The animals were
trained to perform a delayed match to sample task. In this task, an animal holds
the eye at the fixation point. A cue identifying the target (here a flower) briefly
appears at fixation. The animal must remember the sample during a delay period,
a working memory task (Chapter 3). Subsequently, the animal was presented with
two test stimuli, at most one of which matched the sample. The subject must either
identify the match, making an eye movement to it, or maintain fixation if there is
no match.
Figure 1.10 This depicts a delayed match to sample task. The subject maintains
fixation while activity from a neuron is recorded. The neuron’s receptive field is
identified by the dotted circle. A target (the flower) appears at fixation and the subject
must remember it during the delay period, a working memory task. In the last panel,
two targets are presented in the neuron’s receptive field, and the animal must report
the match by moving the eye to it or, if there is no match, by keeping the eye at fixation.
This figure is modified and reproduced from Leonardo Chelazzi et al. 2001. “Responses
of Neurons in Macaque Area V4 during Memory-Guided Visual Search.” Cerebral
Cortex 11: 761–72, by permission of Oxford University Press.
The test array presents the animal with a Selection Problem similar in structure
to a Buridan Space. Solving the Problem depends on the animal’s intention to
perform the task and on correlated shifts in visual processing. Chelazzi et al.
examined activity in two visual areas: (1) V4, a mid-level area in the ventral visual
stream; and (2) in inferotemporal cortex, deeper in the ventral stream where
strong neural responses to objects are found. The ventral stream is a necessary
part of the neural basis of conscious seeing of object and form (Ungerleider and
Mishkin 1982). Lesions in this stream lead to visual agnosias, inabilities to see
form, faces, or objects (Farah 2004).
Visual neurons respond to specific parts of the visual field. The spatial receptive
field for a visual neuron corresponds to specific areas in the visual field relative to
fixation in which the neuron is responsive to stimuli. The visual neurons monitored by Chelazzi et al. are tuned toward certain stimuli in that preferred stimuli
induced a strong neural response, a high firing rate, the generation of many action
potentials (spikes) that carry information. In contrast, non-preferred stimuli
generate weaker responses. For our purposes, the signal that carries information
is the firing rate of the neuron (there are many neural codes; DeCharms and
Zador 2000).
What does task-relevant neural selection look like? For those not used to
thinking about neural activity, think of each neuron as a homunculus deploying
a signal. In the Chelazzi et al. experiment, the visual neurons in V4 or inferotemporal cortex are presented with two items in their receptive fields, only one of
which might be task relevant. The neurons must signal to other systems regarding
the task-relevant stimulus. This reproduces at the level of neural processing the
Selection Problem the animal faces in performing the task. How does the neuron
respond to the Problem? Consider the following data regarding neural activity
which maps the firing rate (spikes/sec) over time with time 0 being when the
stimuli are presented.
Figure 1.11 This figure shows the average response of 76 visual neurons under four
stimulus conditions. The y-axis gives the number of action potentials (spikes)
generated per second while the x-axis is the time relative to stimulus presentation at
t = 0 milliseconds (ms). The grey and black vertical bars on the x-axis indicate latency
of the eye movement to the target in the one and two target presentations, respectively.
The thin solid line shows response to the preferred stimulus (flower) when it is
presented alone. The thick solid line shows response when the preferred stimulus is
presented with a second, less preferred stimulus (mug; cf. thin dotted line for neural
response to just the mug). Note that the neural response is suppressed with two stimuli
as the peak of the thick solid line is lower than the peak of the thin solid line. Crucially,
following the thick solid line, at about 175 ms, the neural response begins to shift,
becoming more like the neural response to just the preferred stimulus alone (thin solid
line). The authors note that this is as if the receptive field is “contracting” around the
target stimulus. Gray circles around a stimulus in the two stimuli conditions, two
rightmost circles, identify it as the correct target. This figure is modified and
reproduced from Leonardo Chelazzi et al. 2001. “Responses of Neurons in Macaque
Area V4 during Memory-guided Visual Search.” Cerebral Cortex 11: 761–72, by
permission of Oxford University Press.
This neuron responds strongly to the flower, the preferred stimulus (high firing
rate, >40 spikes per second; thin solid line; stimuli are illustrative, not the actual
stimuli used). When a second object (a cup) is placed in the receptive field, this
object suppresses the neuron’s response to the flower as seen in the lower firing
rate represented by the thick solid line. Activity is lower than it would be had the
preferred stimulus appeared alone (at about 100 milliseconds (ms); dark line).
Suppression can be understood as resulting from competition between the two
stimuli for the neuron’s limited signaling resources, its spikes. The result is less
information regarding what object is in the receptive field. Uncertainty about
object identity increases.
Let’s start with the task where we present a sample to be remembered. The
animal must subsequently report whether the sample is matched in a subsequent
test array of two stimuli. If there is a match, the animal must move its eye to the
match in report. We present the animal with a flower which it commits to
memory. This fine-tunes the animal’s intention, from an intention to report a
match to an intention to report a match to this sample. Now, we test the animal
by providing it with two possible matches, one of which is the flower (presented
at time “0”), and begin monitoring neural activity. Focus on the darkest line in
the figure.
The presence of the two stimuli, one preferred and one not, leads to neural
competition and suppression of the neuron’s response to a lower level than if the
flower was presented alone. Given the animal’s intention to report a match to the
remembered flower, it must select the flower to guide an eye movement to it while
ignoring the distractor mug. Selection for action must be constituted by selection
at the neural implementation level. Follow the dark line over time. The suppressed
response eventually coincides with what the neuron’s response would be if only
the flower was in the receptive field (overlap of thick and thin solid lines just after
200 ms). As Chelazzi et al. note, it is as if the neuron’s receptive field has
contracted around the task-relevant object. Its response is seemingly driven only
by the task-relevant stimulus as competition is resolved in favor of that target.
Similar results are reported in fMRI in humans (e.g. Reddy, Kanwisher, and
VanRullen 2009). This selective processing is the neural correlate of biasing in
solving the Selection Problem at the psychological level: what the animal intends
makes a difference to behavioral and neural selection.
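The qualitative pattern in the Chelazzi et al. data can be captured by a toy weighted-average model of biased competition, a simplification often used in this literature. The firing rates and the bias weights below are illustrative, not values fit to the reported data:

```python
def pair_response(r_pref, r_nonpref, bias_pref=1.0, bias_nonpref=1.0):
    """Response to two stimuli in one receptive field, modeled as a
    weighted average of the responses to each stimulus alone.
    The bias weights stand in for top-down input from intention."""
    total = bias_pref + bias_nonpref
    return (bias_pref * r_pref + bias_nonpref * r_nonpref) / total

r_flower, r_mug = 45.0, 10.0   # illustrative firing rates (spikes/sec)

# Before the bias takes effect: equal weights, so the paired response is
# suppressed relative to the preferred stimulus alone (competition).
early = pair_response(r_flower, r_mug)  # 27.5 spikes/sec

# Once the remembered target biases the competition, the paired response
# approaches the flower-alone rate ("receptive field contracts").
late = pair_response(r_flower, r_mug, bias_pref=9.0, bias_nonpref=1.0)

print(f"flower alone: {r_flower}, pair early: {early}, pair late: {late:.1f}")
```

On this sketch, what intention contributes is simply a shift in the weights, which is all that is needed to move the suppressed paired response toward the response to the task-relevant stimulus alone.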
Task-relevant selection has been argued for a priori given the Selection
Problem. Solving that Problem is recapitulated in behavioral experiments like
Yarbus’ that must have a basis in neural processing. Such task-relevant shifting of
processing is conceptualized computationally as biased competition (Desimone
and Duncan 1995). At the subject level, competition is encapsulated by the
Selection Problem. At the neural level, task-relevant competition in the visual
system is exemplified by suppression of neural activity when multiple stimuli are
present. Talk of a neural bias is talk of that which shifts processing to generate
response. In the Chelazzi et al. experiment, this neural bias originates from the
neural basis of the animal’s intention to match a remembered sample as per the
trained task (for related work in humans, see Carlisle et al. 2011). In the Yarbus
experiment, bias emerges from the neural basis of remembering (intending) an
instructed task.
The structure at which we have arrived a priori is, as it must be, connected to
the behavior and biology of actual agentive systems (Miller and Cohen 2001; Cisek
and Kalaska 2010). Philosophical, psychological (behavioral), neural, and, as
discussed in the next section, algorithmic perspectives converge on the Selection
Problem. This suggests that we are seeing matters aright.
1.7 Intention-Based Biasing as Cognitive Integration
Biasing by intention involves the intention cognitively integrating with
inputs to solve the Selection Problem consistent with the intention’s
content.
I present a hypothesis about human agents in light of the structure of action and
the biology: Bias by intention involves cognitive integration with subject-level
capacities. The subject-level notion of bias from intention is constituted by neural
bias that resolves neural competition in solving the Selection Problem, linking
Yarbus’ behavioral results with Chelazzi et al.’s electrophysiological data. While
I have elsewhere discussed biased competition as cognitive penetration
(Wu 2017b), here, I dissociate integration from penetration. My discussion is
relevant to debates about the latter, but I set that issue aside. To mark this, I shift
terminology.
Integration explicates intention’s causal role in light of primate biology. It is an
algorithmic notion founded on informational or content exchange captured as
follows:
X is in an informational transaction with Y in respect of content C if in its
computations, X computes over C as provided by Y.
“Information” and “content” are used expansively to allow for different theoretical accounts of what influences computation. A compelling idea is that where the
information carried by a signal functions to guide behavior, that signal functions
as a representation (Millikan 1984; Dretske 1991).
I take the informational/content transactions between subject-level states as
founded on the transactions between their neural bases. Accordingly, in explicating intention’s functional role as biasing inputs, say seeing (visual taking),
I examined task-relevant shifts in visual processing due to shifts in what is
intended. An agent’s intention biases inputs such as her visually taking in a
target only if the neural basis of the intention stands in an informational
transaction with the neural basis of the subject’s visual takings. Accordingly,
visual processing shifts to serve the intention. In Chapters 3 and 4, I shall
discuss the neural basis of intention by leveraging research on working memory,
but here, I give a computational gloss. The Chelazzi et al. experiments point to
the ventral stream’s facing a neural version of the Selection Problem, one
resolved by its sensitivity to what the subject intends to do. They provide
evidence for the type of information transaction I am postulating. To state the
point strictly:
Cognitive Integration by Intention: If an information processing system that is
the basis of the agent’s intention to R contains information regarding the intention’s content and the system that is the basis of the agent’s input states computes
over this information to generate a task-relevant subject-level state S rather than
Sn, then the intention cognitively integrates with S (for example, in Figure 1.5,
S = S1, Sn = S2).
A mouthful, but the basic idea is that if the input system establishes a selective
orientation by computing over content from intention, then the input system is
integrated with cognition. The interaction is computational.
In the Chelazzi et al. experiment, the visual ventral stream responds to
information about the target of the subject’s intention, changing its processing
to select the intended target and exclude the distractor before contributing to
guiding the eye to the former (Milner and Goodale 2006 argue that the ventral
stream serves as a pointer for visuomotor computations in the dorsal visual
stream that informs visually guided movement). At some point, such selection
by the visual system is the basis of the subject’s being in a subject-level visual
state (S) tuned to the target rather than to the task-irrelevant distractor (Sn).
Cognitive integration of intention with visual takings is built on computational
exchange between their neural bases as postulated in biased competition. The
resulting shift in action space involves prioritizing action capacities needed to
execute the intention. Consequently, a coupling of biased input and output is
instantiated, the subject’s responding in light of how she takes things, given her
intention to act.
The task-relevant shift in neural processing has more recently been modeled
in terms of divisive normalization which can be understood as an instance of
biased competition (Reynolds and Heeger 2009; Lee and Maunsell 2009).
Divisive normalization has been dubbed a canonical neural computation
(Carandini and Heeger 2012). John Reynolds and David Heeger provided one
version:
Figure 1.12 The left box presents a fixation point and two stimuli. The stimulus on
the right of that box, a vertically oriented contrast patch, is the task target. The
response of neurons that respond to orientation are recorded and mapped along the
y-axis based on their tuning to vertical orientation in the dark boxes in the figure (these
are the bright vertical lines in each dark box). Brighter locations indicate stronger
neural response, so greater preference for vertical orientation. Note the output
population response that shows prioritization (stronger response) of neurons that
respond to the right stimulus after divisive normalization. A more complete
description is given in the text. Reprinted from John Reynolds and David Heeger.
2009. “The Normalization Model of Attention.” Neuron 61 (2): 168–85, with
permission from Elsevier.
Let’s start with an intuition of how we might expect processing to change when
the animal intends to act on a specific object. In this example, the animal
maintains fixation while two objects of vertical orientation appear, left and right
at L and R. Treat the right stimulus as the task-relevant target designated by a cue
to the subject. Prior to the cue, we expect the brain to be faced with a Buridan
Action Space in that there is no differentiating the two putative targets. Once one
target is identified as task relevant, however, the brain must solve the Selection
Problem by withdrawing from one object to deal effectively with the target. The
latter should be prioritized.
With intuition in hand, consider the computation depicted. The input stimulus
drive is a representation of responses among many visual neurons which have
receptive fields responsive to space marked along the x-axis (we look at just one
dimension for ease). That map depicts the activity of neurons that respond to
stimuli at positions L and R. The neurons represented as active have different
tuning to the stimuli. For some neurons, vertical orientations are preferred, and
they fire strongly (note bright center of each vertical line in the stimulus drive
map). Other neurons prefer non-vertical orientations, and their response scales
according to how different their preferred orientation is to vertical, with response
dropping as their preference moves further away from vertical (moving up and
down from the center of the vertical lines, noting that lines grow dimmer as one
does so, this indicating decreasing response). Thus, the y-axis of the stimulus drive
map depicts the varying activity of these neurons that have receptive fields that
respond to either L or R. Notice that we have a neural Buridan Space in that there
is no differentiating the stimuli based on level of response.
Now, the visual system computes over these representations. Note what is
effectively suppression, namely dividing neural response by what is also called
the normalization pool (Lee and Maunsell 2009), here called the suppressive drive.
We can treat normalization as an expression of competition. Second, the input is
multiplicatively scaled by a factor ascribed to attention, the attention field.
Effectively, this is a spotlight model of attention which I shall question in
Chapter 2. Consider instead what information the attention field carries. The
animal has been cued to the location of the task-relevant target, namely the
right stimulus and thereby knows, forms a specific intention, to act on that target.
The attention field thereby carries a signal regarding the task-relevant location.
This, I submit, is just the information of where the animal intends to act (see
Chapter 3 on sensory working memory). On that assumption, the visual system is
computing over a representation of the content of the animal’s spatial intention as
signaled in the attention field. In that respect, visual processing is integrated with
the subject’s intention since vision computes over this cognitive content (for more
discussion, see Wu 2013b, 2017b). The result is withdrawing from the task-irrelevant stimulus on the left in order to prioritize the task-relevant one on the
right, as seen in the shift in neural population response (output) indicating a
stronger signal for the target on the right (prioritizing) and a weaker response
for the target on the left (withdrawal). If the result is the activation of a visual
capacity of tuning toward the task-relevant object, the divisive normalization
model shows how intention integrates with visual capacities.
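The computation just described can be sketched in a few lines. This is a one-dimensional simplification of the Reynolds–Heeger model with a global normalization pool; the Gaussian attention field, stimulus widths, and the constant sigma_c are illustrative parameters, not values from their paper:

```python
import math

def gaussian(x, mu, width):
    return math.exp(-((x - mu) ** 2) / (2 * width ** 2))

def normalize(stimulus_drive, attention_field, sigma_c=0.1):
    """1-D divisive normalization: each unit's attended drive is divided
    by the pooled attended drive (suppressive drive) plus a constant."""
    excitatory = [e * a for e, a in zip(stimulus_drive, attention_field)]
    suppressive = sum(excitatory)  # simplified global normalization pool
    return [e / (suppressive + sigma_c) for e in excitatory]

positions = list(range(-10, 11))
# Two identical stimuli at L = -5 and R = +5: a neural Buridan Space.
drive = [gaussian(x, -5, 1.5) + gaussian(x, 5, 1.5) for x in positions]
# Attention field centered on the cued (intended) right stimulus.
attn = [1.0 + 2.0 * gaussian(x, 5, 2.0) for x in positions]

out = normalize(drive, attn)
left_peak, right_peak = out[positions.index(-5)], out[positions.index(5)]
print(f"left: {left_peak:.3f}  right: {right_peak:.3f}")  # right > left
```

With a flat attention field the two peaks of the output are identical, the neural Buridan Space; once the field carries the intended location, the identical inputs yield a prioritized response on the right and a withdrawn response on the left.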
The behavioral situation described is static since the hypothesized subject is
maintaining fixation. The eye, however, moves two to three times a second, so
input is constantly changing. As input changes, intentions keep time, so on the
postulated model, the attention field (better, the intention field) will dynamically
shift. If the agent updates her intention, this involves a basic form of practical
reasoning (fine-tuning; Chapter 4). Accordingly, there is a sense in which integration reveals intention as being active in setting attention over time. I return to this
idea of being active in Section 1.10.
Yarbus’ experiment demonstrated that eye movements are appropriately sensitive to the agent’s intention. Chelazzi et al. demonstrated that visual neural
processing is sensitive to the content of the agent’s intention. Reynolds and
Heeger show how this interaction between intention and vision can be understood
as cognitive integration. If this synthesis of a priori, behavioral, neural and
computational perspectives is correct, then we return to the sufficient condition:
intention cognitively integrates with visual takings because the neural basis of
the former institutes a shift in processing in the neural basis of the latter to
prioritize the task-relevant target, S rather than Sn. This biological shift constitutes the psychological shift captured in changes in action space that eventuate
in the intended action.⁵ In Section 3.5, I will consider an upshot of this shift
as the establishment of vigilance, a preparation to attend (on the neural
networks supporting this, see Corbetta and Shulman 2002; Corbetta, Patel,
and Shulman 2008).
1.8 Learning to Act and Shifting Control
In learning to act, acquiring the appropriate biases, a subject comes to
directly intend to act in shifting the balance of automaticity and
control.
Learning plays a central role in my account of automatically biased behavior and
deduction in Part III. Typical learning is the result of intentional action. As one
learns how to Φ, the balance of automaticity and control in Φ-ing shifts, paralleling a change in cognitive integration. Let X be a guiding feature, what one is
attuned to in the input that guides response. Given the structure of action, the
agent’s taking(X) guides her response R. Let there be an action Φ-ing on X
constituted by coupling the agent’s Taking(X) to response R:

Taking(X) → R
For example, one might reach for a glass of water X guided by one’s seeing it or
one might answer a question X by posing it to oneself. When the action is
performed, an action capacity is exercised, anchored on the subject’s taking in
the guiding feature. The appropriateness of a coupling, Taking(X) → R, is
measured against one’s intention, namely whether it satisfies what is intended.
Intention provides the standard of success whereby couplings are assessed. Where
the coupling satisfies the intention, R is appropriately informed by one’s taking X.
Learning transforms action spaces. At a given time, an agent either has an
action capacity constituted by the coupling type, Taking(X) → R, or she does not.
That is, the agent can or cannot intentionally Φ. If she cannot, the coupling is not
a possibility in the agent’s causal action space. To Φ, the corresponding ability
must be acquired through learning. If Φ-ing is something that the agent can learn
to do, it is in the agent’s behavior space, a behavior she can in principle perform.
Learning moves behavioral capacities into action space, thereby increasing the
agent’s action repertoire.
Figure 1.13 A behavior space, or at least a set of potential behavior couplings,
moves into the set of causal action spaces when an agent learns how to act in the way at
issue. This is depicted by the dot that moves from behavior to action space.
Now, the agent can Φ intentionally. If so, the capacity to Φ can be cognitively
integrated with intention.
Learning does not cease when one learns to intentionally Φ, for one’s Φ-ing is
sensitive to further practice. Changes in one’s ability to Φ are tied to changes in the
intentions needed to engage the capacity to Φ precisely because practicing Φ-ing is
motivated by the agent’s intention to do so. Practice leads to a change in the
agent’s intention vis-à-vis her capacity to Φ.
The agent’s intending to Φ is direct if it can cognitively integrate with the capacity
to Φ leading to her so acting without the need to fine-tune the intention.
In the normal case, when the agent intends to Φ at the present time, she simply
does so. When the intention is not direct, hence indirect, fine-tuning is needed for
the intention to integrate with action capacities. That is, the action intended must
be broken down into digestible parts, and the content of the intention is correspondingly sharpened, representing how to Φ by doing specific subactions.⁶
Consider the virtuoso violinist. She intends to play the violin, and she plays. No
further thought is required. She does not need to think through the basic steps.
Step back in time, however, when she was a child at her first lesson. Then, she
intended to play the violin, but her intention was indirect, for she did not know how
to play. More thought was required. Her teacher helped her by describing and
demonstrating subactions that, when put together, amount to playing the violin:
picking up the bow and instrument, holding the violin under one’s chin, putting
the bow on the string, drawing the bow across it, and so forth. This involves
sharpening how the agent takes things and how she responds. Some of the
subactions the teacher highlights are those that the child can directly intend to
do for she knows how: picking up an object, putting something under her chin. If
the subactions are too complicated, the teacher breaks things down further to
subparts that the student can practically understand and directly do. The teacher’s
instruction fine-tunes the student’s intention and also sets appropriate attention.
Learning involves joint practical reasoning and joint attention (cf. Section 6.4 on
learning symbolic logic).
The virtuoso learned through practice and instruction, and in doing so she
acquired both a capacity to act and, through intense practice, a skill. The shift
from indirect to direct intention correlates with a shift from control to automaticity for certain features of the action. At the beginning, the specification
of subactions is part of the agent’s conception of what it is to play the violin.
In intending to do those subactions, her intention directly engaged extant
action capacities present at the time of learning. Definitionally, these subparts,
things she could directly do, were controlled in being explicitly intended. This
allowed her to learn something more complicated based on things she already
knew how to do.
With increase in skill, intended subparts come to be performed automatically,
for the agent no longer needs to explicitly intend to do them. The agent simply
intends to play, now automatically doing the necessary subactions. In general, if
Φ-ing involves doing X, Y, and Z, then early in learning, one’s intending to Φ
cannot be direct as one has not yet learned how to Φ. Rather, one must fine-tune
the intention to Φ into an intention to Φ by doing X, Y, and Z where the latter are
subactions that can be directly done and are explicitly intended. Working memory
capacity limitations will constrain such learning, say in the student’s ability to
retain complicated instructions (the content of her intention; Chapter 3). After
much practice, one need only intend to Φ. Doing X, Y, and Z need not be explicitly
intended, so are then, by definition, automatic. No fine-tuning is needed (cf. the
hierarchical account of intentions, Elisabeth Pacherie 2008, Mylopoulos and
Pacherie 2019; cf. Brozzo 2017).⁷ The balance between control and automaticity
changes as skill and expertise are acquired. This exemplifies the gradualist
approach noted in Section 1.4. I shall return to this when discussing attentional
skill in medicine (Section 5.4) and in symbolic reasoning (Section 6.4).
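The shift from indirect to direct intention can be sketched as a recursive decomposition. This is purely illustrative; the action repertoire and the subaction names are invented for the example:

```python
# Sketch: an intention to do act is "direct" if act is in the agent's
# repertoire; otherwise it must be fine-tuned into known subactions.

how_to = {  # invented decomposition of a complex action into subparts
    "play_scale": ["hold_violin", "draw_bow"],
    "draw_bow": ["pick_up_bow", "bow_on_string", "pull_bow"],
}

def execute(act, repertoire, trace):
    if act in repertoire:          # direct intention: simply do it
        trace.append(act)
    else:                          # indirect: fine-tune into subactions
        for sub in how_to[act]:
            execute(sub, repertoire, trace)

# Early learner: only primitive subactions are directly doable, so each
# step must be explicitly intended (controlled).
novice = {"hold_violin", "pick_up_bow", "bow_on_string", "pull_bow"}
t1 = []
execute("play_scale", novice, t1)

# After practice, "play_scale" itself enters the repertoire: the steps
# still occur, but need not be explicitly intended (automatic).
expert = novice | {"play_scale"}
t2 = []
execute("play_scale", expert, t2)

print(len(t1), len(t2))  # 4 explicit steps for the novice, 1 for the expert
```

The model makes the gradualist point vivid: learning moves items into the repertoire, so the same performance requires less fine-tuning over time, and the balance tips from control toward automaticity.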
1.9 The Agent Must Be in Control in Action
As theories that identify actions in terms of their causes make the
subject qua agent a target of control, not in control, the agent’s causal
powers, tied to her intending and her taking things, must be internal,
not external, to her action.
The phenomena that constitute guidance and control must be attributed to the
agent as it is she who acts. Assume that in intentionally acting, neither control nor
guidance is attributed to the subject. That is, her behavior is not guided by how she
takes things or controlled by her intending to act. Perhaps these executive powers
are attributed to some part of the subject’s brain, an internal mechanism disconnected from her perspective. Thus, her response is guided by something that is not
her own taking things or is controlled by something that is not her intending to
act, even if the cause is part of her body.
In such cases, the subject qua agent is not in control. She is along for the ride,
subjected to guidance and control rather than being their source. Every property of
the resulting behavior will be automatic, so passive. It is difficult to cleanly divide the
subject level from levels below the subject, but clear cases suffice to make the point.
A paradigm passive behavior is the spinal cord reflex exemplified when one pulls
one’s hand away from a burning hot surface. The body moves, but the agent does not
move it, hence does not act. Rather, processes distinct from the agent, though part of
her, guide her response. The subject’s spinal cord takes over to guarantee the needed
response (recall our engineering ideal; Section 1.2). Yet to secure agency, the subject
must be the source of control and guidance, not subject to it as in many reflexes.
The problematic causal structure in reflex that removes control from the subject
is recapitulated by the standard causal theory of action. That theory conceives of
executive features such as control and guidance as external to the agent’s acting.
On the causal theory, intentional action is treated as an effect, the target of control
and guidance rooted in a source external to the agent’s doing things. This
externalist causal perspective emphasizes mental causes numerically distinct
from the agent’s movements, bodily or mental, as what makes those movements
into bona fide action. For example, a movement of the arm is an intentional action
when it is caused by an appropriate belief and desire (Davidson 1980a). Else it is a
mere movement (cf. Hornsby 1981 on transitive versus intransitive movements).
Yet as in reflex, the agent is rendered a patient, a target of an external influence
even if the cause is part of her as the spinal cord is part of her. In both cases,
something outside of the agent’s action guides and controls her movement. This
eliminates genuine agency. One is made to act. Accordingly, control and guidance
must be internal, not external, to action.
Davidson noted a similar “fundamental” confusion: “It is a mistake to think
that when I close the door of my own free will anyone normally causes me to do it,
even myself, or that any prior or other action of mine causes me to close the door”
(Davidson 1980b, 56). One might substitute “anything,” including spinal cord
mechanisms or external mental states, for “anyone.” Davidson noted correctly
that we are not the causes of our action. After all, as agents, our doing things is the
expression of our distinctive causal power. Yet a theory that reveals an event to be
an action because of how it is caused exemplifies the confusion Davidson identified. For if my beliefs and desires cause my action, make me close the door, then
we have reinstated the problematic causal structure. The fundamental problem is
that something deployed to explain agentive control is distinct from and directed
at the very thing that should be the source of control. Control begins within, not
outside, of intentional action.
Intentional agency constitutively involves control and guidance (Wu 2011a).
Accordingly, it is not just that control and guidance must be attributed to the
agent. It must be part of the agent’s acting. So, the controlling role of intention in
an agent’s action is internal, not external, to her acting. Similarly, the agent’s
attunement in how she takes things constitutes her guidance when appropriately
coupled to her response. Guidance must also be internal to the agent’s doing
things. She is not guided by something external to her doing something. Our
triangular motif (Figure 1.5) thus captures the structure of intentional action with
the agent’s intention, the source of control, and the agent’s taking things, the basis
of guidance, as constituents of action.
My focus on the internal structure of action bears affinities to work of Harry
Frankfurt (1978) and John Searle (1983). Frankfurt identifies guidance as a target
of explanation, and his terminology has been taken up by a number of philosophers. My use of "guidance" differs from Frankfurt's, for in Chapter 2 I explain
guidance as attention while Frankfurt emphasizes counterfactual features of
agency as characteristic of guidance, say the agent’s disposition to compensate
for perturbations during action (see Shepherd 2021 for a detailed account).
I suggest we restrict “guidance” as a technical notion to attention. Cognitive
scientists and philosophers speak of visual or memory guidance in action where
this points to visual or mnemonic contents informing response, say explaining
why one reaches or recalls as one does. As action theorists need that notion of
guidance too, Frankfurt’s terminology courts unnecessary theoretical ambiguity.
Empirical work has a term for the phenomenon Frankfurt has in mind by talk of
“guidance,” namely control. This is not to deny that scientists sometimes use
“guidance” as Frankfurt does. The point is conceptual regimentation to maintain a
technical way of speaking. In many of the contexts in question, scientists use
“guidance” in an informal way which can be replaced with “control” (e.g. Banich
2009's discussion of executive (control) functions as guiding behavior).
Earlier (Section 1.4), I noted different conceptions of control, those analytically
tied to automaticity and those not. The counterfactually characterized form of
control tied to an ability to deal with perturbation, much discussed by
philosophers of action since Frankfurt, is, from the agent’s point of view, an
automaticity. Such quick responsiveness to change is part of agentive skill, refined
by sustained practice, and differences in the ability to respond to perturbations
reflect levels of skill, say a professional tennis player versus a novice adjusting to a
ball knocked suddenly off course by a divot in the court (cf. eye movements in
medicine; Section 5.5).
The fine-tuned movements an agent makes can certainly involve control at a
subpersonal level. A compelling empirical idea is that the motor system predicts
the consequences of a commanded movement and compares these predictions
with the actual consequences as perceived. This on-line comparison allows the
system to make adjustments as the agent moves (see the concepts of forward
models and comparators; Wolpert and Ghahramani 2000). Such processing
is a control computation executed by the motor system, but this subpersonal
phenomenon, a feature of part of the agent, realizes an agentive automaticity
in movement. Granted, in ordinary speech we say the tennis player exhibited
exquisite control in making sudden adjustments, but this is speaking non-technically. We could just as well say that the tennis player exhibited exquisite
skill or exquisite spontaneity (automaticity). If there is control here, used in a
technical sense, it is motor control that realizes agentive automaticity, a
subject-level skill exercised exquisitely. The rapid response to perturbation is
something the agent need not think about, intend to do, precisely because she
is skilled. What this means is that we have (at least) two distinct conceptions
of control in cognitive science, one in motor control tied to forward models
and comparators for which there is no correlated notion of automaticity, and
one in philosophical psychology regarding agentive control and its contrast,
automaticity.
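The forward-model idea can be made concrete with a toy simulation. The sketch below is purely illustrative, not Wolpert and Ghahramani's actual model: the dynamics, gain, and perturbation values are all invented. A command is issued, a forward model predicts its consequence, and a comparator's error signal drives online adjustment without any person-level intervention.

```python
# Toy forward-model/comparator loop for a one-dimensional reach.
# A perturbation at t=5 knocks the arm off course; the comparator's
# error signal drives automatic correction back toward the target.

def simulate_reach(target=10.0, steps=20, gain=0.5):
    position = 0.0
    trajectory = []
    for t in range(steps):
        command = gain * (target - position)   # motor command toward target
        predicted = position + command         # forward model's prediction
        actual = position + command            # what the arm actually does...
        if t == 5:
            actual -= 2.0                      # ...unless perturbed
        error = predicted - actual             # comparator: prediction vs. feedback
        position = actual + gain * error       # online adjustment from the error
        trajectory.append(position)
    return trajectory

traj = simulate_reach()
# Despite the perturbation, the reach still converges on the target.
assert abs(traj[-1] - 10.0) < 0.01
```

The point the simulation dramatizes is the one made above: the correction runs entirely within the motor loop, a subpersonal control computation that, at the agent's level, realizes automaticity.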
Irrespective of terminology, Frankfurt leaves out an adequate account of guidance in the sense I will explicate as attention (see related issues regarding causal
deviance; Section 2.10). To explain his notion of guidance, Frankfurt discusses a
driver who lets his car coast down the hill. The driver guides his action in the
Frankfurtian sense even though he does not move his body because the agent
would respond appropriately were obstacles to suddenly appear. Yet what is
missing is an account of guidance tied to the empirical sense of that notion. The
agent is guiding his action in that his response is continually informed by how he is
actually taking things: his perception of the speed, direction, and the absence of
obstacles on the road. He attends to all these features to inform his response, here
keeping his hands lightly pressed on the steering wheel because the parameters
he takes in are appropriate to simply coasting. His guidance, as I would say, is
not in his being watchful for possible obstacles (cf. vigilance; Section 3.6), but in
actually watching how the coasting unfolds. The agent’s taking things actively
informs his maintaining his current bodily state during coasting. Guidance is
through attention.
1.10 The Agent’s Being Active
The mental elements of the agent’s action identify the agent’s being
active though short of acting, for her being active partly constitutes her
acting.
Critics of the standard story of action argue that it makes the agent disappear. If
one’s theory fails to recover action, then the agent is not present. Davidson noted that
mental states, not being occurrences, cannot play the right causal role in explaining
action, namely as efficient cause, so he appealed to events such as the onslaught of a
desire (Davidson 1980a, 12). Helen Steward (2012b, 2016) and Jennifer Hornsby
(2004) have argued against the centrality of events in the ontology of action
(cf. Steward’s emphasis on processes; see also Stout 1996). To capture action’s constituent structure, the constituents have to be more than static states and mere happenings. This more cannot, however, itself be an action, on pain of regress or circularity.
When an intentional action occurs, each of the constituents of the structure is
put in motion. I am inclined to treat the expression of action-relevant capacities,
say perceptual capacities, as activities of the agent that are not themselves actions.
The metaphysics of activity and of processes have been actively discussed in the
philosophical literature, so to be clear, my goal is not to build on that work. Rather,
I aim to examine the relevant idea of activity biologically to gain a different
perspective on action and its constituents. What I draw from the discussion of
the metaphysics of action is that states, being static, are not sufficient to recover the
dynamics of agency, and events, construed as concrete particulars (Davidson
1970) or as facts (Kim 1993), also do not evince the requisite dynamics (see
Steward 1997 and Bennett 1988 on events). The alternative to these categories is
the idea of a process in which an agent partakes (Steward 2012b).
My approach is also to take up a third way between events and states, threading
the needle by drawing on biology, broadly construed. Action on my account is the
amalgam of the active component parts. Conceptually, we began unpacking this
amalgam via control (Sections 1.5–1.8). Another part, guidance, will be explicated
in Chapter 2 as attention. In the former case, control was grounded biologically
in cognitive integration. Accordingly, in the case of intention, amalgamation
involves integration, a dynamic process that must modulate action over time.
The agent’s intending to act shifts processing to allow for a type of selectivity in
how the agent takes things where this integration must be as dynamic as action
requires. Chapters 3 and 4 further explore intention’s activity.
Let us make an initial start on guidance rooted in the agent’s perceptual
taking, say her perceiving the targets of action. The traditional paradigms in
philosophy of action are perceptually guided movements. In explicating these,
the standard theory appeals to beliefs and desires but is silent on perception.
When philosophers speak of an agent drinking a glass of gin (Williams 1981), they
mention only a desire to drink gin and a belief that drinking that gin will satisfy
the desire. So, the agent reaches for the gin. Yet for all that the standard story says,
the agent could be doing so with her eyes closed. The complicated but essential
phenomenon of perceptual guidance is left out.
The agent’s reaching for a glass that she sees involves her seeing it guiding her
reach, providing essential spatial content. Her seeing is not a static state, for its
content, hence its guiding role, changes as the agent moves. The agent’s seeing is
not aptly theorized as an event, for the issue is not that the experience happens at a
place and time, every one of its features fixed to individuate a spatiotemporal
particular whose temporal boundaries must be set to understand its causal role.
Appeal to events would not illuminate the temporal dynamics of action, for seeing
plays a temporally extended guiding role, providing new information to the
subject as she completes her intended action. Seeing is active in action.
In guiding action, seeing is the agent’s activity of attending (see figures in
Section 2.1). That is, the agent’s taking things provides a constant input to inform
response. This continual guidance is what I mean to capture by talk of activity. In
any event, this section announces a thesis to be unpacked in the next three
chapters with emphasis on a biological perspective. The larger question of how
to stitch together the biological conception of being active to the metaphysical
conception of activity must be left for another time.
1.11 Taking Stock
An agent’s acting intentionally has a psychological structure: it is the agent’s
responding, guided by her taking things, given her intending to act. The necessity
of this structure derives from the contrast between action and reflex and points to
the Selection Problem necessarily faced by all agents. Paradigmatically, the agent
solves this Problem by acting with an intention, and the action capacities exercised
in intentional action are cognitively integrated with the agent’s intending. Control
in the agent’s intending to act leads to defining the division between automaticity
and control. Agents also guide, and in the next chapter, I explicate guidance in
action as the subject’s attention.
Appendix 1.1
A Reflection: Automaticity and Control in Parallel
Distributed Processing
In cognitive science, a standard paradigm to probe automaticity is the Stroop task: color
words are presented in different colors and subjects are tasked with reporting the font color.
In the congruent condition, the color word matches the font color, say the word “red”
printed in red. In the incongruent condition, the color word does not match the font color,
so “red” printed in green. Reporting the color of the word is harder in the incongruent
condition. A common explanation is that word reading is automatic and must be suppressed to enable naming in the incongruent condition. While one is tasked to report the
color the word is printed in, one is strongly inclined to read the printed word. In my
terminology, the controlled color-naming action is interfered with by the automaticity of
word processing.
Jonathan Cohen and co-workers constructed a parallel distributed processing (PDP)
network to model Stroop behavior (Cohen, Dunbar, and McClelland 1990). Their network
involves nodes constituting the input layer, specifically nodes responding to either colors or
to color words, and nodes corresponding to an output layer linked to responses, specifically
nodes corresponding to naming colors/words. In their initial implementation, Cohen et al.
picked two color word inputs, “red” and “green”, and the two corresponding color inputs
(thus, four total input nodes for color words and the colors thereby named). Outputs were
production of the words (utterances of “red” or “green”). A single intermediate, hidden,
layer was also part of the network. Weights assigned to connections between nodes identify
the strength of a given pathway, a bias, and determine speed and performance by the
network. Finally, a node representing task was connected to intermediate layers. Notice
that this is a computational reflection of the Selection Problem for this version of the
Stroop task.
Cohen et al. used this network to model the behavioral results observed in standard
Stroop tasks. That is, under task conditions analogous to those in the human
experiments, their network yielded similar performance. They note: "two processes that
use qualitatively identical mechanisms and differ only in their strength can exhibit
differences in speed of processing and a pattern of interference effects that make the
processes look as though one is automatic and the other is controlled” (334). This poses a
challenge to the standard categorization of a process as automatic or controlled, based on
the following common inference: For two processes, A and C, “if A is faster than C, and if
A interferes with C but C does not interfere with A, then A is automatic and C is
controlled” (333). The rule, however, can show that A is automatic in one context relative
to C as controlled and that A is controlled in another context where C is automatic. This
echoes our paradox about categorizing processes as either automatic or controlled.
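Their central result can be conveyed with a deliberately minimal sketch. This is my toy reduction, not Cohen et al.'s published network (their model uses trained connection weights, cascaded activation, and noise; every number below is invented): two pathways that differ only in strength feed common response units, a task unit adds bias, and response time is modeled as the number of cycles needed to reach threshold.

```python
import math

# Toy reduction of the Cohen, Dunbar & McClelland idea: two pathways
# differing only in connection strength, plus a task unit that adds bias.
# All parameters are invented for illustration.

WORD_STRENGTH = 1.0    # prepotent word-reading pathway
COLOR_STRENGTH = 0.5   # weaker color-naming pathway
TASK_BIAS = 0.75       # extra gain contributed by the active task unit

def cycles_to_respond(task, congruent, threshold=5.0):
    """Cycles for the correct response unit to reach threshold."""
    color_gain = COLOR_STRENGTH + (TASK_BIAS if task == "name color" else 0)
    word_gain = WORD_STRENGTH + (TASK_BIAS if task == "read word" else 0)
    own = color_gain if task == "name color" else word_gain
    other = word_gain if task == "name color" else color_gain
    # Congruent input supports the same response unit; incongruent input
    # feeds the competing unit and so subtracts from the net evidence.
    net = own + other if congruent else own - other
    return math.ceil(threshold / net)

assert cycles_to_respond("read word", congruent=False) == 4    # mild interference
assert cycles_to_respond("name color", congruent=False) == 20  # strong interference
```

Even in this caricature the classic asymmetry emerges: the biased but weak color pathway wins only slowly against the prepotent word pathway, while word reading is barely slowed by incongruent color, and "automatic" versus "controlled" falls out of mere differences in strength and bias.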
Indeed, later, they give the following description which fits the perspective argued for
in this chapter nicely:
given the task of naming the color [that a word is printed in], a person can exercise
control by responding “red” to such a stimulus. In the model, this is achieved by
activating the color-naming unit in the task layer of the network. This unit sends
additional biasing activity to the intermediate units in the color pathway, so that they
are more responsive to their inputs. In this way, the model can selectively “attend” to
the color dimension and respond accordingly. It is worth emphasizing that the
increase in responsivity of the intermediate units is achieved simply by the additional
top–down input provided by the task unit . . . it does not require any special or
qualitatively distinct apparatus [cf. a spotlight of attention]. The key observation is
that attention, and corresponding control of behavior, emerges from the activation of a
Figure A.1 This figure depicts a connectionist network modeling processing during
the Stroop task described in the text. Two types of inputs are possible, one concerning
actual colors (red and green), and another concerning color words (“red” and “green”).
Outputs are verbal reports expressing “red” and “green” which, depending on the task,
can be repetition of the input word or report of the word color. Connections between
nodes are assigned weights, in this case, a stronger weight (darker lines) from word
input to output. Task representations provide a bias that increases the strength of the
appropriate connections. In the standard Stroop task, reporting color is the task
representation that must increase the weight of the weaker color-input-to-report
connections (left side connections) in order to overcome the strong prepotent
weighting linking word to report (right side connections). This figure is redrawn from a
figure in Matthew M. Botvinick and Jonathan D. Cohen. 2014. “The Computational
and Neural Basis of Cognitive Control: Charted Territory and New Frontiers.”
Cognitive Science 38 (6): 1249–85.
representation of the task to be performed and its influence on units that implement
the set of possible mappings from stimuli to responses. (1254; cf. Desimone and
Duncan 1995, 194, quoted in Section 2.4)
Readers might come back to this quote after reading Section 2.4. See also discussion of the
antisaccade task in Section 3.6.
Cohen et al. deploy a representation of the task to bias processing. In the geometry of
action described in Chapter 1, this task representation corresponds functionally to intention, where biasing sets selective processing for the task, which I shall argue is attention. They
note: “The role of attention in the model is to select one of two competing processes on the
basis of the task instructions. For this to occur, one of two task demand specifications
must be provided as input to the model: ‘respond to color’ or ‘respond to word’ ” (338).
Since attention is not a cause in the sense attributed (Section 2.4), replace “attention” with
“intention” and “to select” with “to bias” in their quote, and we have the correct answer (see
also Miller and Cohen 2001).
Automaticity is tied to control via the Simple Connection. The theory of psychological
control, as tied to discussion of executive function, is broader than, though includes, the
form of control at issue in this book: acting as one intends. For example, an important
element in the study of executive control in cognitive science is probed through task
switching paradigms. These paradigms require subjects to shift which intention is active
in bringing about action (Monsell 2003). Shifting plans is an important part of ordinary
agency but task switching presupposes the ability to act as one intended, the basic
phenomenon which we are trying to understand. Accordingly, I acknowledge other ways
we might theorize about control in agency, but those conversations will be predicated on
assuming that the agent can act in the basic sense, say perform a task. This book focuses on
clarifying that basic sense.
Further layers of control on intention build on basic control. To capture higher-order
control, we would have to add to the PDP model in Figure A.1 additional nodes that
regulate the task structures, say higher-order goal representations. Consider the Wisconsin
Card Sorting Test where subjects have to infer a rule used to sort cards and then, when the
experimenter changes the rules without warning or stating the new rule, the subjects have to
recognize the change and infer the new rule. This involves inference, monitoring of current
behavior, and task switching. The full theory of human agency will have to incorporate
actions of this kind and integrate them in the overall theory of control, but these are more
complicated cases again built on the basic ability to act with an intention.
A conceptual point does arise, for I have defined control relative to intention, so for the
actions noted to come out as part of agentive control, there must be a corresponding
intention. This is plausible for many cases. In many task-switching paradigms, subjects are
told that there will be a task switch in the experiment where the switch is either cued or
must be inferred. In this context, it is clear that the agent forms a plan to be receptive to the
cue or to recognize signs that the task is switched, say through feedback on performance.
Here, an intention regulates task switching. That said, task switching can be automatic.
Notably, in cases of expertise, an agent can respond to environmental conditions by
delaying one task and switching to another because she knows how best to respond in
certain unstable conditions. There need not be an intention to switch, just the expression of
the agent’s expertise acquired through learning (Chapters 5 and 6).
Notes
1. Disambiguating “Reflex”: The idea of a pure reflex is a useful fiction. No actual reflexes
are pure since all actual reflexes can fail. A pure reflex is an idealization at the engineer’s
limit that distills the contrast with action. Hence, pure reflex is a term of art. It is no
objection to the first premise to note that there are reflexes that can fail as these are not
pure (see my 2018 response to Jennings and Nanay 2016). Note that automaticity is
not the basis of the contrast with action even if standard reflexes are, by definition
(Section 1.4), fully automatic, hence passive. But actions can also be automatic.
Often, when we talk loosely about acting on reflex, we really mean acting automatically.
I use pure reflex and automatic as technical terms.
2. The Simple Connection and System 1 / System 2 thinking: The paradox undercuts a
common way of dividing mental processes, often dubbed System 1 and System 2
thinking (Evans and Stanovich 2013). Paul Boghossian (2014) quotes Daniel
Kahneman’s characterization:
System 1 operates automatically and quickly, with little or no effort and no sense of
voluntary control.
System 2 allocates attention to the effortful mental activities that demand it,
including complex computations. The operations of System 2 are often associated
with the subjective experience of agency, choice, and concentration.
(Kahneman 2011, 20–1)
Kahneman draws on the Simple Connection. Boghossian suggests that to capture
normal cases of inferring, we need something in between, what he calls System 1.5
thinking: “It resembles System 2 thinking in that it is a person-level, conscious, voluntary mental action; it resembles System 1 in that it is quick, relatively automatic and not
particularly demanding on the resources of attention” (3). This is just to affirm (1) and
(2) in our triad for the mental action of inferring. Reasoning and inference, as intentional actions, exemplify control and automaticity. Thus, the paradox arises for System 1
and System 2 accounts of reasoning. Chapter 6 discusses inference as mental action,
specifically as the deployment of attention.
3. Four Objections to the Analysis: Ellen Fridland (2017) raises four questions
regarding this account. The first is this: How does one motivate choosing intention as the
basis for analyzing control rather than consciousness or any of the other features
typically connected with control? My answer is that intention’s link to control is fixed
by the first proposition: intentional action exemplifies the agent’s control. The paradox
and its solution motivate my analysis.
Second, Fridland argues that what I intend to do can be automatic, yet on my view,
the action must be controlled. This generates a contradiction. Fridland considers a
pianist imagining playing a piece in a particular way, so intending to play just like that
(as she imagines). On my view, playing like that is controlled. Fridland asserts,
however, that the expert plays just like that automatically. Applying the Simple
Connection yields a contradiction: playing like that is both automatic and controlled,
yet on my view, this cannot be. The argument, however, equivocates on "that". The first
sense in which the agent plays like that is that the agent plays as imagined in her
intention. Yet the concrete action that is her playing like that (second sense) has
specific parameters that she did not imagine. It is a concrete phenomenon with many
features the agent did not intend. So that way she actually played (second sense), the
performance we observe, is an instance of that (type) of way that she intended
(first sense). In the argument, “that” has different referents: that way I intend to
play, a type of action even if highly specified, and that way I actually play, a concrete
instance of the type intended. Even in the most specific intentions of finite minds,
the actual action that is performed must always be at a finer (determinate) grain. We
cannot, after all, imagine every parameter of an action, so even in Fridland’s case, the
way the pianist imagines playing (playing like that) can be multiply instantiated.
Third, Fridland notes that some intentions are automatic. I agree. Indeed, forming an
intention, like forming a belief, is an automatic aspect of an action of settling the mind,
the achievement of intentional reasoning. Acknowledging this, however, does not yield
explanatory circularity or contradiction, and Fridland does not demonstrate that it does.
Finally she notes that “being uncontrolled or uncontrollable is hardly a universal
property of automatic processes” (4345). I agree, and again, intentional action illustrates
the point. The objection, however, also changes the subject since my analysis explicitly
denies that we should divide processes as automatic or controlled. Many processes that
have automatic features are also controlled, like skilled action. Indeed, Fridland and I,
who are allied on many matters regarding skill, agree on this last point, so that’s a good
place to stop.
4. Alief as affective bias: Consider Tamar Gendler’s concept of alief as an explanation of
emotion- or affect-driven actions (Gendler 2008b, 2008a). Gendler characterizes aliefs as
follows:
A paradigmatic alief is a mental state with associatively linked content that is
representational, affective and behavioral, and that is activated—consciously or
nonconsciously—by features of the subject’s internal or ambient environment.
Aliefs may be either occurrent [activated] or dispositional (642)
. . . activated alief has three sorts of components: (a) the representation of some
object or concept or situation or circumstance, perhaps propositionally, perhaps
nonpropositionally, perhaps conceptually, perhaps nonconceptually; (b) the experience of some affective or emotional state; (c) the readying of some motor
routine. (643)
We can subsume Gendler’s proposal in the structure of action: there is an input state,
Gendler’s (a), an output state, Gendler’s (c), and a source of bias, Gendler’s (b). When
Gendler speaks of alief as occurrent or activated, this must be activation that is short
of action, for Gendler intends alief to explain action. The activation of alief must then
be a readiness to act. In the occurrent case, what we have is an emotional state that
readies attention and response but does not yet yield coupling (action). Affect biases
the action space and on the input side, it establishes a propensity to attend: vigilance
(Sections 2.1 and 3.5).
Accordingly, aliefs can be understood as a type of arational action through a shared
action structure (Figure 1.7). For another conative source that biases action, see also
desires in the directed-attention sense (Scanlon 1998, ch. 1; for a different approach in
the spirit of Gendler's account, see Brownstein and Madva 2012b, 2012a).
5. A debate about bias and translation in integration: Dan Burnston (2017a) has criticized
an earlier presentation of this approach (Wu 2013b). Burnston also appeals to the
notion of biasing, but as he conceives of it, biasing does not involve transferring content.
Rather, cognition changes the probabilities that certain perceptual processes rather
than others will be initiated and maintained (see also Burnston 2021; cf. Wu 2008,
1010–11). He reasons that the content of an intention is too general relative to the
computational processes it is said to penetrate. So, an intention to grab a cup leaves
unspecified the movement’s specific parameters (Burnston 2017b). Some related issues
have been discussed in terms of the interface problem, roughly how intention engages the
correct, finely tuned output capacities (Butterfill and Sinigaglia 2014).
Since concepts are not sufficiently fine-grained to specify the precise motor movement parameters instantiated, something else must determine these movements
in accordance with the intention. Accordingly, the bias by intention potentiates not a
single movement capacity but a set of relevant capacities. The resulting parameters of
movement will be automatic, speaking technically. Still, in potentiating a set of motor
representations, we can treat this as the motor system operating over the content of the
intention by translation (Wu 2013b). To anthropomorphize, the way for the motor
system to deal with the content of the intention is by activating the set of specific
movement capacities the execution of which would satisfy the intention. As we noted
(Section 1.4), there are many such movements, these being necessarily automatic.
Given the architecture of the motor system, we can think of this set, for heuristic
purposes, as a disjunction of movements within the vocabulary of the motor system such
that where the intention speaks of a movement type X, the motor system treats this as
concrete movements it can generate: X1, X2, X3 . . . or Xn (the reader can insert their
favorite format for motor representations of movement possibilities). We can treat these
movements as the motor system's expression, in its representational scheme, of the
intention's content. It is in that narrow sense a translation, though there need be no
mechanism of translation, only an association, likely established by learning and coactivation of conceptual and motor representations during practice, that links conceptual and motor representations. Subsequent machinery needed to generate movement
operates over this content to settle on one motor type (cf. Wu 2008 p. 1010ff). This
satisfies the condition on cognitive integration.
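The heuristic picture can be put schematically. In the toy sketch below (all names and parameters are invented for illustration), the intention's conceptual content indexes a learned set of concrete motor options, the "disjunction" X1, X2, ... Xn, and downstream machinery settles on one of them automatically.

```python
# Schematic illustration of intention-to-motor "translation" as expansion
# into a learned set of concrete movement options. All values are invented.

# Learned association: a conceptual movement type maps to the concrete
# motor parameterizations (grip aperture in cm, approach angle in degrees)
# any one of which would satisfy the intention.
MOTOR_VOCABULARY = {
    "grasp cup": [
        {"aperture": 7.0, "angle": 0},    # X1: overhand grip
        {"aperture": 7.5, "angle": 45},   # X2: angled grip
        {"aperture": 6.5, "angle": 90},   # X3: handle grip
    ],
}

def settle(movement_type, context):
    """Downstream machinery picks one concrete movement from the set."""
    options = MOTOR_VOCABULARY[movement_type]   # the "disjunction" X1..Xn
    # e.g. prefer the option closest to the cup's current orientation
    return min(options, key=lambda x: abs(x["angle"] - context["cup_angle"]))

chosen = settle("grasp cup", {"cup_angle": 80})
assert chosen["angle"] == 90  # the handle grip best fits this context
```

Whichever option is selected satisfies the intention's type; which one is selected is settled by subsequent machinery, and is in that sense automatic.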
6. Hornsby on directness and basicness: Jennifer Hornsby discusses similar ideas that she
ties to an idea of basic action. In her (2013), she discusses intentional activity when we do
things “just like that”:
Practical reasoning terminates in things that one needs no knowledge of the means
to do. And that takes us back to basics. The knowledge a person has at a particular
time equips her to be doing whatever she is doing then. So at any time she must be
doing something she can then be doing without recourse to further knowledge:
something she can then be doing directly, ‘just like that’. Thus on-going (intentional) activity will always be of some basic type. (16)
In the text, I speak of direct intentions that represent actions that one can directly do
and, in Chapter 4, I will discuss intending, one’s thinking in time with one’s action,
which I think connects with another idea of Hornsby’s that the agent acting “is at every
stage a thinking agent” (16). On actions as activity, see also Hornsby (2012). Helen
Steward’s work has also been influential (on actions, activity, and processes; see especially Steward 2012b). I should note that when I speak of being active in action, in
Section 1.10, I do not describe an agent’s acting as Hornsby does but rather, for example,
describe an aspect of the agent’s taking things when she acts, where this taking guides her
response. Such guidance is not itself an action. Indeed, it is attention, a necessary
component of action (Chapter 2).
7. Learning through expansion: Consider a case discussed by Katherine Hawley (2003):
one way to escape an avalanche (apparently) is to make swimming motions. Thus, a
person who knows how to swim knows a way to escape avalanches. Yet she might not be
able to do this directly by simply intending to escape the avalanche as snow washes over
her. But if she were instructed by a knowledgeable friend beside her, right before both are
swept away, that swimming motions are effective in escaping an avalanche, she could
intentionally escape by intending to make swimming motions and acting in that way.
This is a case where learning to act does not involve breaking the targeted action down
into subactions but rather recognizing that one action F can be done by doing G, where the
latter is a way of doing the former. Still, this is a case where I can F directly only by
learning that G is a way of F-ing, where I can, fortunately, directly G when I intend to. So,
learning again fills a gap. I am especially grateful to Katherine Hawley for generously
taking the time in the autumn of 2020 to discuss with me issues where our work
intersected.