Potential evidence for ESSAYS / Articles

The Chronicle Review
March 18, 2012
You Don't Have Free Will
By Jerry A. Coyne
The term "free will" has so many diverse connotations that I'm obliged to define it before I
explain why we don't have it. I construe free will the way I think most people do: At the
moment when you have to decide among alternatives, you have free will if you could have
chosen otherwise. To put it more technically, if you could rerun the tape of your life up to
the moment you make a choice, with every aspect of the universe configured identically,
free will means that your choice could have been different.
Although we can't really rerun that tape, this sort of free will is ruled out, simply and
decisively, by the laws of physics. Your brain and body, the vehicles that make "choices," are
composed of molecules, and the arrangement of those molecules is entirely determined by
your genes and your environment. Your decisions result from molecular-based electrical
impulses and chemical substances transmitted from one brain cell to another. These
molecules must obey the laws of physics, so the outputs of our brain—our "choices"—are
dictated by those laws. (It's possible, though improbable, that the indeterminacy of quantum
physics may tweak behavior a bit, but such random effects can't be part of free will.) And
deliberating about your choices in advance doesn't help matters, for that deliberation also
reflects brain activity that must obey physical laws.
To assert that we can freely choose among alternatives is to claim, then, that we can
somehow step outside the physical structure of our brain and change its workings. That is
impossible. Like the output of a programmed computer, only one choice is ever physically
possible: the one you made. As such, the burden of proof rests on those who argue that
we can make alternative choices, for that's a claim that our brains, unique among all forms
of matter, are exempt from the laws of physics by a spooky, nonphysical "will" that can
redirect our own molecules.
My claim that free will as defined above is an illusion leads to a prediction: Our sense of
controlling our actions might sometimes be decoupled from those actions themselves.
Recent experiments in cognitive science show that some deliberate acts occur before they
reach our consciousness (typing or driving, for example), while in other cases, brain scans
can predict our choices several seconds before we're conscious of having made them.
Additionally, stimulation of the brain, or clever psychological experiments, can significantly
increase or decrease our sense of control over our choices.
So what are the consequences of realizing that physical determinism negates our ability to
choose freely? Well, nihilism is not an option: We humans are so constituted, through
evolution or otherwise, to believe that we can choose. What is seriously affected is our idea
of moral responsibility, which should be discarded along with the idea of free will. If whether
we act well or badly is predetermined rather than a real choice, then there is no moral
responsibility—only actions that hurt or help others. That realization shouldn't seriously
change the way we punish or reward people, because we still need to protect society from
criminals, and observing punishment or reward can alter the brains of others, acting as a
deterrent or stimulus. What we should discard is the idea of punishment as retribution,
which rests on the false notion that people can choose to do wrong.
The absence of real choice also has implications for religion. Many sects of Christianity, for
example, grant salvation only to those who freely choose Jesus as their savior. And some
theologians explain human evil as an unavoidable byproduct of God's gift of free will. If free
will goes, so do those beliefs. But of course religion won't relinquish those ideas, for such
important dogma is immune to scientific advances.
Finally, on the lighter side, knowing that we don't have free will can perhaps temper our
sense of regret or self-recrimination, since we never had real choices in our past. No, we
couldn't have had that V8, and Robert Frost couldn't have taken the other road.
Although science strongly suggests that free will of the sort I defined doesn't exist, this view
is unpopular because it contradicts our powerful feeling that we make real choices. In
response, some philosophers—most of them determinists who agree with me that our
decisions are preordained—have redefined free will in ways that allow us to have it. I see
most of these definitions as face-saving devices designed to prop up our feeling of
autonomy. To eliminate the confusion produced by multiple and contradictory concepts of
free will, I propose that we reject the term entirely and adopt the suggestion of the cognitive
scientist Marvin Minsky: Instead of saying my decision arises from free will, we might say,
"My decision was determined by internal forces I do not understand."
Jerry A. Coyne is a professor in the department of ecology and evolution at the University of Chicago.
The Chronicle Review
March 18, 2012
The Case Against the Case Against Free Will
By Alfred R. Mele
Is free will an illusion? Recent scientific arguments for an affirmative answer have a simple
structure. First, data are offered in support of some striking empirical proposition—for
example, that conscious intentions never play any role in producing corresponding actions.
Then this proposition is linked to a statement about what free will means to yield the
conclusion that it does not exist.
In my book Effective Intentions: The Power of Conscious Will (Oxford University Press,
2009), I explain why the data do not justify such arguments. Sometimes I am told that
even if I am correct, I overlook the best scientific argument for the nonexistence of free will.
This claim, in a nutshell, has two parts: Free will depends on the activity of nonphysical
minds or souls, and scientists have shown that something physical—the brain—is doing all
the work.
As the majority of philosophers understand the concept, free will doesn't depend at all on
the existence of nonphysical minds or souls. But philosophers don't own this expression. If
anyone owns it, people in general do. So I conducted some simple studies.
In one, I invited participants to imagine a scenario in which scientists had proved that
everything in the universe is physical and that what we refer to as a "mind" is actually a
brain at work. In this scenario, a man sees a $20 bill fall from a stranger's pocket, considers
returning it, and decides to keep it. Asked whether he had free will when he made that
decision, 73 percent answer yes. This study suggests that a majority of people do not see
having a nonphysical mind or soul as a requirement for free will.
If free will does not depend on souls, what is the scientific evidence that it is an illusion? I'll
briefly discuss just one study. Chun Siong Soon and colleagues, in a 2008 Nature
Neuroscience article, report the results of an experiment in which participants were asked
to make simple decisions while their brain activity was measured using functional
magnetic resonance imaging, or fMRI. The options were always two buttons, and nothing
hinged on which was pressed. Soon and coauthors write: "We found that two brain regions
encoded with high accuracy whether the subject was about to choose the left or right
response prior to the conscious decision," noting that "the predictive neural information ...
preceded the conscious motor decision by up to 10 seconds." The science writer Elsa
Youngsteadt represented these results as suggesting that "the unconscious brain calls the
shots, making free will an illusory afterthought."
In this study, however, the predictions are accurate only 60 percent of the time. Using a
coin, I can predict with 50-percent accuracy which button a participant will press next.
And if the person agrees not to press a button for a minute (or an hour), I can make my
predictions a minute (or an hour) in advance. I come out 10 points worse in accuracy, but I
win big in terms of time.
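Mele's accuracy comparison can be checked with a quick simulation (my sketch, not from the article; the 60-percent bias figure is his). A coin-flip predictor hits about 50 percent against random button presses, while a predictor that simply names the button a participant is slightly biased toward hits about 60 percent:

```python
import random

random.seed(0)
N = 100_000

# Baseline: participant presses left/right at random; we "predict" with a coin flip.
presses = [random.choice("LR") for _ in range(N)]
coin_guesses = [random.choice("LR") for _ in range(N)]
coin_accuracy = sum(p == g for p, g in zip(presses, coin_guesses)) / N

# Mele's hypothesis: a slight unconscious bias (here, 60% toward "L").
# A predictor that always names the biased button succeeds about 60% of the time.
biased_presses = ["L" if random.random() < 0.6 else "R" for _ in range(N)]
bias_accuracy = sum(p == "L" for p in biased_presses) / N

print(round(coin_accuracy, 2), round(bias_accuracy, 2))
```

Nothing here requires unconscious decision-making: a mild statistical tilt in behavior is enough to reproduce the reported prediction rate.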
So what is indicated by the neural activity that Soon and colleagues measured? My
money is on a slight unconscious bias toward a particular button—a bias that may give
the participant about a 60-percent chance of pressing that button next.
Given such flimsy evidence, I do not recommend betting the farm on the nonexistence of
free will.
Alfred R. Mele is a professor of philosophy at Florida State University. He is the director of Big Questions in Free
Will, an investigation of the science, philosophy, and theology of free will, supported by a $4.4-million grant
from the John Templeton Foundation.
The Chronicle Review
March 18, 2012
Free Will Is an Illusion, but You're Still
Responsible for Your Actions
By Michael S. Gazzaniga
Neuroscience reveals that the concept of free will is without meaning, just as John
Locke suggested in the 17th century. Do robots have free will? Do ants have free
will? Do chimps have free will? Is there really something in all of these machines
that needs to be free, and if so, from what? Alas, just as we have learned that the
world is not flat, neuroscience, with its ever-increasing mechanistic
understanding of how the brain enables mind, suggests that there is no one thing
in us pulling the levers and in charge. It's time to get over the idea of free will and
move on.
Understanding the mechanisms of mind is both daunting and thrilling, as well as
a central part of modern knowledge and life. We humans are about becoming less
dumb, and making better decisions to cope and adapt to the world we live in.
That is what our brain is for and what it does. It makes decisions based on
experience, innate biases, and mostly without our conscious knowledge. It is
beautiful to understand how that happens.
But brain determinism has no relevance to the concept of personal
responsibility.
The exquisite machine that generates our mental life also lives in a social world
and develops rules for living within a social network. For the social network to
function, each person assigns each other person responsibility for his or her
actions. There are rules for traffic that exist and are only understood and
adopted when cars interact. It is the same for human interactions. Just as we
would not try to understand traffic by studying the mechanics of cars, we should
not try to understand brains to understand the idea of responsibility.
Responsibility exists at a different level of organization: the social level, not in
our determined brains.
Viewing the age-old question of free will in this framework has many
implications. Holding people responsible for their actions remains untouched
and intact since that is a value granted by society. We all learn and obey rules,
both personal and social. Following social rules, as they say, is part of our DNA.
Virtually every human can follow rules no matter what mental state he or she is
in.
Thus Jared Loughner, who has been charged with shooting Representative
Gabrielle Giffords, is judged to be insane. Yet he followed one kind of rule when
he stopped to make change for the taxi driver on the way to murder and cause
mayhem. Should society really allow that the act of not following another kind of
rule (not to kill anyone) be accepted as an excuse for murder? Since
responsibility exists as a rule of social interaction and not normal, or even
abnormal brain processes, it makes no sense to excuse the breaking of one kind
of social rule but not another.
We should hold people responsible for their actions. No excuses. That keeps
everything simple and clean. Once accountability is established, we can then take
up the more challenging questions of what as a society we should do about
someone engaged in wrongdoing. We can debate punishment, treatment,
isolation, or many other ways to enforce accountability in a social network.
Those are truly difficult issues. Establishing how to think about responsibility is
not.
Michael S. Gazzaniga is director of the SAGE Center for the Study of the Mind at the University of
California at Santa Barbara. He is the author, most recently, of Who's In Charge? Free Will and the
Science of the Brain (HarperCollins, 2011).
The Chronicle Review
March 18, 2012
Want to Understand Free Will? Don't Look to
Neuroscience
By Hilary Bok
As a philosopher, I often find speculation about the implications of neuroscience for free will
perplexing. While some neuroscientists describe free will in ways that I recognize, others,
including some distinguished and thoughtful scientists, do not. Thus Benjamin Libet: If "our
consciously willed acts are fully determined by natural laws that govern the activities of nerve
cells in the brain," then free will is "illusory."
Most philosophers disagree.
Among philosophers the main division is between compatibilists, who believe that free will is
compatible with causal determinism, and incompatibilists, who believe that it is not. Almost all
compatibilists think that we are free. Most are not determinists, but they believe that we would
be free even if our actions are fully determined.
With the exception of those who work within a religious tradition, philosophers tend to be
naturalists who see individual mental events as identical with events in our brains. When we
say that a person's choice caused her action, we do not mean that she swooped in from
outside nature and altered her destiny; we mean that an event in her brain caused her to act.
On this view, the claim that a person chose her action does not conflict with the claim that
some neural processes or states caused it; it simply redescribes it.
For compatibilists, therefore, the problem of free will is not that neuroscience reveals our
choices as superfluous. It does not. Nor do compatibilists deny that our choices cause us to do
things. The problem of free will for compatibilists is not to preserve a role for deliberation and
choice in the face of explanations that threaten them with elimination; it is to explain how, once
our minds and our choices have been thoroughly naturalized, we can provide an adequate
account of human agency and freedom.
How can we reconcile the idea that our choices have scientific explanations with the idea that
we are free? Determinism does not relieve us of the need to make decisions. And when we
make decisions, we need some conception of the alternatives available to us. If we define an
alternative as an action that is physically possible, then determinism implies that we never
have more than one alternative. But since we cannot know in advance what we will choose, if
we define "alternative" this way, we will never know what our alternatives are. For the
purposes of deciding what to do, we need to define our alternatives more broadly: as those
actions that we would perform if we chose them.
A person whose actions depend on her choices has alternatives; if she is, in addition, capable
of stepping back from her existing motivations and habits and making a reasoned decision
among them, then, according to compatibilists, she is free.
Whether this view provides an adequate account of free will is not a problem neuroscience can
solve. Neuroscience can explain what happens in our brains: how we perceive and think, how
we weigh conflicting considerations and make choices, and so forth. But the question of
whether freedom and moral responsibility are compatible with determinism is not a scientific one,
and we should not expect scientists to answer it.
Whatever their views on the compatibility of freedom and determinism, most philosophers
agree that someone can be free only if she can make a reasoned choice among various
alternatives, and act on her decision; in short, only if she has the capacity for self-government.
Neuroscience can help us to understand what this capacity is and how it can be strengthened.
What, for instance, determines when we engage in conscious self-regulation, and how might
we ensure that we do so when we need to? If the exercise of self-government can deplete our
capacity for further self-government in the short run, what exactly is depleted, and how might
we compensate for its loss? Does self-government deplete our resources in the short run while
strengthening them over time, like physical exercise, or does it simply weaken our ability to
govern ourselves without any compensating benefit?
Neuroscience can answer those questions, and it can provide causal explanations of human
action, but it can't resolve the question of whether or not such explanations are compatible with
free will.
Hilary Bok is an associate professor of philosophy at the Johns Hopkins University. She is the author of Freedom and
Responsibility (Princeton University Press, 1998).
The Chronicle Review
March 18, 2012
The End of (Discussing) Free Will
By Owen D. Jones
The problem with free will is that we keep dwelling on it. Really, this has to stop. Free will is to
human behavior what a perfect vacuum is to terrestrial physics—a largely abstract endpoint
from which to begin thinking, before immediately moving on to consider and confront the
practical frictions of daily existence.
I do get it. People don't like to be caused. It conflicts with their preference to be fully self-actualized. So it is understandable that, at base, free-will discussions tend to center on
whether people have the ability to make choices uncaused by anything other than themselves.
But there's a clear answer: They don't. Will is as free as lunch. (If you doubt, just try willing
yourself out of love, lust, anger, or jealousy.)
All animals are choice machines for two simple reasons. First, no organism can behave in all
physically possible ways simultaneously. Second, alternative courses are not all equal. At any
given moment, there are far more ways to behave disastrously than successfully (just as there
are more ways to break a machine than to fix it). So persistence of existence consistently
depends on one's ability to choose nondisastrous courses of action.
Yet (indeed, fortunately) that choosing is channeled. Choices are initially constrained by the
obvious—the time one has to decide, and the volume of brain tissue one can deploy to the
task. Choices are also constrained by things we have long suspected but which science now
increasingly clarifies.
For example, human brains are not general-purpose processors, idly awaiting culture's
activating infusion of consciousness. Evolutionary processes pre-equip brains in all species
with some information-processing predispositions. Generally speaking, these increase the
probabilities that some combinations of environmental circumstances—immediate physical and
social factors, contexts, and the like—will yield one subset of possible (and generally
nondisastrous) behaviors rather than others.
Also, we now know that brains, though remarkable and often malleable, are functionally
specialized. That is, different brain regions have evolved to do different things—even though
they generally do more than one thing. As a consequence, impairments to specific areas of the
brain—through injury or disease, for example—can impede normal human decision-making.
And those impediments can, in turn, relax inhibitions, increase impulsive and addictive
behaviors, alter the ability to make moral judgments, or otherwise leave a person situated
dissimilarly from the rest of the population.
Which brings us to law. How will insights from the brain sciences affect the ways we assess a
person's responsibility for bad behavior? Answer: only somewhat, but sometimes significantly.
Many people assume that legal responsibility requires free will, such that an absence of free
will necessarily implies an absence of responsibility. Not true, as many scholars have amply
demonstrated. Full, complete, utterly unconstrained freedom to choose among available
actions might be nice to have, but it is not in fact necessary for a fair and functioning legal
system.
This is not to say that degrees of freedom are irrelevant to law. Science hasn't killed free will.
But it has clarified various factors—social, economic, cultural, and biological in nature—that
constrain it.
The existence of constraints only rarely excuses behavior entirely, as when a person hits someone during an
epileptic fit. But evidence of brain-based constraints—which can vary from small to large—
can be, and indeed have been, relevant in determining the severity of punishment. For
example, some jurors in a recent Florida case reported that evidence of abnormal brain
functioning warranted a murderer spending his life in prison, instead of being executed.
All behaviors have causes, and all choices are constrained. We need to accept this and adapt.
Brain sciences are revealing complex and interconnected pathways by which the information-processing activities of multiple brain regions coalesce to influence human decision making.
But this poses an advantage—neither a threat nor a revolutionary transition—to the legal
system. In the near term, these complexities are more likely to inform than to utterly transform
law's justice-driven efforts to treat people fairly and effectively.
Owen D. Jones is a professor of law and biological sciences at Vanderbilt University. His book Law and
Neuroscience, with Jeffrey Schall and Francis Shen, is forthcoming from Aspen Publishers next year.
The Chronicle Review
Free Will Does Not Exist. So What?
By Paul Bloom
I have a genetic condition. People like me are prone to violent fantasy and jealous rage;
we are over 10 times more likely to commit murder and over 40 times more likely to
commit sexual assault. Most prisoners suffer from my condition, and almost everyone
on death row has it. Relative to other people, we have an abundance of testosterone,
which is associated with dominance and aggression, and a deficit in oxytocin,
associated with compassion. My sons share my condition, and so does my father.
So, yes, I am male. The neuroscientist David Eagleman uses this example to illustrate
how our genetic blueprint partially determines our actions, including our moral
behavior. The rest is determined by our environments; by the forces that act upon us
throughout our journeys from zygotes to corpses. And this is it—we are physical
beings, and so our natures and our nurtures determine all that we are and all that we
do.
This conclusion does not feel right. Common sense tells us that we exist outside of the
material world—we are connected to our bodies and our brains, but we are not
ourselves material beings, and so we can act in ways that are exempt from physical law.
For every decision we make—from leaning over for a first kiss, to saying "no" when
asked if we want fries with that—our actions are not determined and not random, but
something else, something we describe as chosen.
This is what many call free will, and most scientists and philosophers agree that it is an
illusion. Our actions are in fact literally predestined, determined by the laws of physics,
the state of the universe, long before we were born, and, perhaps, by random events at
the quantum level. We chose none of this, and so free will does not exist.
I agree with the consensus, but it's not the big news that many of my colleagues seem to
think it is. For one thing, it isn't news at all. Determinism has been part of Philosophy
101 for quite a while now, and arguments against free will were around centuries
before we knew anything about genes or neurons. It's long been a concern in theology;
Moses Maimonides, in the 1100s, phrased the problem in terms of divine omniscience:
If God already knows what you will do, how could you be free to choose?
More important, it's not clear what difference it makes. Many scholars do draw
profound implications from the rejection of free will. Some neuroscientists claim that it
entails giving up on the notion of moral responsibility. There is no actual distinction,
they argue, between someone who is violent because of a large tumor in his brain and a
neurologically normal premeditated killer—both are influenced by forces beyond their
control, after all—and we should revise the criminal system accordingly. Other
researchers connect the denial of free will with the view that conscious deliberation is
impotent. We are mindless robots, influenced by unconscious motivations from within
and subtle environmental cues from without; these entirely determine what we think
and do. To claim that people consciously mull over decisions and think about
arguments is to be in the grips of a prescientific conception of human nature.
I think those claims are mistaken. In any case, none of them follow from determinism.
Most of all, the deterministic nature of the universe is fully compatible with the
existence of conscious deliberation and rational thought. These (physical and
determined) processes can influence our actions and our thoughts, in the same way
that the (physical and determined) workings of a computer can influence its output. It
is wrong, then, to think that one can escape from the world of physical causation—but
it is not wrong to think that one can think, that we can mull over arguments, weigh the
options, and sometimes come to a conclusion. After all, what are you doing now?
Paul Bloom is a professor of psychology and cognitive science at Yale University. His next book, Just Babies:
The Origins of Good and Evil, will be published next year by Crown. You can follow him at @paulbloomatyale.
Teenage Brains Are Malleable And Vulnerable,
Researchers Say
October 16, 2012, 10:36 AM ET
JON HAMILTON
Brain scans are showing researchers why it's important to treat problems like depression in teens. (Photo: iStockphoto.com)
Adolescent brains have gotten a bad rap, according to neuroscientists.
It's true that teenage brains can be impulsive, scientists reported at the Society for Neuroscience meeting in New
Orleans. But adolescent brains are also vulnerable, dynamic and highly responsive to positive feedback, they say.
"The teen brain isn't broken," says Jay Giedd, a child psychiatry researcher at the National Institute of Mental Health.
He says the rapid changes occurring in the brains of teenagers make these years "a time of enormous opportunity."
Part of the bad rap has come from studies suggesting that adolescent brains are "wired" to engage in risky
behavior such as drug use or unsafe sex, says BJ Casey of Weill Cornell Medical College.
These studies have concluded that teens are prone to this sort of behavior because the so-called reward systems in their
brains are very sensitive while circuits involved in self-control are still not fully developed, Casey says. The result has
been a perception that "adolescents are driving around with no steering wheel and no brake," she says.
Casey says a new study from her lab makes it clear that this isn't the case.
The study had teens and adults play a game where they got points for correctly answering questions about the motions
of dots on a screen. Meanwhile researchers measured activity in brain regions involved in decisions and rewards.
When a lot of points were at stake, teens actually spent more time studying the dots than adults, and brain scans showed
more activity in brain regions involved in making decisions.
"Instead of acting impulsively, the teens are making sure they get it right," Casey says. She says this shows how teens'
sensitivity to rewards can sometimes lead to better decisions.
Two other studies presented at the Society for Neuroscience meeting showed that the adolescent brain is literally
shaped by experiences early in life.
One of the studies involved 113 men who were monitored for depression from age 10 and then had brain scans at age
20. The scans showed that men who'd had an episode of depression had brains that were less responsive to rewards.
"They can't respond naturally when something good happens," says Erika Forbes at the University of Pittsburgh. She
says this shows why it's important to treat problems like depression in teens.
The other study looked at how the brain's outer layer of cortex, which plays a critical role in thinking and memory, was
affected by childhood experiences in 64 people. It found that this layer was thicker in children who got a lot of
cognitive stimulation and had nurturing parents, says Martha Farah of the University of Pennsylvania.
Finally, a study by researchers in the U.S. and U.K. showed how much the brain changes during adolescence in regions
involved in social interactions.
The study involved 288 people whose brains were scanned repeatedly starting at age 7. And the scans revealed
dramatic structural changes during adolescence in four regions that help us understand the intentions, beliefs and
desires of others, says Kathryn Mills of the Institute of Cognitive Neuroscience in London.
The results show that the tremendous social changes teenagers go through are reflected in their brains, Mills says. They
also show that these changes continue beyond the teen years, she says.
Cheating in the Fate vs Free Will Debate: The Importance of Perspective
Article by Kyle Headley
I apply different perspectives to fate and business
Perspective plays an important role in our decisions. Often, though, people take their own perspective for granted,
losing the ability to "think outside the box". I've noticed that even subtle variations can have great effects, and I'd
like to share one example of this. I'd also like to suggest that unique perspectives, even if vague, can lead to a
larger set of options to choose from. Even if we don't use those options directly, we may be able to incorporate
parts of them and begin analyzing their effects.
For millennia people have debated over the question of fate. Is the future predetermined or do we have the
opportunity to change our destiny? On one side are, for example, the chemists, who know that their chemicals
always behave the same way and that we are made of chemicals. They know about chemical bonds but have
never witnessed the "free will force". On the other side are, for example, the religious, who know that their god
has given us a purpose in this life. If our decisions don't change the future, why are we conscious at all? The
debate is far more complex on both sides.
What I'd like to point out is that these issues come from different perspectives of our existence. The ones I've
chosen could boil down to whether we are made of atoms or a soul. If we are atoms, then there must be a force
for free will, or not. If we are souls, then our god either gave us choices, or kept them from us. The two are
mutually exclusive, but only within each perspective. Because the perspectives are different one could imagine a
world where they both exist, and in that world fate and free will coexist as well, but only from the appropriate
perspectives.
Allow me to construct one such world for you. God in his infinite power has created our universe, but not just ours,
many, many more. Each one is constructed, start to finish, based on the rules of physics. Any creature living in
one would be bound by the law of Fate. Many of these universes are similar, differing only in one main aspect
(with minor differences as needed to make the physics work). God lines them all up neatly and creates some
souls to play in them. Each one is given a creature in one universe to begin life as. They watch as the story of that
creature's life unfolds, but occasionally, they get to make a choice. God allows them the Free Will to switch
universes depending on which choice they decide to make. When the creature's life has ended, the soul is judged
on which universe they ended up in.
As you can see, a scientist limited to one universe would observe only fate, but a soul would know that it has
complete freedom. A soulless object would never have free will, while a soul could not claim to be ruled by fate. I
put one perspective on top of the other and voila!, a world where fate and free will both exist, and do so from the
perspective shared by its supporters. This example was hypothetical, but in the real world the same thing can be
done. If people argue about how they want to build something, determine each goal with the perspective driving it,
unite those perspectives and you unite the goals. Red or blue? Why? To show personality? What about light or
dark? Now light red and dark blue are both acceptable!
Here's a perspective change. Imagine a landlord of an apartment building, two or three generations in the family.
His tenants have been there many years. He is a good businessman: the building is well cared for, as are the
grounds. He's raised the rent, but he has also added facilities to keep his tenants from looking elsewhere as they improve
their lives, and he has even brought some new ones in. Capitalism at work, right? Now the perspective change: same situation,
but instead of landlord we call him governor. Governor of his one building, but now it's a government and the
tenants are his citizens. The rent is now considered taxes, but the type of government can deal with the issue of
property rights; after all, after a few generations the building has been more than paid for by the tenants.
As promised earlier this perspective is rather vague, but at least look at what type of government this resembles.
With high taxes and many services provided by the government, it appears far more like socialism than
capitalism. With property in the family for years it seems more like a monarchy with a good king. If the business
was bought by an outside investor and changed character, would that be a coup? Should the tenants have a right to
vote on their own property manager like in a democracy? These are all options, options on how to view the
situation so that you can decide what would be best to improve it. In the example the landlord chose the socialist
way, but he could have just as easily chosen to give his tenants the right to pick the employees as a way to gain
their loyalty to the location. Or like a king knighting a peasant he could have asked one of the more able tenants
to become a partner in the business.
I won't claim that these are proper perspectives on the situation, but you can pull options out of the analogies. It's
good practice for dealing with other people as well, for every person has a different perspective on a situation. If
you only keep to your own, you'll never understand what an argument is about, should one ever come up. You
may even miss an opportunity to join the perspectives, as I did with fate and free will, creating a situation where
each person can work as they see fit without compromising the result. Think outside the box!
https://www.bestthinking.com/articles/society_and_humanities/philosophy/cheating-in-the-fate-vs-free-will-debatethe-importance-of-perspective
The Teen Brain: Still Under Construction
Introduction
One of the ways that scientists have searched for the causes of mental illness is by studying
the development of the brain from birth to adulthood. Powerful new technologies have
enabled them to track the growth of the brain and to investigate the connections between
brain function, development, and behavior.
The research has turned up some surprises, among them the discovery of striking changes
taking place during the teen years. These findings have altered long-held assumptions about
the timing of brain maturation. In key ways, the brain doesn’t look like that of an adult until
the early 20s.
An understanding of how the brain of an adolescent is changing may help explain a puzzling
contradiction of adolescence: young people at this age are close to a lifelong peak of physical
health, strength, and mental capacity, and yet, for some, this can be a hazardous age. Mortality
rates jump between early and late adolescence. Rates of death by injury between ages 15 and
19 are about six times the rate between ages 10 and 14. Crime rates are highest among
young males and rates of alcohol abuse are high relative to other ages. Even though most
adolescents come through this transitional age well, it’s important to understand the risk
factors for behavior that can have serious consequences. Genes, childhood experience, and the
environment in which a young person reaches adolescence all shape behavior. Adding to this
complex picture, research is revealing how all these factors act in the context of a brain that is
changing, with its own impact on behavior.
The more we learn, the better we may be able to understand the abilities and vulnerabilities of
teens, and the significance of this stage for life-long mental health.
The fact that so much change is taking place beneath the surface may be something for
parents to keep in mind during the ups and downs of adolescence.
The "Visible" Brain
A clue to the degree of change taking place in the teen brain came from studies in which
scientists did brain scans of children as they grew from early childhood through age 20. The
scans revealed unexpectedly late changes in the volume of gray matter, which forms the thin,
folding outer layer or cortex of the brain. The cortex is where the processes of thought and
memory are based. Over the course of childhood, the volume of gray matter in the cortex
increases and then declines. A decline in volume is normal at this age and is in fact a necessary
part of maturation.
The assumption for many years had been that the volume of gray matter was highest in very
early childhood, and gradually fell as a child grew. The more recent scans, however, revealed
that the high point of the volume of gray matter occurs during early adolescence.
While the details behind the changes in volume on scans are not completely clear, the results
push the timeline of brain maturation into adolescence and young adulthood. In terms of the
volume of gray matter seen in brain images, the brain does not begin to resemble that of an
adult until the early 20s.
The scans also suggest that different parts of the cortex mature at different rates. Areas
involved in more basic functions mature first: those involved, for example, in the processing of
information from the senses, and in controlling movement. The parts of the brain responsible
for more "top-down" control, controlling impulses, and planning ahead—the hallmarks of adult
behavior—are among the last to mature.
What's Gray Matter?
The details of what is behind the increase and decline in gray matter are still not completely
clear. Gray matter is made up of the cell bodies of neurons, the nerve fibers that project from
them, and support cells. One of the features of the brain's growth in early life is that there is
an early blooming of synapses—the connections between brain cells or neurons—followed by
pruning as the brain matures. Synapses are the relays over which neurons communicate with
each other and are the basis of the working circuitry of the brain. Already more numerous than
an adult's at birth, synapses multiply rapidly in the first months of life. A 2-year-old has about
half again as many synapses as an adult. (For an idea of the complexity of the brain: a cube of
brain matter, 1 millimeter on each side, can contain between 35 and 70 million neurons and an
estimated 500 billion synapses.)
Scientists believe that the loss of synapses as a child matures is part of the process by which
the brain becomes more efficient. Although genes play a role in the decline in synapses, animal
research has shown that experience also shapes the decline. Synapses "exercised" by
experience survive and are strengthened, while others are pruned away. Scientists are working
to determine to what extent the changes in gray matter on brain scans during the teen years
reflect growth and pruning of synapses.
A Spectrum of Change
Research using many different approaches is showing that more than gray matter is changing:

Connections between different parts of the brain increase throughout childhood and well into
adulthood. As the brain develops, the fibers connecting nerve cells are wrapped in a protein
that greatly increases the speed with which they can transmit impulses from cell to cell. The
resulting increase in connectivity—a little like providing a growing city with a fast, integrated
communication system—shapes how well different parts of the brain work in tandem. Research
is finding that the extent of connectivity is related to growth in intellectual capacities such as
memory and reading ability.

Several lines of evidence suggest that the brain circuitry involved in emotional responses is
changing during the teen years. Functional brain imaging studies, for example, suggest that the
responses of teens to emotionally loaded images and situations are heightened relative to
younger children and adults. The brain changes underlying these patterns involve brain centers
and signaling molecules that are part of the reward system with which the brain motivates
behavior. These age-related changes shape how much different parts of the brain are activated
in response to experience, and in terms of behavior, the urgency and intensity of emotional
reactions.

Enormous hormonal changes take place during adolescence. Reproductive hormones shape
not only sex-related growth and behavior, but overall social behavior. Hormone systems
involved in the brain's response to stress are also changing during the teens. As with
reproductive hormones, stress hormones can have complex effects on the brain, and as a
result, behavior.

In terms of sheer intellectual power, the brain of an adolescent is a match for an adult's. The
capacity of a person to learn will never be greater than during adolescence. At the same time,
behavioral tests, sometimes combined with functional brain imaging, suggest differences in
how adolescents and adults carry out mental tasks. Adolescents and adults seem to engage
different parts of the brain to different extents during tests requiring calculation and impulse
control, or in reaction to emotional content.

Research suggests that adolescence brings with it brain-based changes in the regulation of
sleep that may contribute to teens' tendency to stay up late at night. Along with the obvious
effects of sleep deprivation, such as fatigue and difficulty maintaining attention, inadequate
sleep is a powerful contributor to irritability and depression. Studies of children and
adolescents have found that sleep deprivation can increase impulsive behavior; some
researchers report finding that it is a factor in delinquency. Adequate sleep is central to
physical and emotional health.
The Changing Brain and Behavior in Teens
One interpretation of all these findings is that in teens, the parts of the brain involved in
emotional responses are fully online, or even more active than in adults, while the parts of the
brain involved in keeping emotional, impulsive responses in check are still reaching maturity.
Such a changing balance might provide clues to a youthful appetite for novelty, and a
tendency to act on impulse—without regard for risk.
While much is being learned about the teen brain, it is not yet possible to know to what extent
a particular behavior or ability is the result of a feature of brain structure—or a change in brain
structure. Changes in the brain take place in the context of many other factors, among them,
inborn traits, personal history, family, friends, community, and culture.
Teens and the Brain: More Questions for Research
Scientists continue to investigate the development of the brain and the relationship between
the changes taking place, behavior, and health. The following questions are among the
important ones that are targets of research:

How do experience and environment interact with genetic preprogramming to shape the
maturing brain, and as a result, future abilities and behavior? In other words, to what extent
does what a teen does and learns shape his or her brain over the rest of a lifetime?

In what ways do features unique to the teen brain play a role in the high rates of illicit
substance use and alcohol abuse in the late teen to young adult years? Does the adolescent
capacity for learning make this a stage of particular vulnerability to addiction?

Why is it so often the case that, for many mental disorders, symptoms first emerge during
adolescence and young adulthood?
This last question has been the central reason to study brain development from infancy to
adulthood. Scientists increasingly view mental illnesses as developmental disorders that have
their roots in the processes involved in how the brain matures. By studying how the circuitry of
the brain develops, scientists hope to identify when and for what reasons development goes
off track. Brain imaging studies have revealed distinctive variations in growth patterns of brain
tissue in youth who show signs of conditions affecting mental health. Ongoing research is
providing information on how genetic factors increase or reduce vulnerability to mental illness;
and how experiences during infancy, childhood, and adolescence can increase the risk of
mental illness or protect against it.
The Adolescent and Adult Brain
It is not surprising that the behavior of adolescents would be a study in change, since the brain
itself is changing in such striking ways. Scientists emphasize that the fact that the teen brain is
in transition doesn't mean it is somehow not up to par. It is different from both a child's and
an adult's in ways that may equip youth to make the transition from dependence to
independence. The capacity for learning at this age, an expanding social life, and a taste for
exploration and limit testing may all, to some extent, be reflections of age-related biology.
Understanding the changes taking place in the brain at this age presents an opportunity to
intervene early in mental illnesses that have their onset at this age. Research findings on the
brain may also serve to help adults understand the importance of creating an environment in
which teens can explore and experiment while helping them avoid behavior that is destructive
to themselves and others.
Alcohol and the Teen Brain
Adults drink more frequently than teens, but when teens drink they tend to drink larger
quantities than adults. There is evidence to suggest that the adolescent brain responds to
alcohol differently than the adult brain, perhaps helping to explain the elevated risk of binge
drinking in youth. Drinking in youth, and intense drinking are both risk factors for later alcohol
dependence. Findings on the developing brain should help clarify the role of the changing
brain in youthful drinking, and the relationship between youth drinking and the risk of
addiction later in life.
You’re Reading Romeo & Juliet Wrong: You’re Supposed to Hate
Romeo
And not just the Leonardo DiCaprio Romeo, either.
Author: Glen Tickle
Website: The Mary Sue
URL: http://www.themarysue.com/romeo-and-julietcorrection/
Date published: Friday, January 31st 2014 at 2:43 pm
Shakespeare’s Romeo & Juliet is a classic love story, but it’s one that may be misunderstood. It’s not the story of
a young couple rebelling against their parents. It’s the story of Juliet falling victim to Romeo. It’s a tragedy
because of what happens to Juliet, not because their relationship doesn’t work out. We’re supposed to hate
Romeo.
This idea was presented to me by comedian Jay Black, a former English teacher who was explaining his theory to
a student at Edinboro University last week after a show. (More… so much more on Jay Black and that show
coming up on the site soon.)
Romeo & Juliet was written around 1595 (there’s some debate) and first performed soon after. We mention the
date here because it’s important to why you’re supposed to hate Romeo. There was rampant famine in England
in the 1590s among the poor. Most of the audience showing up to a performance of Romeo & Juliet was
probably hungry. They pay what little money they have to see a play to forget their misery for a few hours. Then
out saunters Romeo, a little rich boy, whining about love. Besides love, what’s one of the first lines out of his
mouth? He asks Benvolio:
Where shall we dine?
Imagine a theater full of starving people hearing that delivered by some beautiful rich kid. He has so many
options for where he’s going to get his next meal that he can’t even decide. They’d have thrown tomatoes if
they weren’t so hungry.
It’s semiotics. The same way a filmmaker now might show a villain being mean to an animal to signal to the
audience that this is the bad guy, Shakespeare included this line to incite the feeling in the audience that they
should hate this guy.
Besides talking about food when we first meet him, Romeo is whining about love, but really he’s just mad that
Rosaline won’t sleep with him. When he meets Juliet, he doesn’t fall instantly in love, he sees someone he thinks
he can have sex with. He uses the fact that Juliet has fallen for him to manipulate her.
Romeo is the worst.
Black told me this theory is one he came to on his own in studying the play, but admits it’s probably not a
particularly unique take on the idea. In researching this post, I found no shortage of theories and alternate
interpretations of the text, but as his was the first I had heard along these lines, he’s included here as the
source.
Beyond Black’s thoughts on Romeo, I have developed some of my own about Paris to further support the idea
that Romeo is a villain.
Paris tends to be seen as the guy that Juliet is having forced upon her by her parents, but his conversation with
Lord Capulet makes it clear that Capulet doesn’t want them to be married for at least two years, and that
although he likes Paris, the young man still needs to win Juliet over. Capulet tells Paris in Act I Scene II:
But woo her, gentle Paris, get her heart,
My will to her consent is but a part;
An she agree, within her scope of choice
Lies my consent and fair according voice.
Paris isn’t being forced on anybody. He loves Juliet. She is Paris’ dying thought at the end of the play after
Romeo kills him: “O, I am slain/If thou be merciful/Open the tomb, lay me with Juliet.”
In researching this, I found another theory on the site Shmoop.com through a Shakespeareforalltime.com post
about Juliet’s virginity. It proposes that Juliet’s reluctance to marry Paris isn’t because she’s so in love with
Romeo, it’s that she can’t marry him because he’ll know she’s no longer a virgin. As Peter, the writer
for Shakespeareforalltime.com, points out, there isn’t much in the text to support this directly, but most of
Juliet’s reluctance is about the idea of marriage, and not about Paris specifically.
Whether Juliet realizes the consequence of letting Romeo up on that balcony or not, it’s still true.
So Romeo, in an attempt to get laid, ruins Juliet’s prospects of marrying Paris, kills her cousin, gets banished, and
drives a 13-year-old girl to suicide. Romeo’s the bad guy here. Juliet kills herself because her love, Romeo, is
dead. Romeo does it because he’s screwed. He’s already been banished, killed Tybalt, and now Paris. What do
you think happens next if he walks out of that tomb?
When he finds Juliet “dead,” that’s the last straw. His whole world’s been thrown into upheaval over this girl,
and now she’s dead. Romeo, already a desperate man in a desperate situation, doesn’t see any other option
than death.
So why, then, do we see it as a story about two crazy kids in love? Probably because that’s what people want to
see. We’d rather see two kids kill themselves because they’re so in love and the world just doesn’t understand
than watch a play where a sex-crazed maniac drives a 13-year-old to kill herself.
How Your Moral Decisions are Shaped
by a Bad Mood
Weighty choices can be shifted by surprising factors
March 12, 2013 |By Travis Riddle
Imagine you’re standing on a footbridge over some trolley tracks. Below you, an out-of-control
trolley is bearing down on five unaware individuals standing on the track. Standing next to you
is a large man. You realize that the only way to prevent the five people from being killed by the
trolley is to push the man off the bridge, into the path of the trolley. His body would stop the
trolley, saving the lives of the five people further down the track.
What would you do? Would you push the man to save the others? Or would you stand by and
watch five people die, knowing that you could have saved them? Regardless of which option
you choose, you no doubt believe that it will reflect your deeply held personal convictions, not
trifles such as your mood.
Well, think again. In a paper published in the March edition of the journal Cognition, a group
of German researchers have shown that people’s mood can strongly influence how they
respond to this hypothetical scenario. Though this general observation is well-known in the
literature on moral judgments and decision making, the current paper helps to resolve a
question which has long lurked in the background. That is, how does this happen? What is the
mechanism through which moods influence our moral decisions?
Early research showed a difference between personal moral decisions, such as the footbridge
problem above, and impersonal moral decisions, such as whether to keep money found in a lost
wallet. Areas of the brain usually characterized as responsible for processing emotional
information seemed to be more strongly engaged when making these personal as opposed to
impersonal moral decisions, they found. These scientists concluded that emotions were playing
a strong role in these personal moral judgments while the more calculating, reasoning part of
our mind was taking a siesta.
Unfortunately, given the various shortcomings of previous investigations on this particular
topic, there are a variety of other explanations for the observation that emotions, or the more
general emotional states known as moods, affect how people may respond to the footbridge
scenario.
For example, moods could influence the thought process itself. This is the “moral thought”
hypothesis: just as something like attention may change our thought process by biasing how we
perceive two choices, mood could also bias our thought process, resulting in different patterns
of moral thinking. This is different from the “moral emotion” hypothesis, which suggests that
emotions directly change how we feel about the moral choice. That is, our good mood could
make us feel better (or worse) about potentially pushing, and therefore make us more (or less) likely
to do it. Resolving this ambiguity with neuroimaging studies such as the one detailed above is
difficult because of fMRI’s low temporal resolution – a brain scan is similar to a photograph taken
with the exposure set to a couple of seconds. This makes it difficult to faithfully capture events
which happen quickly, such as whether moods change the experience of the decision, or if they
directly influence the thought process.
To test these competing ideas, participants were first put into a specific mood by listening to
music and writing down an autobiographical memory. Those in the positive mood condition
listened to Mozart’s Eine kleine Nachtmusik and wrote down a positive memory, while those in
the negative mood condition listened to Barber’s Adagio for Strings, Opus 11 and wrote down a
negative memory. The participants in the neutral mood condition listened to
Kraftwerk’s Pocket Calculator and wrote about a neutral memory.
After this mood induction procedure, participants were then presented with the trolley
scenario. Some participants were asked: “Do you think it is appropriate to be active and push
the man?” while others were asked “Do you think it is appropriate to be passive and not push
the man?”.
Participants in a positive mood were more inclined to agree to the question, regardless of
which way it was asked. If asked if it was okay to push, they were more likely to push. If asked if
it was okay not to push, they were more likely to not push. The opposite pattern was found for
those in a negative mood.
If mood directly changed our experience of potentially pushing — the moral emotion
hypothesis — then putting people in a positive mood should have made them more likely to
push, no matter how the question was asked. The ‘moral thought’ hypothesis, on the other
hand, accounts for these results quite nicely. Specifically, it is known from previous
research that positive moods validate accessible thoughts, and negative moods invalidate
accessible thoughts. So, for example, if I ask you if it’s okay to push, you will begin to consider
the act of pushing, making this thought accessible. If you’re in a positive mood, that mood acts
on this thought process by making you more likely to feel as though this is an acceptable
behavior – it validates the thought of pushing. On the other hand, if I were to ask if it is okay to
not push, the positive mood should validate the thought of not pushing, leading you to feel like
not pushing is an acceptable behavior. Negative mood, which invalidates accessible thought,
has a parallel effect, but in the opposite direction. Thus, this idea fits well with the observed
pattern of results in this experiment.
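The contrast between the two hypotheses can be sketched as a toy model. This is only an illustration of the logic described above, not the paper's analysis, and all numbers (the 0.5 baseline, the ±0.2 mood shift) are invented for the example. Each function returns a notional probability of agreeing with the question as asked:

```python
# Toy contrast between the two hypotheses about how mood shifts
# trolley-problem answers. Illustrative numbers only.

MOOD_SHIFT = {"positive": 0.2, "neutral": 0.0, "negative": -0.2}

def moral_emotion(mood, framing):
    """'Moral emotion' hypothesis: mood directly changes how pushing
    feels, regardless of how the question is framed. Returns the
    probability of agreeing with the question as asked."""
    base = 0.5
    if framing == "push":
        return base + MOOD_SHIFT[mood]   # feeling better -> more okay with pushing
    else:  # framing == "not push"
        return base - MOOD_SHIFT[mood]   # feeling better about pushing -> disagree

def moral_thought(mood, framing):
    """'Moral thought' hypothesis: mood validates (or invalidates)
    whichever action the question makes accessible, so it shifts
    agreement the same way under either framing."""
    base = 0.5
    return base + MOOD_SHIFT[mood]

# Observed pattern: positive mood -> MORE agreement under both framings.
for framing in ("push", "not push"):
    assert moral_thought("positive", framing) > moral_thought("neutral", framing)

# The moral-emotion model cannot reproduce that: under the "not push"
# framing it predicts LESS agreement in a positive mood.
assert moral_emotion("positive", "not push") < moral_emotion("neutral", "not push")
```

Running the assertions shows that only the "moral thought" model reproduces the framing-independent agreement boost that the experiment actually found.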
These findings raise some further questions, some of which psychologists have been
attempting to answer for a long time. Emotions and logical thought are frequently portrayed as
competing processes, with emotions depicted as getting in the way of effective decision-making. The results here are another demonstration that instead of competing, our emotions
and our cognitions interact and work closely to determine our behaviors. In fact, some
researchers have recently begun to suggest that the division between these two is rather tough
to make, and there may not actually be any meaningful difference between thought and
emotion. After all, if moods and emotions play a fundamental role in information processing,
what differentiates them on a functional level from other basic kinds of cognitive processes,
such as attention or memory? This paper obviously doesn’t resolve this issue, but it is certainly
another piece of the puzzle.
It would also be exciting, as the authors say, to see how more specific emotions might influence
our moral decision-making. Anger and sadness are both negative emotions, but differ in
important ways. Could these subtle differences also lead to differences in how we make moral
judgments?
This paper demonstrates that our professed moral principles can be shifted by subtle
differences in mood and how a question is posed. Though there are plenty of implications for
our daily lives, one that arguably screams the loudest concerns the yawning gap between how
humans actually think and behave, and how the legal system pretends they think and behave.
The relative rigidity of western law stands in stark contrast to the plasticity of human thought
and behavior. If a simple difference in mood changes how likely one person is to throw another
over a footbridge, then does this imply that the law should account for a wider variety of
situational factors than it does presently? Regardless of how you feel, it is clear that this paper,
and behavioral science in general, should contribute to the decision. Having a legal system
based on reality is far preferable to one based on fantasy.
ABOUT THE AUTHOR(S)
Travis Riddle is a doctoral student in the psychology department at Columbia University. His work in the Sparrow
Lab focuses on the sense of control people have over their thoughts and actions, and the perceptual and self-regulatory
consequences of this sense of control.