Church and State

Throughout world history, religions and governments have coexisted in many kinds of relationships—sometimes at
odds with one another, sometimes closely linked, sometimes operating totally apart. Every nation has its own
relationship with religion. In the United States, the Constitution provides for the separation of church and state,
detaching the function of government from the support or practice of religion. The goal of the nation’s founders was to
ensure religious freedom for everyone in a society that included many faiths. Separation of church and state, however,
is neither clear-cut nor simple. In the legislature and especially in the courts, debate continues about issues from school
prayer to taxation that involve the relationship between religion and government.
Governments and Religions
Historians and sociologists have classified societies according to the degree of connection or separation between church
and state. Church-state relationships generally fall into two broad types. One is called "interpenetration," in which
church and state are closely associated, and political and religious actions are unified. The other is "separation," in
which church and state are separate institutions with no power to control each other.
Variations exist within these types. One version of interpenetration can be found in Iran, which is a theocracy. In a
theocracy the authority of religion dominates the state. Since Iran’s 1979 revolution, Islamic law has been used to settle
all disputes, and only religious figures can hold important political, administrative, or judicial positions. In
caesaropapism, a second kind of interpenetration, the state dominates religion. Russia has had this kind of church-state
relationship since the late 1600s, when the Russian Orthodox Church became an office of the state. During the years of
communist rule under the Soviet Union, the state tightly controlled and nearly abolished the church. Since the end of
that era, Russia’s government has exercised control in a different way by establishing the Orthodox Church as the
official state church, although citizens can practice other religions.
There are also two main versions of church-state separation, the "two-powers" model and the "strict separation" model.
The two-powers model exists in Poland, where church and state have authority in distinctly different areas of public
life. In that country, the Roman Catholic Church has been an important element of national identity as well as a
political force, but the church has never dominated nor been dominated by the state. Relations between church and state
in a two-power structure can range from hostility to government sponsorship of a state church. However, there is
always some mixing of influence and interests. By contrast, the strict separation model, such as that in the United States,
eliminates links between the state and religion. Each institution is independent of the other and without influence over
it.
Church and State in the United States
The relationship between religion and government in the United States is defined by the Constitution. Yet the
Constitution does not provide ready-made answers for today’s church-state issues, such as the role of religion in public
schools. Thomas Jefferson, one of the nation’s founders, believed the Constitution created a "wall of separation"
between church and state. However, the Supreme Court has not upheld this concept in all of its decisions. Federal
courts, especially the Supreme Court, are responsible for applying the constitutional principle of separation of church
and state to modern life.
The Establishment and Free Exercise Clauses
The First Amendment to the Constitution states that "Congress shall make no law respecting an establishment of
religion, or prohibiting the free exercise thereof." The first part of this phrase is known as the "Establishment Clause,"
which the courts interpret as meaning that the government cannot declare an official religion or favor one religion over
another. The clause also prohibits laws that aid one religion or all religions or use tax money to support or aid any
religion or religious institution. The second part of the phrase, the "Free Exercise Clause," protects followers of all
religions (as well as atheists) from laws that would single them out.
The two clauses were designed to ensure religious freedom for all Americans. The framers of the Constitution were
especially determined that Americans should not be forced to support any religion through taxation, a practice that
Jefferson called "sinful and tyrannical." The nation’s founders believed that separation protected government from
improper religious influence. It also protected religion from undue government influence.
At times, the Establishment and Free Exercise clauses have appeared to work against each other. One forbids the
government from promoting religion, while the other forbids government interference in religious practice. Balancing
the two clauses is a constant challenge to the Supreme Court. In the 1963 case of Sherbert v. Verner, for example, the
Court ruled that South Carolina could not refuse unemployment benefits to a woman who lost her job for religious
reasons. She had been fired because she would not work on Saturday, the Sabbath of her Seventh-Day Adventist faith.
By declaring the woman eligible for benefits, the Court protected her right to free exercise of religion without penalty.
But because the ruling also singled out her religious beliefs for special protection, it appeared to violate the
Establishment Clause. Justice Potter Stewart referred to this tension between the two clauses as a "double-barreled
dilemma."
The Court took a different view in 1990 in Employment Division v. Smith, in which two Native American workers
were fired for using the drug peyote, which they took as part of a religious ceremony. The Oregon Supreme Court ruled
that the state had to provide the workers with unemployment compensation. However, the U.S. Supreme Court
overturned this decision on the grounds that the state law prohibiting drug use was not intended to interfere with
religious practice. Therefore, the Free Exercise Clause permitted the state to include drugs used for religious purposes
in its general prohibition.
The Role of the Supreme Court
Since the 1940s, the Supreme Court has struggled to fully define the nonestablishment principle and to apply it to
current issues. These have included the place of religion and prayer in public schools and public life and the role of
religious symbols or texts in tax-supported public buildings. Supreme Court decisions between the late 1940s and the
mid-1980s generally followed closely the principle of strict separation. Rulings in the early 1960s, for example, held
that prayer and Bible readings in schools violated the Establishment Clause.
In 1971, the Supreme Court developed a three-part test for reviewing challenges to the Establishment Clause in Lemon
v. Kurtzman. To pass the Lemon test, a law must have a secular purpose, must neither advance nor inhibit religion as its
primary or principal effect, and must avoid "excessive government entanglement with religion." The Court admitted
that this approach was complex and open to multiple interpretations. It has at times proved difficult for lower courts to
apply consistently.
In the mid-1980s, there were two new developments in the church-state issue. First, although the courts continued to
support some earlier interpretations of the Establishment Clause, they strongly questioned others. In cases involving
government aid to religious institutions, religion in public schools, and religious symbols in public places, some rulings
sought an alternative to maximum separation. The courts seemed to support the idea that some government support of
religion might be acceptable as long as no religion was favored over others. An example of this position was the Bowen
v. Kendrick decision in 1988 that allowed federal funds to go to both religious and secular institutions for the purpose
of counseling teenagers on sexuality and pregnancy.
The second major change in church-state law has been the growing use of the Free Exercise Clause as a starting point
for lawsuits. Under the Free Exercise Clause, individuals and groups made constitutional attacks on laws that were not
intentionally hostile to religion but nevertheless interfered with its practice. In 1986 a captain in the Air Force, an
Orthodox Jew, claimed that by preventing him from wearing the skullcap called for by his faith, military regulations
interfered with his freedom of religion. The Court rejected this claim and in general has shown less flexibility toward
Free Exercise claims than toward easing some of the limits derived from the Establishment Clause.
Some religious groups have continued to fight for greater inclusion of religion in public life. Critics of the strict
separation of church and state argue that there is no specific reference to the "separation of church and state" in the
Constitution. A new tactic now being tested in the federal courts is to seek protection for religion under the First
Amendment’s "Speech Clause," which provides that "Congress shall make no law… abridging the freedom of
speech…" Some wonder what the Supreme Court’s church and state decisions will look like now that Justice David
Souter (1939– ), an ardent defender of the separation of church and state, has been replaced on the Court by Justice
Sonia Sotomayor (1954– ). As of 2010, Sotomayor’s views on the separation of church and state had not yet been
tested in her short time on the Court.
Source Citation:
"Church and State." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints
In Context. Web. 9 Apr. 2012.
The U.S. Supreme Court Should Limit the Role of Religion in Public Life
"The time has come for the nation's political left to remind voters that so many of the rights and privileges that people enjoy
today were established more than a generation ago by a Supreme Court that viewed the Constitution as a tool for expanding
and defending human dignity and independence."
Frederick S. Lane is an author and a professor at Bernard M. Baruch College, the City University of New York. In the following
viewpoint, Lane predicts how the current U.S. Supreme Court led by Justice John Roberts would rule on a variety of issues
regarding the separation of church and state. Lane notes the Christian right has had significant success in reshaping the ideology
of the Supreme Court in the past several decades and if liberals want to ensure fundamental rights of religious freedom and
privacy they must remind voters of the importance of such values.
As you read, consider the following questions:
1. What is the Lemon test, as described by the author?
2. How does the author predict the current U.S. Supreme Court will rule on the issue of displaying the Ten Commandments
in public buildings?
3. Why would the "religious right" view the retirement of Justice Sandra Day O'Connor as a loss, according to the author?
While keeping in mind that there are no guarantees as to how the members of the [U.S. Supreme] Court will vote on a
particular issue (as the Religious Right is painfully aware), there are some predictions that can be made about how the
relatively [2005] new [Chief Justice John] Roberts Court might decide various religious issues, and what might happen
in the future.
The Lemon Test
It is difficult to hold out much hope for the continued viability of the Lemon test, the current standard for evaluating
whether a particular government program or action violates the principle of separation of church and state. With
increasing frequency, conservative members of the Court have shown a willingness to simply ignore the Lemon test, or
have narrowly construed it to the point of insignificance. The Roberts Court may formally abandon it altogether.
The replacement of William Rehnquist with John Roberts is not likely to make much difference in the Court's actual
voting pattern, but it is worth remembering that Roberts worked on the solicitor general's brief in Lee v. Weisman, in
which the elder [George H.W.] Bush administration asked the Court to abandon the test altogether. Of much greater
potential significance is the replacement of Sandra Day O'Connor with the presumably more consistent Samuel Alito.
On issue after issue, Alito may tip the Court in a more conservative direction.
In Lee, for instance, O'Connor joined the majority opinion drafted by Anthony Kennedy. If Alito had been serving
instead, it seems likely that the majority opinion would have been written by Justice [Antonin] Scalia, thereby
abandoning Lemon and upholding prayer during school graduation ceremonies. Far more importantly, it is likely that
Scalia would have used the case to announce a much narrower church-state test, one that would find "establishment of
religion" by a government program only when there was financial support for religion and the threat of penalty for
noncompliance or nonadherence (such as jail time for not attending church). To put it mildly, Scalia's approach would
eviscerate contemporary boundaries between church and state.
The Ten Commandments
The church and state issue on which the Court seems the most divided is the publicly supported display of the Ten
Commandments.... In 2005 the Supreme Court simultaneously issued two contradictory 5-4 decisions involving
different types of Decalogue displays: in Van Orden v. Perry the Court voted to uphold the constitutionality of a stone
monument on the grounds of the Texas state capitol, but in McCreary County v. ACLU of Kentucky it struck down a Kentucky law
requiring the courthouse display of the Ten Commandments.
Given the narrow margins and the struggles by the lower courts to interpret and apply the Court's decisions, it seems
likely that the Court will take up the issue again in the near future. One entertaining possibility is the Summum v.
Pleasant Grove City case, in which a relatively new religion is suing to have its pyramid-shaped monument, inscribed
with its Seven Aphorisms, displayed next to the city's Ten Commandments monument. The American Center for Law
and Justice, the legal foundation established by 700 Club televangelist Pat Robertson, is actively soliciting funds to
help take the case to the Supreme Court on behalf of the city.
If a new Ten Commandments case reaches the Supreme Court, it is likely to get a sympathetic hearing. Justice
O'Connor voted against the constitutionality of the Ten Commandments display in both Van Orden and McCreary. If
both Chief Justice Roberts and Justice Alito align themselves with the existing conservative bloc of the Court, they
could arguably legalize the publicly supported display of the Ten Commandments in every public building in the
country.
Publicly Supported Holiday Displays
The issues surrounding holiday displays ... seem more settled. Admittedly, the decision in Allegheny County v. Greater
Pittsburgh ACLU is hardly an example of doctrinal clarity, given the number of separate opinions written by the Court,
but the justices have not shown any interest in revisiting the issue in nearly twenty years. However, it is important to
remember that four members of the Allegheny County Court (Chief Justice Rehnquist and Justices [Byron] White,
Scalia, and Kennedy) voted to uphold the constitutionality of Pittsburgh's Nativity display, notwithstanding its
prominent placement in a government building, its lack of secular elements, and its overtly sectarian message....
Prayer and Evolution in the Public Schools
The Court decided Wallace v. Jaffree, the case that struck down the Alabama law providing for a moment of silence
"for meditation or silent prayer," more than twenty years ago. It seems unlikely, even given the personnel changes that
have occurred since then, that the Court will revisit the issue of prayer in the classroom.
A more likely candidate for reconsideration is the issue of prayer at school graduations or other events, such as football
games. The Lee v. Weisman case, which invalidated the practice of prayer in graduation ceremonies, was a 5-4
decision made narrower by the fact that Justice Kennedy switched sides during the deliberations. The margin on the
football-game prayer decision, Santa Fe Independent School District v. Doe, was slightly larger (6-3), but hardly
unassailable. In both cases, Justice O'Connor joined the majority in invalidating the challenged governmental practices.
Should the Court reverse one or both of those decisions, then not only will the practice of prayer become far more
prevalent at school functions, but the Court would inevitably be entangled in doctrinal battles over how such prayers
may be phrased and delivered. The Court has enough of a challenge conducting the constitutional parsing for which it
is trained; it would be particularly ill-suited to the task of splitting theological hairs. More importantly, such debates by
their very nature will shatter the concept of separation of church and state.
As for the Christian Right's repeated efforts to water down the teaching of evolution, the chances seem low that the
Roberts Court will take up the issue in the near future. The Court's ruling twenty years ago in Edwards v. Aguillard ...
firmly (7-2) rejected the parallel teaching of "creation science," and the 2005 U.S. District Court decision in Kitzmiller
v. Dover Area School District appears to have substantially slowed the push to get public schools to incorporate
"intelligent design" into their curricula.
Religion in the Workplace
The Court's position regarding the role of religion in the workplace ... is somewhat less ideologically consistent than
other church-state issues. When the Court ruled in Employment Div., Ore. Dept. of Human Resources v. Smith that
employees could not claim a "free exercise" exception to a generally applicable criminal law, the 6-3 majority consisted
of justices from the Court's left, right, and center voting blocs. Only the Court's most liberal justices (William Brennan,
Harry Blackmun, and Thurgood Marshall) dissented.
When Congress tried to reverse the Smith decision legislatively, by passing the Religious Freedom Restoration Act, a
similar 6-3 majority (equally mixed ideologically) struck down the law as it applied to state legislation in City of
Boerne v. Flores. Justice O'Connor (along with Justices [David] Souter and [Stephen] Breyer) dissented in Boerne,
arguing that the Court had erred in Smith by making it easier for a government to justify a "substantial burden" on a
religious practice.
The Religious Right usually does not have much positive to say about Justice O'Connor, but in this one area, at least,
the movement may see her departure as a loss. In balancing the religious rights of the individual versus the police
power of the state, O'Connor made it clear that the state should be required to show both a compelling state interest and
a narrowly tailored approach. That is not a view likely to be endorsed by her replacement, Justice Alito.
The Right to Privacy
Forty years after the Supreme Court first recognized a "right to privacy," the legal doctrine seems firmly and securely
established.... Every member of the current Supreme Court (aside from Justice [John Paul] Stevens) has publicly stated
that he or she believes that the Constitution contains such a right, even though neither the phrase nor even the word
"privacy" appears in the Constitution. Absent the unlikely appointment of a justice with views as legally rigid as Robert
Bork, there is little likelihood that the Court will flatly overturn the right to privacy that evolved from the Griswold and
Eisenstadt cases. Only in the Christian Right's most salacious dreams would state government regain the ability to
dictate a couple's birth control choices, for instance, or once again jail someone for fornication (the crime of sex
between unmarried individuals).
But in 2007 ... in Gonzales v. Carhart, five justices (Scalia, Kennedy, [Clarence] Thomas, Roberts, and Alito) for the
first time since Roe v. Wade, upheld a law that places limitations on a woman's decision to have a previability abortion
[an abortion prior to the time when a fetus could survive on its own outside the womb], a period of time which the
Court had previously said was exclusively within a woman's zone of privacy. It is worth noting that the only opinion
that even mentioned the term "privacy" was Justice [Ruth Bader] Ginsburg's impassioned dissent, which was joined by
Justices Stevens, Souter, and Breyer. And as Ginsburg pointed out, the challenge to the "undue restriction" imposed by
the Partial-Birth Abortion Ban was not an attempt "to vindicate some generalized notion of privacy; rather, [it centers]
on a woman's autonomy to determine her life's course, and thus to enjoy equal citizenship stature."
The fact that a majority of the Court was willing to endorse an infringement on a woman's right to privacy without even
discussing the concept or using the word does not bode well for the full preservation of the citizenship stature of
women in the future. Equally disturbing is the Court's willingness to defer to congressional findings that were
repeatedly and conclusively shown to be false or at best misleading. And as Justice Ginsburg noted, the language used
by the Court's majority opinion reveals a "hostility to the right Roe and Casey secured." There is good reason to worry
that even while the Roberts Court in this or future iterations will not go so far as to abandon the right to privacy
altogether, the Court will be increasingly receptive to greater and greater intrusions on a woman's right of privacy and
self-determination.
A Forgotten and Threatened Court
Since the 1970s, in the wake of [evangelical Christian] Francis Schaeffer's call to arms, the Religious Right has viewed
the composition of the Supreme Court as a political problem, and Christian conservatives have aggressively used the
tools of politics to try to solve the problem. If Americans who value such fundamental principles as separation of
church and state and personal privacy do not do the same, they may be stunned by the rapidity with which those values
are severely diminished or eliminated altogether.
More than anything else, the relative success of the Religious Right in reshaping the Supreme Court—and there is no
question that it has done so—has stemmed from the fact that all too many people take the decisions of the [Earl]
Warren Court for granted. More than any other single interest group, the Christian Right has educated its supporters on
the connection between political success and judicial change, and it has consistently and aggressively worked for the
appointment of federal judges and Supreme Court justices who share its philosophical opposition to the Warren Court's
rulings. The time has come for the nation's political left to remind voters that so many of the rights and privileges that
people enjoy today were established more than a generation ago by a Supreme Court that viewed the Constitution as a
tool for expanding and defending human dignity and independence.
If that education does not take place, much of what the remarkable Warren Court accomplished will be weakened or
wiped out by a social and political movement that more than anything else wants to baptize the United States as a
Christian nation and use the Bible as its primary source of legal authority. In the end, the goal of the Religious Right is
nothing less than to bring this country to its knees.
Source Citation:
Lane, Frederick S. "The U.S. Supreme Court Should Limit the Role of Religion in Public Life." The Court and the Cross. Boston, MA:
Beacon Press, 2008. Rpt. in The U.S. Supreme Court. Ed. Margaret Haerens. Detroit: Greenhaven Press, 2010. Opposing
Viewpoints. Gale Opposing Viewpoints In Context. Web. 9 Apr. 2012.
The U.S. Supreme Court Should Not Limit the Role of Religion in Public Life
"The Court has brought law and religion into opposition. The results are damaging to both fields."
Robert Bork is a conservative jurist, legal scholar, and author. In the following viewpoint, Bork asserts that the liberal
intelligentsia has succeeded in spreading antagonism toward religion to the U.S. judiciary. Bork traces the Supreme Court
decisions that allowed the Court to marginalize the role of religion in public life and he concludes that when law becomes
antagonistic to religion, it is undermining the greater moral good needed for civilized societies.
As you read, consider the following questions:
1. How does the author believe the Supreme Court decision in Flast v. Cohen illustrates the Court's attitude toward
religion?
2. According to the author, how does the Lemon test erase all hints of religion in government domains?
3. Why does the author believe Lee v. Weisman was decided wrongly?
The liberal intelligentsia is overwhelmingly secular and fearful of religion; hence its incessant harping on the dangers
posed by the "religious right." That ominous phrase is intended to suggest that Americans who are conservative and
religious are a threat to the Republic, for they are probably intending to establish a theocracy and to institute an
ecumenical version of the Inquisition. (Exasperated, a friend suggested that the press should begin referring to the
"pagan left.") It is certainly true, however, that the liberal intelligentsia's antagonism to religion is now a prominent
feature of American jurisprudence. The Court moved rather suddenly from tolerance of religion and religious
expression to fierce hostility.
Flast v. Cohen
Though not the first manifestation, one case illustrates the place of religion on the Court's scale of values. Major
philosophical shifts in the law sometimes occur through what may seem to laymen mere tinkerings with technical
doctrine. The judiciary's power to marginalize religion in public life was vastly increased through a change in the law
of what lawyers call "standing," which withholds the power to litigate from persons claiming only a generalized or
ideological interest in an issue. Some direct impact on the plaintiff, such as the loss of money or liberty, is required. But
in 1968, in Flast v. Cohen, the Supreme Court created the entirely novel rule that taxpayers can sue under the
establishment clause to prohibit federal expenditures aiding religious schools. The Court refused to allow similar suits
to be brought under other parts of the Constitution. Thus, every single provision of the Constitution, from Article I,
section 1, to the Twenty-Seventh Amendment, except one, is immune from taxpayer or citizen enforcement—and that
exception is the one used to attack public manifestations of religion.
Now we are treated to the preposterous spectacle of lawsuits by persons whose only complaint is that they are
"offended" by seeing a religious symbol, such as a creche or a menorah, on public property during a holiday season or
even by the sight of the Ten Commandments on a plaque on a high school wall. Apparently those who do not like
religion are exquisitely sensitive to the pain of being reminded of it, but the religious are assumed to have no right to
such feelings about the banishment of religion from the public arena.
The Lemon Test
The distance between the Court's position on religion and the Framers' and ratifiers' understanding of the First
Amendment was revealed, though not for the first time, in Lemon v. Kurtzman. The case created a three-part test that,
if applied consistently, would erase all hints of religion in any public context. In order to survive judicial scrutiny a
statute or practice must have a secular legislative purpose; its principal or primary effect must be one that neither
advances nor inhibits religion; and it must not foster an excessive government entanglement with religion. Few statutes
or governmental practices that brush anywhere in the vicinity of religion can pass all those tests.
Yet the Supreme Court narrowly approved Nebraska's employment of a chaplain for its legislature in Marsh v.
Chambers. Though the dissent correctly pointed out that the Lemon test was violated, as it was in each of its three
criteria, the majority relied on the fact that employing chaplains to open legislative sessions with prayers conformed to
historic precedent: Not only did the Continental Congress employ a chaplain but so did both houses of the first
Congress, which also proposed the First Amendment. That same Congress also provided paid chaplains for the Army
and the Navy. The Court often pays little attention to the historic meaning of the Constitution, but it would be
particularly egregious to hold that those who sent the amendment to the states for ratification intended to prohibit what
they had just done themselves. That Lemon fails when specific historical evidence is available necessarily means that,
in cases where specific history is not discoverable, Lemon destroys laws and practices that were meant to be allowable.
There is no lack of other evidence to show that no absolute barrier to any interaction between government and religion
was intended. From the beginning of the Republic, Congress called upon presidents to issue Thanksgiving Day
proclamations in the name of God. All the presidents complied, with the sole exception of Jefferson, who thought such
proclamations at odds with the principle of the establishment clause. Jefferson's tossed-off metaphor in a letter about
the "wall" between church and state has become the modern law, despite the fact that it was idiosyncratic and not at all
what Congress and the ratifying states understood themselves to be saying. The first Congress readopted the Northwest
Ordinance, initially passed by the Continental Congress, which stated that "religion, morality, and knowledge, being
necessary to good government and the happiness of mankind, schools and the means of learning shall forever be
encouraged." The ordinance required that specified amounts of land be set aside for churches.
Schools and Prayer
Yet in Lee v. Weisman, a five-justice majority held that a short, bland, non-sectarian prayer at a public school
commencement amounted to an establishment of religion. The Court saw government interference with religion in the
very fact that the school principal asked the rabbi to offer a nonsectarian prayer. Coercion of Deborah Weisman was
detected in the possibility that she might feel "peer pressure" to stand or at least to maintain respectful silence during
the prayer. She would, of course, have had no constitutional case had the commencement speaker read from The
Communist Manifesto or Mein Kampf while peer pressure and school authorities required her to maintain a respectful
silence. Only religion is beyond the judge-erected pale. In this way a long tradition across the entire nation of prayer at
public school graduation ceremonies came to an end.
One more example will suffice. In Santa Fe Independent School Dist. v. Doe, the school district arranged student
elections to determine whether invocations should be delivered before high school football games and, if so, to select
students to deliver them. The student could make a statement or read a nonsectarian, nonproselytizing prayer. The
Supreme Court majority held that "school sponsorship of a religious message is impermissible because it sends the
ancillary message to members of the audience who are nonadherents 'that they are outsiders, not full members of the
political community, and an accompanying message to adherents that they are insiders, favored members of the
political community.'" The nonadherent was put to "the choice between whether to attend these games or to risk facing
a personally offensive religious ritual." The incredibly thin skin of nonadherents is constitutional dogma. The Court
repeatedly referred to the elections as "majoritarian," as though that made them all the more threatening. The opinion is
remarkable for a tone that "bristles with hostility to all things religious in public life," Chief Justice William H.
Rehnquist noted in dissent. The majority opinion, it might be said, also bristles with hostility to majoritarian (i.e.,
democratic) processes. Still more remarkable, and sadly ironic, is the majority's statement that "one of the purposes
served by the Establishment Clause is to remove debate over this kind of issue from governmental supervision or
control." That is precisely what the decision does not do. The Court's pronounced antireligious animus [feeling of ill
will], displayed in decades of decisions, has itself produced angry debate that is under the control of the Supreme
Court, a branch of government.
At some point, parody is the only appropriate response. Nude dancing is entitled to considerable protection as
"expressive" behavior, according to Erie v. Pap's A.M. Theodore Olson, a leading Supreme Court advocate and [from
2001 to 2004] solicitor general of the United States, was prompted to suggest that high school students should dance
nude before football games because naked dancing is preferred to prayer as a form of expression. He might have noted,
of course, that nudity must not be achieved through the Dance of the Seven Veils because that has biblical
connotations!
Courts Have Gone Too Far
Lower courts have found a forbidden "establishment of religion" in the most innocuous practices: a high school football
team praying for an injury-free game; a local ordinance forbidding the sale of nonkosher foods as kosher; a small child
trying to read a child's version of a religious story before a class; a teacher reading the Bible silently during a reading
period (because students, who did not know what the teacher was reading, might, if they found out, be influenced by
his choice of reading material). The Court's establishment clause decisions show the same devotion to radical
individual autonomy as do the speech cases. The words "Congress shall make no law respecting an establishment of
religion" might have been read, as common understanding would suggest, merely to preclude government recognition
of an official church or to prohibit discriminatory aid to one or a few religions. No one reading the establishment clause
when it was ratified in 1791 could have anticipated the unhistorical sweep it would develop under the sway of modern
liberalism to produce, as [Catholic priest and founder of the neoconservative group Institute on Religion and Public
Life] Richard John Neuhaus put it, a "public square naked of religious symbol and substance."
The Court has brought law and religion into opposition. The results are damaging to both fields. All law rests upon
choices guided by moral assumptions and beliefs. There is no reason to prohibit any conduct, except on the
understanding that some moral good is thereby served. Though the proposition is certainly not undisputed, an excellent
case can be made that religion, though not the original source of moral understanding, is an indispensable
reinforcement of that understanding. It is surely significant that, as religious belief has declined, moral behavior has
worsened as well. When law becomes antagonistic to religion, it undermines its own main support.
Christopher Lasch [social critic and historian], who was by no means a conservative, asked: "What accounts for [our
society's] wholesale defection from the standards of personal conduct—civility, industry, self-restraint—that were once
considered indispensable to democracy?" He answered that a major reason is the "gradual decay of religion." Our
liberal elites, whose "attitude to religion," Lasch said, "ranges from indifference to active hostility," have succeeded in
removing religion from public recognition and debate. Indeed, it could be added that the Court has almost succeeded in
establishing a new religion: secular humanism. That is what the intelligentsia want, it is what they are getting, and we
may all be the worse for it.
Source Citation:
Bork, Robert. "The U.S. Supreme Court Should Not Limit the Role of Religion in Public Life." Coercing Virtue: The Worldwide Rule
of Judges. Washington, DC: The AEI Press, 2003. Rpt. in The U.S. Supreme Court. Ed. Margaret Haerens. Detroit: Greenhaven
Press, 2010. Opposing Viewpoints. Gale Opposing Viewpoints In Context. Web. 9 Apr. 2012.
Creationism
Creationism is the belief that a supernatural being created all living things in their present form. Its supporters are
primarily fundamentalists, members of certain Christian groups that emerged in the early 1900s in reaction to scientific
teachings. They accept the creation story in the Bible’s book of Genesis as the literal truth, maintaining that God
created the world and everything in it in six days. Creationists oppose evolution, the science-based theory of how many
species of living things have developed from some forms into others over millions of years.
Creationism has been at the center of legal and cultural conflicts in the United States for some time, primarily in public
education. At the heart of the conflict is the Establishment Clause in the First Amendment of the U.S. Constitution,
which prevents the government from making laws that establish, endorse, or give preference to any religion.
Creationism and the Courts
Creationists reject the view of many modern Jews and Christians that the biblical story of creation is a poetic or
metaphorical myth about the power of God in the universe. According to the biblical story, God created the world,
plants, animals, and humans in their present forms only a few thousand years ago. Many people in Europe and the
Americas held this view of creation until the 1800s. At that time, new discoveries in biology and geology—the study of
the earth’s history as recorded in rocks—led to a different view of the history of life and of the earth’s past.
Scientific theories measured Earth’s history in billions of years and saw biology as shaped by the natural force of
evolution. These views soon became widely accepted. Many religious people, including some Christians, saw no
conflict between the new scientific theory of the origins of life and their belief in God. They believed that God could
have influenced the evolutionary process or created the universe in which evolution takes place. Fundamentalist
Christians, however, felt their faith could not coexist with a version of geologic, biological, and human history that
contradicted the biblical story of creation. They sought laws banning the teaching of evolution in public schools, and
some states passed such laws.
In 1925 John Scopes, a science teacher in Tennessee, was convicted of having taught evolution despite a state law
banning it. The Scopes trial brought worldwide attention to the conflicting theories of life’s origins. Evolutionists
considered the trial a victory of sorts because it highlighted inconsistencies in the creationist position. However, the
Tennessee Supreme Court upheld the law against teaching evolution. This law remained in effect until 1967, although it
was not enforced after the Scopes trial.
Fundamentalists’ efforts to keep evolution from being taught in public schools brought the issue to the attention of the
U.S. Supreme Court. In 1968, in the case of Epperson v. Arkansas, the Court held that Arkansas’s laws against teaching
evolution were unconstitutional. The justices concluded that the laws violated the Establishment Clause of the First
Amendment. Since that time, fundamentalists have taken a different approach. Instead of trying to ban the teaching of
evolution in public schools, they have sought laws that allow or even require a version of creationism to be taught
wherever evolution is taught. This approach, however, has also been challenged in the courts.
Creation Science
Since the late 1960s the fundamentalist effort to include creationism in public education has focused on creation
science, which opposes evolution on scientific rather than religious grounds. Creation science, sometimes called
scientific creationism, was developed by fundamentalists who are also scientists. They maintain that scientific evidence
can be interpreted to support the biblical story of creation.
The key principles of creation science are that the universe, energy, and life were created suddenly and from nothing;
that the mechanisms of evolution as currently understood by scientists are insufficient to explain how living things
developed; that humans and apes are not descended from a common ancestor, as evolution indicates, but have separate
ancestries; that geology can be explained by catastrophes in the recent past, including a worldwide flood like Noah’s
flood in the Bible; and that the earth’s age is much less than the billions of years claimed by mainstream scientists.
Creation science presents a version of earth’s history that matches the biblical account.
Many contemporary scientists dismiss the claims of creation science as unfounded. However, beginning in 1981
several states passed "balanced treatment" laws requiring that if evolution was taught in public schools, creation
science must also be taught, so that students would be exposed to both views. Higher courts overturned these laws on the grounds
that creation science reflects the beliefs of a particular religion. In Edwards v. Aguillard (1987), for example, the
Supreme Court struck down a Louisiana law that required that creationism be taught if evolution was being taught in
public schools. The Court reasoned that teaching creation science would amount to endorsing religion—a violation of the First
Amendment’s Establishment Clause.
Some fundamentalists argue that presenting the theory of scientific creationism in schools would not necessarily violate
the Establishment Clause. Such presentations need not emphasize the religious origins of creationism nor insist upon its
truth. Schools could teach about the theory of creationism without teaching it as fact. Such arguments may serve as the
basis for future efforts to introduce creationism into the public school curriculum.
Questions concerning the origins and meaning of life are of compelling interest to both science and religion. The
conflict over creationism in public schools reflects the ongoing struggle to identify the difference between what can be
proven through science and what must be accepted on faith alone.
Source Citation:
"Creationism." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In
Context. Web. 10 Apr. 2012.
Intelligent Design Should Not Be Taught in Public School Science Classrooms
Alan J. Leshner, former director of the National Institute on Drug Abuse, is chief executive officer of the American
Association for the Advancement of Science and executive publisher of Science magazine.
Science classrooms are for the teaching of science. Inserting a belief system such as intelligent design into the science classroom
conflicts with the principles of science education. In science classes, students learn that scientists do not accept a theory based
on what they want to believe, but only after repeated observation and experiments provide evidence to support the theory.
Since intelligent design theory has yet to offer any testable hypotheses, it remains a matter of belief, not science, and as such
should not be taught in science classes.
Science classrooms are for the teaching of science, and intelligent design is not science-based. Science involves well-developed methods of inquiry for explaining the natural world in a systematic, testable fashion. The theory of evolution
is based on such rigorous sifting of evidence.
But advocates of intelligent design, while seeking to cloak themselves in the language of science, have yet to propose
testable hypotheses that can be subjected to the methods of experimental science. Intelligent design presupposes that an
intelligent, supernatural agent is responsible for biological structures and processes deemed to be "irreducibly
complex." But whether such an intelligent designer exists is a matter of belief or faith, not science.
A Place for Science and Religion
In science classrooms, students learn that scientists reject or accept theories according to how well they explain the
evidence rather than on what the researchers would like to believe. Students learn that a scientific theory, such as
evolution or gravity, is much more than just an educated guess. A theory is accepted only after repeated observation
and experiment.
Discussion of intelligent design may be appropriate in a class devoted to history, philosophy or social studies but not in
a biology class. Science teachers should not be asked to teach religious ideas or to balance the scientific theory of
evolution against an untestable alternative.
Many scientists are deeply religious and see scientific investigation and religious faith as complementary components
of a well-rounded life. There is a place for discussing the role of science and religion in American life, but the science
classroom should remain a place for teachers to nurture the spirit of curiosity and inquiry that has marked American
science since the days of Benjamin Franklin and Thomas Jefferson.
Our children deserve a first-class science education. Efforts to redefine science by inserting a particular belief into the
biology curriculum are in direct conflict with science standards recommended by both the National Academy of
Sciences and the [American Association for the Advancement of Science] AAAS.
Proponents of intelligent design are doing more than attack evolution. They also are undermining essential methods of
science by challenging its reliance on observable causes to explain the world around us.
America's students must be taught to distinguish between true science and a system of belief based on faith. At a time
when the United States faces increasing global competition in science and technology, public school science
classrooms should remain free of ideological interference and dedicated to the rigor that has made American science
the envy of the world.
Source Citation:
Leshner, Alan J. "Intelligent Design Should Not Be Taught in Public School Science Classrooms." Intelligent Design vs. Evolution.
Ed. Louise Gerdes. Detroit: Greenhaven Press, 2007. At Issue. Rpt. from "Should Public Schools 'Teach the Controversy'
Surrounding Evolution and Intelligent Design? No." CQ Researcher (29 July 2005). Gale Opposing Viewpoints In Context. Web. 10
Apr. 2012.
Outlawing Discussion of Intelligent Design in Schools Is a Violation
John H. Calvert, a lawyer, is managing director of the Intelligent Design Network Inc. Calvert, who counsels school
boards, school administrators, and science teachers regarding the teaching of origins science, is co-author of Intelligent
Design: The Scientific Alternative to Evolution.
Policies that endorse the teaching of material causes for life and forbid teaching alternative evidence of intelligent design violate
Constitutional neutrality by favoring one religion over another. The materialist, non-theistic world view that supports evolution
is as much a religion as world views that believe life was created by God or gods. To censor scientific evidence that supports a
theistic world view and that contradicts a non-theistic world view violates the principle of scientific objectivity. Indeed,
endorsing evolution alone is, essentially, state sponsorship of materialism.
The twisted decision of the court in Dover, PA on December 21 [2005] effectively establishes a state sponsored
ideology that is fundamental to non-theistic religions and religious beliefs. By outlawing discussion of the evidence of
design and the inference of design that arises from observation and analysis, the court has effectively caused the state to
endorse materialism and the various religions it supports. Thus the court actually inserted a religious bias into science,
while purporting to remove one.
The incorrect assumption implicit in the decision is that there is only one kind of "religion"—the kind that holds that
life and the world were created by a God or gods. In fact religion includes the other kinds, those that embrace material
causes for life rather than any God that might intervene in the natural world. These include Atheism, Secular
Humanism, Buddhism, Agnosticism, etc. The court's second error was to ignore the obvious: any explanation of origins
will unavoidably favor one kind of religion over another.
A Key Judicial Mistake
For Judge [John] Jones "religion" seems to be a term that describes only belief in a God. Although the judge was quick
to note the theistic friendly implications of an intelligent cause for life, his opinion omits any discussion of the religious
implications of materialism, the opposite of the idea that life may be the product of an intelligent, rather than a material
cause. Materialism is the root of evolution's core claim that life is not designed because it claims to be adequately
explained via material causes. Instead he arrives at the absurd conclusion that evolutionary theory "in no way conflicts
with, nor does it deny, the existence of a divine creator." This key mistake of the court was caught by an ardent
opponent of ID and philosopher of science, Daniel Dennett, who said after the decision:
I must say that I find that claim to be disingenuous. The theory of evolution demolishes the best reason anyone has ever
suggested for believing in a divine creator. This does not demonstrate that there is no divine creator, of course, but only shows
that if there is one, it (He?) needn't have bothered to create anything, since natural selection would have taken care of all that....
This mistake is crucial to the outcome of the case. By ignoring the major competing religious implications of
evolutionary theory and materialism/naturalism he has effectively caused the state to prefer one kind of religion over
another, the very antithesis of constitutional neutrality.
The court also failed to discuss the fact that the inference of design derives from an observation and logical and rational
analysis of the data, not from a religious text. Nor does he discuss or ask, from whence does a counter-intuitive
inference of "no-design" arise? From the data or from a philosophy? He makes it clear that it derives from a
philosophy: "methodological naturalism." Which hypothesis is truly inferential and scientific? Which idea arises from
the data and which from philosophy?
Censoring Evidence
Evolution, and methodological naturalism which effectively shields it from scientific criticism, is key to all of the
major non-theistic religions and belief systems. The Dover opinion censors scientific data that is friendly to one set of
religious beliefs in favor of data that supports competing and antagonistic belief systems. For the court, it is OK for the
state to put into the minds of impressionable students evidence that promotes a materialistic and non-theistic world
view while censoring contradictory evidence that supports a theistic one. How can teaching only one side of this
scientific controversy be secular, neutral and non-ideological?
A ruling that effectively insulates evolution from scientific criticism actually converts it into an ideology. It takes the
theory out of the realm of science and makes it a religion in and of itself. Unfortunately, the court fails to recognize that
the only way for the state to deal with the unavoidable religious problem entailed by any discussion of "Where do we
come from?" is to objectively provide students with relevant scientific information on both sides of that controversy.
As soon as the state takes sides in that discussion it steps over the wall.
A Lack of Objectivity
On December 21, 2005, the court in Dover caused the state to take sides in that religiously charged discussion. Four
days before Christmas, the court in Dover instituted state sponsorship of materialism.
The 139-page opinion shows a remarkable lack of understanding of other issues critical to the decision. Rather than
seek a true understanding of evolution, intelligent design, the scientific method and methodological naturalism, the
court accepted hook, line and sinker the propaganda of true "Fundamentalists," who are as passionate about their
"Fundamentalism" as those of the Dover Board. The court ignored key evidence that challenges evolution's claim that
life is not designed. It called a strike when the ball hit the dirt six feet in front of the batter.
True institutional scientific objectivity is the only antidote to this religious problem. There is no issue in science that
cries out more for competing hypotheses than highly subjective "historical narratives" about our origins. From where
we come is inseparable from where we go. So long as only one answer to this question is allowed the story will
necessarily be religious. We need the competition to make the explanations truly scientific.
The decision in Dover took evolution out of science and made it a religion. I have confidence that this truth will
eventually emerge and be corrected.
Source Citation:
Calvert, John H. "Outlawing Discussion of Intelligent Design in Schools Is a Violation." Intelligent Design vs. Evolution. Ed. Louise
Gerdes. Detroit: Greenhaven Press, 2007. At Issue. Rpt. from "Dover Court Establishes State Materialism." The Watchman 3
(Feb. 2006). Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
DNA Technology and Crime
Deoxyribonucleic acid (DNA) has become the cornerstone of police investigations involving close-contact crimes such
as murder and rape. Technicians have created ways to compare DNA between samples to determine if it came from the
same person; this is known as DNA profiling. In addition, law enforcement personnel have amassed a large and ever-growing database of DNA profiles to compare against samples gathered from crime scenes old and new. This has led to
many older cases being solved at long last, and has also resulted in the release of hundreds of convicted prisoners
whose guilt was called into question by new DNA evidence. However, DNA profiling remains as fallible as any
technology that depends upon human participation, and some notable failures in crime lab procedure have highlighted
the risks of placing too much faith in DNA analysis when seeking justice.
Where Biology and Criminology Meet
DNA was first studied in depth by biologists James Watson and Francis Crick in the early 1950s. They later won a
Nobel Prize for their work, which identified the double-helix structure of DNA, the molecule that carries genes—the blueprints for all living things. At
around the same time, forensic science was becoming an important tool for police and prosecutors to identify criminals.
The guiding principle behind forensic science was first articulated by French criminologist Edmond Locard, founder of
the first modern crime lab in the world. This idea, known as the Locard exchange principle, states that when a criminal
interacts with a crime scene or victim, the criminal will inevitably leave behind evidence of his or her presence, and
will unknowingly carry away evidence from the crime scene.
In Locard’s time, evidence linking a criminal to a crime could involve fingerprints, fibers, or unique types of soil. With
advances in DNA technology, investigators could match specific individuals to a crime scene just from the presence of
tiny amounts of blood or other bodily material. DNA profiling does not involve comparing the entire genetic sequence,
or genome, of two samples; instead, criminologists compare several different segments of DNA from different parts of
the genome. As a crude example, imagine comparing two books by opening each one to the same page number and
looking at the first word that appears.
When comparing DNA samples from the same person, the segments will match. When comparing the samples of two
different people, the odds of every segment matching are usually very small—often less than one in a billion. Some
DNA tests are more precise than others, and smaller samples of DNA tend to result in less compelling numbers. There
is one important exception when discussing the accuracy of DNA identification: identical twins, since they share the
same exact genetic makeup, cannot be distinguished by DNA profiling. (However, they can be distinguished by
fingerprints when available.)
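The arithmetic behind figures like "less than one in a billion" can be sketched in a few lines of code. The Python example below is purely illustrative and assumes made-up locus names, alleles, and population frequencies: two profiles are compared segment by segment, and the per-segment match frequencies are multiplied together (the so-called product rule) to estimate how rare a coincidental match would be.

```python
# Illustrative sketch of locus-by-locus DNA profile comparison.
# All locus names, allele values, and population frequencies below are
# invented for the example; real forensic profiling uses validated STR
# panels and population statistics.

def profiles_match(scene: dict, suspect: dict) -> bool:
    """Profiles 'match' only if every locus present in both has the same
    pair of alleles (order within the pair does not matter)."""
    return all(sorted(scene[locus]) == sorted(suspect[locus])
               for locus in scene if locus in suspect)

def random_match_probability(locus_freqs: dict) -> float:
    """Multiply per-locus genotype frequencies (the 'product rule') to
    estimate how often an unrelated person would match at every locus."""
    probability = 1.0
    for freq in locus_freqs.values():
        probability *= freq
    return probability

# Hypothetical crime-scene and suspect profiles at three loci:
# each locus maps to the pair of repeat counts (alleles) observed there.
scene_profile   = {"LOCUS_A": (12, 14), "LOCUS_B": (9, 9), "LOCUS_C": (17, 21)}
suspect_profile = {"LOCUS_A": (14, 12), "LOCUS_B": (9, 9), "LOCUS_C": (17, 21)}

# Hypothetical frequency of each observed genotype in the general population.
per_locus_freq = {"LOCUS_A": 0.05, "LOCUS_B": 0.08, "LOCUS_C": 0.03}

if profiles_match(scene_profile, suspect_profile):
    odds = 1 / random_match_probability(per_locus_freq)
    print(f"Profiles match; roughly 1 in {odds:,.0f} unrelated people "
          f"would also match at these loci.")
```

With only three loci the resulting odds are modest; comparing more segments multiplies in more frequencies, which is how full forensic panels reach the far smaller probabilities described above.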
The Rise of DNA Profiling
The first case in which DNA profiling was used to locate and convict a criminal occurred in England in 1987. After two
girls in Leicestershire were raped and murdered, police went to a local pioneer in DNA profiling, Alec Jeffreys, for
help. A campaign was launched for local men to voluntarily provide DNA samples to investigators. Even though the
campaign resulted in thousands of samples, none of them matched. However, one man later confessed to providing a
sample for his friend, who paid him a large sum of money to do so. The friend was Colin Pitchfork, and after
investigators obtained a real sample of his DNA, they concluded that he was the killer. Pitchfork later confessed to both
murders.
The Pitchfork murder case was also notable for being the first case where DNA profiling was used to prove that a
suspect was not guilty of a crime. Before authorities knew about Pitchfork, a young man named Richard Buckland had
already confessed to the second murder. When Jeffreys tested his DNA, however, Buckland was ruled out as the
murderer.
As DNA techniques have grown more sophisticated, genetic profiling has been used to revisit older cases in which
genetic material was collected but could not be properly analyzed at the time. Although DNA degrades over time, some
samples—combined with gene segment replication techniques that increase the sample amount—have enabled
investigators to solve crimes that stymied their predecessors for decades. For example, in 2005, a sixty-two-year-old
nurse named Gary Leiterman was arrested and convicted for the 1969 murder of a Michigan law-school student named
Jane Mixer after his DNA was found in several spots left at the crime scene.
Advancements in DNA profiling have also proven effective in overturning convictions against alleged criminals whose
DNA does not match old samples related to their cases. Kirk Bloodsworth was convicted of raping and murdering a
nine-year-old girl in 1985. After eight years in prison—two of which were spent on death row—DNA analysis of the
victim’s underwear showed that Bloodsworth’s DNA did not match the samples believed to belong to the murderer.
Years later, the real killer was identified through a match in the DNA database system.
In 1992, lawyers Barry Scheck and Peter Neufeld created the Innocence Project, a legal support organization aimed at
overturning wrongful convictions through DNA profiling. Since then, more than two hundred criminal convictions
have been overturned in the United States alone. In seventeen of these cases, the wrongly convicted had been sentenced
to death. In more than one-third of the cases with overturned convictions, DNA profiling eventually led to the
conviction of the real perpetrator.
Sound Science, Unsound Practices
When DNA technology was first introduced, jurors were often skeptical about how reliable genetic profiling could be.
Now, the growing acceptance of DNA profiling is evident in popular culture. Shows such as CSI have persuaded many
citizens that DNA analysis and other forensic techniques result in irrefutable evidence of a person’s guilt or innocence.
However, DNA profiling remains a process at the mercy of human operators, and overreliance on its validity may lead
to lapses in justice much like those that have been revealed thanks to DNA technology.
Scheck, one of the co-founders of the Innocence Project, is also one of the best-known critics of typical crime lab
procedures. Genetic material requires special care in collection and handling; as in Locard’s exchange principle, two
samples containing DNA—for example, an alleged killer’s bloody glove and a victim’s blood-soaked dress—must be
kept separate to avoid a possible transfer of material between the samples. This sort of transfer is known as cross-contamination, and it can call into question any results the crime lab reaches.
As a defense attorney in the highly publicized 1995 murder trial of Hall of Fame football player O. J. Simpson, Scheck
argued that the criminologists and crime lab technicians involved in the case did not properly handle the evidence they
collected. In particular, a vial of Simpson’s blood was carried around in an assistant’s lab coat for a day before being
entered into evidence, and technicians could not later explain why there appeared to be a portion of the blood
unaccounted for. Simpson was found not guilty of the murders of his former wife, Nicole Brown Simpson, and her friend
Ronald Goldman; some jurors have suggested that this was due to doubts about police and crime lab procedures.
In 2002, the Houston Police Department’s crime lab came under fire after investigative journalists discovered
numerous lapses in procedure involving DNA evidence. In January 2003, the police department shut down all genetic
testing at the lab and began retesting evidence from past cases. When investigators began searching through the lab,
they discovered almost three hundred boxes of evidence that were previously believed to be lost, including some body
parts. Independent retesting of evidence later showed that more than one-third of the lab’s findings were questionable.
Even when DNA profiling is done correctly, it is still up to investigators to determine the facts that surround the
evidence. In Europe, investigators spent more than fifteen years trying to locate a mysterious woman whose DNA
appeared at crime scenes throughout Germany, Austria, and France. The woman was linked to several different
murders and robberies, many of which seemed completely unrelated. In 2009, investigators concluded that the
mysterious DNA came not from the crime scenes but from cotton swabs used to collect samples from the crime scenes.
Although the swabs were sterile, they were not certified for collecting genetic material.
In cases of contamination, it can be impossible to separate the real evidence from false leads. In the 2005 case against Gary Leiterman, mentioned previously, one drop of blood found on the victim's body was matched to a
convicted felon named John Ruelas. Prosecutors focused on the DNA evidence linking Leiterman to the murder and
ignored the evidence implicating Ruelas, for one simple reason: Ruelas was only four years old at the time of the
murder. Prosecutors cannot explain the presence of Ruelas’s blood on the victim, and Leiterman’s defenders have
argued that he should be granted a new trial in light of this evidence.
Source Citation:
"DNA Technology and Crime." Opposing Viewpoints Online Collection. Gale, Cengage Learning, 2010. Gale
Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Law Enforcement DNA Databanks Can Protect Americans
"By collecting DNA from arrestees, law enforcement can identify criminals earlier and create more efficient investigation
practices."
In the following viewpoint, DNA Saves promotes the nationwide legalization of DNA testing of arrestees charged with felony
crimes. The organization declares that DNA records are the most powerful tool in forensic investigations, and DNA databanks are
crucial in solving violent crimes, capturing criminals, and exonerating the innocent. Medical privacy would not be threatened,
states DNA Saves, as law enforcement has no use for genetic indicators or health information. The organization was
formed in New Mexico by Dave and Jayann Sepich in 2008. The Sepichs worked in their state to pass "Katie's Law," which is
named after their daughter who was murdered in 2003.
As you read, consider the following questions:
1. How does DNA Saves describe the steps of DNA collecting and analyzing?
2. Why is forensic analysis of DNA not a privacy concern, in the view of DNA Saves?
3. What did the Chicago Study reveal, according to DNA Saves?
DNA is the most powerful tool available for identification in forensic investigations. Because of its ability to link
physical evidence found at a crime scene to a single person, it is often referred to as a "digital fingerprint." This method
is so precise that the chance of a coincidental match can be as low as one in a billion. And, unlike fingerprints, which can only be
found if a suspect touches something, DNA exists in every cell of the human body, from hair and blood to skin and
tears, and can be shed or deposited during the commission of a crime. As a result, it is often the only means of accurate
identification.
DNA databases make it possible for law enforcement crime laboratories to electronically search and compare collected
DNA profiles to crime scene evidence. In the United States, the Combined DNA Index System (CODIS) links all local,
state, and national databases and contains more than 5 million records. Currently, legislation exists on the federal level
and in 21 states, enabling investigators to collect DNA upon arrest for certain felony crimes.
The process for collecting and analyzing DNA is minimally invasive and only takes a few steps:
1. Lightly swab the inside of an arrestee's cheek.
2. Analyze the sample to obtain a unique identifier containing only the 13 to 15 key markers required to confirm identity.
3. Enter the profile into CODIS, where it can be compared against forensic evidence from crime scenes across the country (a rough sketch of this matching step follows below).
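As a rough illustration of that final comparison step, the Python sketch below stores a profile as nothing more than pairs of numbers at a handful of named loci and searches a small database of crime scene profiles for an exact match. The allele values and case identifiers are invented for illustration, and the sketch does not reflect how CODIS is actually implemented.

# Hypothetical example: a profile kept as pairs of numbers at named loci,
# compared against stored crime scene profiles. All values are invented.

arrestee_profile = {
    "D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24),
    "D8S1179": (12, 13), "D21S11": (29, 30),  # real profiles use 13 to 15 loci
}

crime_scene_profiles = {
    "case-0412": {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24),
                  "D8S1179": (12, 13), "D21S11": (29, 30)},
    "case-0978": {"D3S1358": (14, 16), "vWA": (17, 17), "FGA": (20, 22),
                  "D8S1179": (10, 13), "D21S11": (28, 31)},
}

def find_hits(profile, database):
    """Return case IDs whose stored markers all match the submitted profile."""
    return [case for case, markers in database.items()
            if all(profile.get(locus) == alleles for locus, alleles in markers.items())]

print(find_hits(arrestee_profile, crime_scene_profiles))  # ['case-0412']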
Not a Privacy Concern
While some have raised concerns about the privacy rights of persons accused of serious crimes, DNA testing of arrestees can actually protect civil liberties. A forensic DNA profile cannot reveal medical information or genetic indicators, and crime laboratories and investigators have no need for such predictive health information, as it is of no value in a criminal investigation. Forensic analysts analyze only the 13 to 15 key
markers that make identification possible. And, unlike fingerprints, the DNA profile is stored in CODIS as a numeric
file, with absolutely no access to personal information (not even the person's name) or criminal background. Crime
scene evidence matching this profile will lead police to the right suspect, regardless of race or economic status, thereby
reducing the incidence of racial profiling and other objectionable means of developing suspects.
Why Pass This Law?
States have been collecting DNA from convicted felons for almost two decades, and it's helped solve thousands of
crimes. But we can do more. By passing arrestee DNA legislation, law enforcement officials can catch criminals sooner, save more lives, and use DNA to its full potential. Collected at the same time as fingerprints, a DNA sample requires only a simple cheek swab upon arrest. That's why Congress and a few pioneering states have already passed laws
for DNA Arrestee Testing.
Arrestee testing will help your state to:
- Catch repeat offenders sooner
- Prevent violent crimes
- Exonerate the innocent
- Protect civil liberties
- Reduce criminal justice costs
The Facts:
- Since 1974, more than 90 percent of all state prisoners have been repeat offenders.
- 70% of America's crime is committed by 6% of its criminals.
- With DNA arrestee testing on the books since 2003, Virginia has received over 5,000 hits on its database, with nearly 500 of these matches directly attributable to arrestees.
- 1 out of every 6 American women has been the victim of an attempted or completed rape.
- To date, post-conviction DNA testing has led to the exoneration of more than 200 wrongfully convicted individuals in the United States, and many of these individuals were not fully exonerated until after a DNA match was made on the database to another offender.
- 21 states have enacted legislation to require DNA from certain felony arrestees, and even more states are considering such laws.
The Chicago Study
In 2005, the City of Chicago demonstrated the prevalence of repeat crime and the importance of arrestee testing. By
taking a closer look at the criminal history of eight convicted felons, the Chicago Study uncovered startling results—60
violent crimes could have been prevented if only DNA had been collected for a prior felony arrest. In each case, the
offender had committed previously undetected violent crimes that investigators could have identified immediately
through a DNA match.
Unfortunately, DNA was not required at arrest. The eight offenders in Chicago accumulated a total of 21 felony arrests
before law enforcement officials were finally able to convict them of violent crimes.
With DNA arrestee testing, the following crimes could have been prevented:
- 22 murders—victims ranging from 24 to 44 years of age
- 30 rapes—victims ranging from 15 to 65 years of age
- Attempted rapes
- Aggravated kidnapping
Protecting Civil Liberties
Privacy Rights Intact
Although DNA database statutes have often been challenged on constitutional grounds, courts throughout the country have overwhelmingly upheld them. These decisions and their supporting rationale have been clear that the processes, procedures, and benefits of collecting DNA from those arrested for serious crimes are as constitutionally sound as the collection of fingerprints.
The Fourth Amendment to the US Constitution protects individuals from searches and seizures which are
"unreasonable." For years, the Courts, including the US Supreme Court, have found that, when a suspect is arrested
with probable cause, his identification becomes a matter of legitimate state interest. The rationale behind the decision is
the fact that the identification of suspects is "relevant not only to solving the crime for which the suspect is arrested, but
also for maintaining a permanent record to solve other past and future crimes." This becomes particularly clear when
we consider the universal nature of "booking" procedures that are followed for every suspect arrested for a felony,
whether or not the proof of a particular suspect's crime will involve fingerprint evidence or an eyewitness identification
for which mug shots could be used.
Treating the taking of DNA samples at arrest just like fingerprinting at arrest has been widely accepted. Consider these additional examples:
- The Second Circuit [Court] held "[t]he collection and maintenance of DNA information, while effected through relatively more intrusive procedures such as blood draws or buccal cheek swabs, in our view plays the same role as fingerprinting."
- The Third Circuit held "[t]he governmental justification for [DNA] identification relies on no argument different in kind from that traditionally advanced for taking fingerprints and photographs, but with additional force because of the potentially greater precision of DNA sampling and matching methods."
- The Ninth Circuit held "[t]hat the gathering of DNA information requires the drawing of blood rather than inking and rolling a person's fingertips does not elevate the intrusion upon the plaintiffs' Fourth Amendment interests to a level beyond minimal."
- The State of Maryland held "The purpose [of the DNA profile] is akin to that of a fingerprint."
- New Jersey held, "We harbor no doubt that the taking of a buccal cheek swab is a very minor physical intrusion upon the person.... [T]hat intrusion is no more intrusive than the fingerprint procedure and the taking of one's photograph that a person must already undergo as part of the normal arrest process."
- Oregon held, "Because using a swab to take a DNA sample from the mucous membrane of an arrestee's cheek is akin to the fingerprinting of a person in custody, we conclude that the seizure of defendant's DNA did not constitute an unreasonable seizure under the constitution."
- The Virginia State Supreme Court held "the taking of [the suspect's] DNA sample upon arrest in Stafford County pursuant to Code § 19.2-310.2:1 is analogous to the taking of a suspect's fingerprints upon arrest and was not an unlawful search under the Fourth Amendment."
Reducing Costs
By collecting DNA from arrestees, law enforcement can identify criminals earlier and create more efficient
investigation practices. Solving crimes sooner reduces costs associated with misdirected investigations. With a DNA
match, law enforcement can quickly home in on the right suspect, saving untold man-hours and resources used in
traditional investigations. These cost savings can then be redirected to other crimes where DNA is not available and
traditional investigation techniques are the only means of solving the crime. With a DNA match, persons wrongfully
accused of committing a crime can be freed sooner. Consider the case of Robert Gonzalez who provided a false
confession and was in danger of a wrongful conviction until a match was made on the DNA database—a match to a
DNA sample collected under Katie's Law [a 2006 bill that requires DNA sampling in New Mexico, named for a crime
victim] from a felony arrestee. With a DNA match, more crimes can be prevented, such as those in the Chicago Study,
or the cases from California, Maryland, Texas, and Washington State. How do we put a price on the cost of saving a
life or preventing a rape? What is the cost of knowing we could have done something to prevent these crimes, and
chose not to?
Source Citation:
DNA Saves. "Law Enforcement DNA Databanks Can Protect Americans." Privacy. Roman Espejo. Detroit: Greenhaven Press,
2010. Opposing Viewpoints. Rpt. from "What Is DNA Testing? and Why Pass Law?" 2010. Gale Opposing Viewpoints In Context.
Web. 10 Apr. 2012.
Law Enforcement DNA Databanks Can Threaten Medical Privacy
"Compelling persons to provide their DNA to law enforcement agencies raises concerns about ... individual and familial privacy."
In the following viewpoint, Karen J. Maschke argues that DNA databanks for law enforcement and criminal investigations can
imperil privacy. Maschke claims that advances in technology will make it possible to determine an individual's ancestry, genetic
conditions, and other personal medical information from forensic DNA profiles. Additionally, she posits that innocent people
may be harassed to submit DNA samples and placed under "genetic surveillance" without probable cause. The use of DNA
databanks for genetic research to determine criminality, says Maschke, is also a concern. Maschke is a research scholar at the
Hastings Center and editor of IRB: Ethics & Human Research.
As you read, consider the following questions:
1. How did state lawmakers expand the categories of groups required to submit DNA in 2007, as told by the author?
2. In the author's view, how are the procedures changing for the release of identifying information based on partial DNA
matches?
3. What are "backdoor" methods of obtaining DNA, as described by the author?
The European Court of Human Rights in Strasbourg [France] is expected to decide in 2008 whether the United
Kingdom can permanently keep the DNA samples and profiles of criminal suspects who were never convicted of a
crime. Since 2004, anyone aged 10 years or over arrested in England or Wales for a "recordable offense" must provide
a DNA sample to law enforcement officials. Certain information from their DNA—known as the DNA profile—is then
stored electronically in the National DNA Database. Containing 4.5 million DNA profiles, it was until recently the
world's largest DNA databank. Today, that distinction goes to the United States, where state and federal law
enforcement databases combined contain about 5.6 million DNA profiles. Although the overwhelming majority of the
DNA profiles in the United States are from convicted felons, a growing number are from parolees, probationers, and
people under arrest.
Like a fingerprint, DNA is a type of bioinformation that can be used to identify people and is therefore a valuable tool
in attempts to identify criminal offenders. Yet compelling persons to provide their DNA to law enforcement agencies
raises concerns about informed consent, individual and familial privacy, the use of genetic information in the criminal
justice system, and the retention and use of DNA profiles and samples.
Collecting DNA
In 1988 Colorado became the first state to require some criminals—in this case sex offenders—to provide a DNA
sample to law enforcement officials. Two years later Virginia enacted a law requiring all convicted felons to provide
DNA. States initially collected DNA samples only from persons convicted of certain sex offenses and serious violent
crimes under the assumption that these individuals were likely to be repeat offenders. It was also assumed that DNA
might be the only biological evidence obtained at a crime scene.
Since then, states have expanded the categories of persons required to provide a DNA sample to law enforcement
officials. Today, all states collect DNA from sex offenders, and 44 states collect it from all felony offenders. Kentucky
is one of 31 states that collect DNA from juveniles convicted of certain crimes. The state's court of appeals recently
upheld a portion of the law that requires collecting DNA from juveniles convicted of felony sex offenses, though it
ruled as unconstitutional the portion that allowed for DNA collection from juveniles convicted of burglary. Over a third
of the states also permit DNA collection from individuals convicted of certain misdemeanors. For instance, New Jersey
permits DNA collection for misdemeanor offenses with a prison sentence of six months or more. Several states also
collect DNA samples from some probationers and parolees, and 13 states have laws that compel persons to provide a
DNA sample at the time of arrest. California, Kansas, and North Dakota have the broadest arrestee laws; they require a
DNA sample from everyone arrested for any felony offense. Arrestee laws with a narrower scope include New
Mexico's, which authorizes DNA collection from persons arrested only for specific violent felonies.
State lawmakers continue to introduce bills to expand the categories of persons required to provide their DNA to law
enforcement officials. In 2007 alone, 91 DNA expansion bills were introduced in 36 states. Almost half of the bills
were aimed at people arrested for certain offenses. A total of 15 bills were passed in 12 states, though an arrestee bill in
South Carolina never became law because two separate House votes failed to override the governor's veto. Of the 14
bills that became law, four authorize law enforcement officials to obtain a DNA sample from persons arrested for
various felony offenses.
Congress authorized the collection of DNA samples for certain federal offenders under the DNA Analysis Backlog
Elimination Act of 2000. The Act requires individuals in federal custody and those convicted of certain violent crimes
who were probationers, parolees, or on supervised release to provide a DNA sample. The 2001 U.S.A. Patriot Act
added additional categories of qualifying federal offenses, and the Justice for All Act of 2004 further expanded the
definition of qualifying offenders to include all persons convicted of felonies under federal law.
Two recent federal actions again expanded DNA sample categories. When Congress renewed the Violence Against
Women Act in 2006, it included an amendment that authorizes federal officials to collect DNA samples from
individuals who are arrested and from non-United States persons detained under U.S. authority. (Non-United States
persons are neither U.S. citizens nor lawful permanent resident aliens.) In April 2008 the Department of Justice
published a proposed rule directing certain U.S. law enforcement agencies to collect DNA samples from individuals
who are arrested, facing charges, or convicted, and from non-United States persons who are detained under U.S.
authority.
State and federal courts have upheld the constitutionality of some DNA statutes—including the DNA Analysis Backlog
Elimination Act of 2000 and the Justice for All Act of 2004—on the grounds that the laws do not violate privacy rights
or federal constitutional protections against unreasonable searches and seizures. However, in late 2006, the Minnesota
court of appeals invalidated a portion of that state's DNA arrestee law. The court ruled that the privacy interest of a
person charged with, but not convicted of, an offense outweighs the state's interest in that person's DNA. To compel
someone who has not been convicted of a crime to provide a DNA sample, the court ruled that law enforcement
officials must first obtain a warrant based on probable cause. To date, the U.S. Supreme Court has not ruled on the
constitutionality of DNA collection laws.
The DNA Profile
State and federal forensic laboratories analyze DNA samples to obtain DNA profiles of people, and these profiles are
stored in various electronic databases. The National DNA Index System (NDIS) contains the DNA profiles submitted
by state and federal laboratories. The FBI's [Federal Bureau of Investigation's] software program CODIS (Combined
DNA Index System) links the profiles in these databases. For there to be a CODIS "hit," two DNA profiles must be
perfect matches on 13 regions, or loci, of the individuals' DNA.
There is a growing dispute about whether the CODIS core loci constitute "junk DNA"—segments of genetic code that
provide no information about a person's physical characteristics (phenotype) or medical conditions. Several
commentators raise concerns that advances in genetic testing technologies might eventually make it possible to obtain
statistical approximations of an individual's ancestry, addictive behaviors, sexual orientation, temperament, and other
personal information from the genetic markers that make up the CODIS core loci. For instance, several attempts have
been made to construct phenotypic profiles of criminal suspects using a new method of DNA analysis that purports to
provide an inference of genetic heritage or ancestry. Obtaining such sensitive information from DNA samples collected
without a person's consent raises individual and familial privacy issues, especially if samples collected for law
enforcement purposes are released to others for research purposes. Another privacy issue is the possibility that new
technologies will be able to extract medical information from DNA profiles collected for law enforcement purposes.
A Hypothetical Scenario
New Orleans [Louisiana] police collect a DNA sample from Anthony, a 16-year-old high school student arrested for
allegedly assaulting his schoolteacher. However, the prosecutor does not bring charges against Anthony because the
police investigation revealed that he was helping the teacher defend herself against an attack by another student. Even
though Anthony was never charged with a crime, his DNA profile remains in the state's DNA database, and his DNA
sample stays in storage because state law permits samples of arrestees to be retained indefinitely.
A year later, police obtain DNA samples from a homicide scene and get a partial match to Anthony's DNA profile; the crime scene profile shares seven of the genetic markers of Anthony's profile in the offender DNA database. The partial match thus suggests that the crime scene DNA came from a male genetic relative. Using partial matches to support police
investigations is known as "familial searching." This practice was used 115 times in the United Kingdom in 2006.
Whether law enforcement agencies in other countries use this practice—and if they do, to what extent—is unknown.
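A partial match of the kind described in this scenario can be pictured as a simple count of shared markers. The toy Python sketch below, with invented loci, allele values, and thresholds, shows the idea of flagging a database profile that shares many but not all markers with crime scene DNA; it is an illustration only, not a description of any agency's actual software.

# Toy illustration of a partial match: count the loci at which two profiles
# agree and flag profiles that share many, but not all, markers. Loci names,
# values, and thresholds are invented for illustration only.

def shared_loci(profile_a, profile_b):
    """Count loci where both profiles record the same allele pair."""
    return sum(1 for locus, alleles in profile_a.items()
               if profile_b.get(locus) == alleles)

def partial_matches(scene_profile, database, min_shared, total_loci):
    """Flag database entries sharing at least min_shared markers
    without matching on all total_loci markers."""
    hits = []
    for name, profile in database.items():
        n = shared_loci(scene_profile, profile)
        if min_shared <= n < total_loci:
            hits.append((name, n))
    return hits

# Three-marker toy profiles; a real comparison would use 13 or more loci.
scene = {"L1": (10, 12), "L2": (7, 9), "L3": (14, 15)}
database = {"Anthony": {"L1": (10, 12), "L2": (7, 9), "L3": (13, 16)}}
print(partial_matches(scene, database, min_shared=2, total_loci=3))  # [('Anthony', 2)]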
Until recently, the FBI prohibited the release of identifying information attached to a DNA profile unless there was a
complete CODIS match. However, in the summer of 2006, the agency issued an interim plan to release identifiable
information from NDIS-participating laboratories when CODIS revealed a partial match. Some state crime labs have
used partial matches, and at least two states (New York and Massachusetts) have laws permitting their databases to
generate partial match profiles. In March 2008 the FBI held a symposium to address the privacy implications of
familial searching. Representatives from law enforcement agencies and prosecutors' offices argued that the practice
should be used because it provides investigative leads that can result in arrests and convictions. Civil liberty and
privacy advocates raised concerns about innocent people being put under what has been called "genetic surveillance"
solely because they have a genetic relative whose DNA profile is in a law enforcement database.
After getting a partial match, New Orleans police ask Anthony's male relatives to give them a DNA sample voluntarily.
Asking a certain population—such as all men in a geographic area—to provide a DNA sample to law enforcement
officials is known as a DNA dragnet. Since 1987, at least 20 DNA dragnets have been conducted in the United States.
Critics charge that in some communities the police have harassed individuals who refused to participate, and in one
community the police obtained a warrant to collect DNA from a man after he declined to participate in a dragnet. In
2006, a federal appeals court ruled that the police violated the man's constitutional rights because they did not have
probable cause to obtain a warrant to seize his DNA.
Anthony's male relatives refuse to give police DNA samples, and the local judge refuses to issue a warrant compelling
them to do so. As a consequence, the police follow the male relatives in the hope of getting discarded items like
cigarette butts, coffee cups, and gum wrappers from which they hope to obtain a DNA sample. This "backdoor" method
of collecting DNA raises questions about whether people under surveillance have a constitutional expectation of
privacy concerning their abandoned DNA, which would mean that collecting the DNA without a warrant would be a
violation of the Fourth Amendment's protection against unreasonable seizures. Sometimes referred to as "surreptitious
sampling," this practice reportedly is growing in popularity in law enforcement agencies throughout the country.
Police Tactics Grow More Invasive
While the police conduct their investigation, the state forensic laboratory uses new technology to analyze the crime
scene DNA. The analysis suggests that the DNA is from a 20-to-30-year-old male of primarily African ancestry who
has asthma and a genetic predisposition to hypertension. Four of Anthony's cousins partially fit this description. The
police go to local hospitals, clinics, and pharmacies to obtain his cousins' medical records to see if one of the men has
been treated for asthma or high blood pressure. The federal privacy rule under the Health Insurance Portability and
Accountability Act of 1996 (HIPAA) permits hospitals, clinics, pharmacies, and other entities covered by the rule to
disclose to law enforcement officials the medical, injury, and treatment information of a criminal suspect.
Based on information obtained from medical and pharmacy records, the police get warrants to arrest two of Anthony's
cousins. Because Louisiana has an arrestee DNA law, the cousins are required to give the police a DNA sample. The
oldest cousin's DNA matches the DNA sample from the crime scene. After the cousin is charged with murder, he
demands an independent analysis of his DNA to see if it can refute the state's claim. When the new DNA analysis
confirms the state's finding, the cousin demands new genetic tests that might show whether he has a genetic
predisposition to violence. No reliable data are available about how many criminal defendants have tried to "argue
genetics" against charges of criminal offenses or to mitigate punishment, although reports in the media and the legal
literature suggest the number is low. These reports also indicate that judges have refused to let defendants use genetic
information at trial, although at least one defendant was permitted to introduce it at sentencing.
Moreover, arguing genetics has implications beyond its use at trial and sentencing. Several commentators have
suggested that genetic-based crime control strategies might include mandatory genetic screening to identify individuals
predisposed to certain behaviors or deemed genetically predisposed to criminal offending. They might also include
mandatory preventive treatment such as gene therapy or preventive detention policies. These and other potential crime
control strategies raise questions about the ethical, legal, and social implications of new applications of genetic
screening, about the loss of privacy and liberty for individuals identified as "genetically predisposed" to criminal
offending, and about the potential for policies and practices that stigmatize and discriminate against such individuals.
Meanwhile, Anthony and his cousin who was not charged want the state to destroy their DNA samples and to remove
their profiles from its DNA database. State laws about retaining DNA samples and DNA profiles vary, and there is no
national standard or guideline on the matter. Some commentators raise concerns that stored DNA samples and profiles
collected for law enforcement purposes will be used for genetic research to examine whether there are genetic
predictors of aggression, pedophilia, mental illness, and drug and alcohol addiction. Others contend that there are valid
reasons for state and federal authorities to retain DNA samples and profiles, and that adequate safeguards are in place
to limit access to them and disclosure of the information they contain. To date, the law on the matter remains
inconclusive.
Source Citation:
Maschke, Karen J. "Law Enforcement DNA Databanks Can Threaten Medical Privacy." Birth to Death and Bench to Clinic: The
Hastings Center Bioethics Briefing Book for Journalists, Policymakers, and Campaigns. Garrison, NY: Hastings Center, 2008. Rpt.
in Privacy. Ed. Roman Espejo. Detroit: Greenhaven Press, 2010. Opposing Viewpoints. Gale Opposing Viewpoints In Context. Web. 10
Apr. 2012.
Drinking (Alcoholic Beverages)
In societies throughout the world, people drink alcoholic beverages to relax or to celebrate special occasions. Many
regard moderate drinking to be a normal, pleasurable part of life, and it has been associated with several health benefits.
But there are also health and societal risks associated with alcohol, including alcoholism, drunk driving, and the
increased incidence of many diseases.
Some societies, including Islamic countries, ban the use of alcohol, citing religious laws against it. In the United States,
state laws regulate drinking. Federal legislation passed in 1984, however, required all states to set the legal age for the purchase and public consumption of alcohol at 21 years. All 50 states have been in compliance with this law since 1988, though a few states have not enacted separate laws banning the sale of alcohol to minors (in these cases, it is illegal for minors to be in public possession of alcohol or to use false identification to buy it, in effect making purchase illegal).
States also enact specific laws defining and punishing offenses such as drunk driving and regulating the types of
establishments where alcoholic drinks can be served or sold.
Types of Drinks
Different kinds of alcoholic beverages are made via different processes and contain varying amounts of alcohol. The
oldest and most common alcoholic drink throughout the world is beer, which is made by brewing and fermenting grains
such as barley. The alcohol content in beer usually varies from around 4 percent to 6 percent. Stronger varieties of beer,
such as ale, porter, and stout, have higher amounts of alcohol, generally ranging from 7 percent to about 10 percent.
The production of wine, which is made from fermented grape juice, dates as far back as 6000 BCE. Wine is an
important component of many cuisines, including those in the Mediterranean and other parts of Europe. Table wines
generally contain about 10 to 14 percent alcohol, whereas fortified wines, such as sherry and dessert wines, typically
contain about 14 to 20 percent alcohol.
Distilled beverages, often called spirits, are made by distilling fermented grain, fruit, or vegetables. Brandy, produced
by distilling wine, usually contains between 36 and 60 percent alcohol by volume. Vodka, often made from potatoes or
rye grain, generally contains 35 to 50 percent alcohol by volume. Gin, distilled from grain alcohol, generally contains
about 40 percent alcohol, while tequila, produced from the agave plant, contains between 35 and 55 percent alcohol by
volume. Rum, made from sugarcane byproducts such as molasses, usually contains about 35 percent alcohol by
volume. Whiskey, distilled from fermented grain mash, generally contains 40 to 50 percent alcohol by volume.
Some people mistakenly believe that, because beer and wine have lower percentages of alcohol per volume than
stronger drinks do, people who drink only beer or wine cannot become alcoholics or problem drinkers. This is not true.
Heavy drinking of any type of alcoholic beverage can contribute to these conditions.
Measuring Intake
Health specialists emphasize that the benefits of alcohol are associated with light or moderate drinking only. This is
usually defined as no more than two standard drinks per day for men and one drink per day for women. Amounts differ
by gender because men and women metabolize alcohol differently. Intake that exceeds these recommended amounts is
defined as heavy drinking and is associated with increased health risks.
Because different types of alcoholic drinks contain different amounts of alcohol, some confusion exists as to how to
measure a standard drink. It can be useful to employ a general rule of thumb: typical servings of beer (12 ounces), wine
(about 5 ounces), or spirits (about 1.5 ounces) contain roughly the same amount of alcohol (about 0.6 fluid ounces or
1.2 tablespoons). So a standard drink can be defined as one typical serving of beer, wine, or spirits. However, it is
important to note that precise amounts of alcohol may differ according to beverage type and brand. In addition, typical
serving size can differ. Many restaurants, for example, serve glasses of wine that are significantly larger than five
ounces. When people are measuring their intake of alcohol, they should take these variables into consideration.
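The rough equivalence described above can be checked with simple arithmetic: multiply the serving size by the share of the drink that is alcohol. The Python sketch below uses the typical serving sizes named in this article and illustrative alcohol percentages drawn from the ranges given earlier; actual beverages vary.

# Pure alcohol per serving = serving size x alcohol fraction.
# Percentages are illustrative values within the typical ranges given above.

servings = {
    "beer":    (12.0, 0.05),   # 12 oz at about 5% alcohol
    "wine":    (5.0,  0.12),   # 5 oz at about 12% alcohol
    "spirits": (1.5,  0.40),   # 1.5 oz at about 40% alcohol
}

for drink, (ounces, fraction) in servings.items():
    print(f"{drink}: about {ounces * fraction:.2f} fluid ounces of pure alcohol")

# Each serving works out to roughly 0.6 fluid ounces of alcohol, which is why
# one typical serving of beer, wine, or spirits counts as one standard drink.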
Blood Alcohol Content
The concentration of alcohol in a person's bloodstream is used to determine the degree of intoxication. In general, with
a blood alcohol content (BAC) of 0.01–0.02 percent, a person shows few if any signs of intoxication. Impaired
alertness and concentration, however, are evident at BACs of only 0.03 percent. Serious intoxication occurs between
0.11 and 0.20 percent, and potentially deadly intoxication occurs at levels of 0.30 percent or higher.
It is difficult to determine how many drinks it takes to increase BAC to these levels. People metabolize alcohol
differently, and gender, basic health status, and weight are all variables. In general, the body is capable of processing
the alcohol in one standard drink in one hour. A person weighing between 110 pounds and 129 pounds could reach a
BAC of 0.08 percent after consuming as few as two drinks within one hour. For someone weighing between 130 and
189 pounds, a 0.08 percent BAC could be reached after consuming three or more drinks within one hour. Persons over
190 pounds could reach that limit by consuming four drinks in one hour.
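The figures above can be approximated with the widely used Widmark-style formula, which does not appear in this article: estimated BAC in percent is roughly the grams of alcohol consumed, divided by body weight in grams times a distribution ratio, multiplied by 100, minus about 0.015 percentage points for each hour of metabolism. The Python sketch below applies that conventional approximation; the per-drink gram figure, the distribution ratios, and the elimination rate are standard textbook estimates rather than values from this article, and individual results vary widely.

# Rough Widmark-style BAC estimate (a common approximation, not from this
# article). Constants are conventional estimates; individuals vary widely.

def estimate_bac(standard_drinks, weight_lb, hours, male=True):
    grams_alcohol = standard_drinks * 14.0           # about 14 g per standard drink
    body_grams = weight_lb * 453.592                 # pounds to grams
    r = 0.68 if male else 0.55                       # typical distribution ratios
    bac_percent = grams_alcohol / (body_grams * r) * 100
    return max(0.0, bac_percent - 0.015 * hours)     # about 0.015%/hour eliminated

# A 120-pound person after two drinks in one hour lands near the 0.08 percent
# figure mentioned above.
print(round(estimate_bac(2, 120, 1, male=False), 3))  # about 0.079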
BAC provides legal definitions of intoxication, which vary according to country. In all fifty states in the United States,
it is a crime to drive with a BAC of 0.08 percent or above. Many states set lower BAC limits for teenage drivers or
operators with commercial licenses.
Health Benefits
Moderate drinking has been associated with several health benefits, particularly for the heart. Numerous studies show
that moderate drinkers have significantly lower rates of heart attack, blood clot-caused stroke, peripheral vascular
disease, sudden cardiac death, and death from all cardiovascular causes. Studies have shown a 25 percent to 40 percent
reduction in these risks among moderate drinkers. Moderate amounts of alcohol raise levels of "good" cholesterol
(HDL), which appears to offer protection against heart disease. Moderate amounts of alcohol are also associated with
improved sensitivity to insulin and with improved blood clotting mechanisms. Studies have also shown that moderate
drinkers have lower rates of gallstones and of type 2 diabetes.
Red wine in particular has been touted as a beneficial beverage for the heart. Red wine contains antioxidants known as
polyphenols, which help protect the lining of blood vessels in the heart. It also contains resveratrol, a chemical that
reduces "bad" cholesterol (LDL) and protects against blood clots. While many studies have shown that moderate
consumption of red wine lowers the risk of heart disease, no clear evidence proves that it offers superior cardiovascular
benefits compared to other kinds of alcoholic drinks.
Risks
Though moderate drinking can cut the risks of many diseases, heavy drinking damages health. It can cause
pancreatic and liver disease, including alcoholic hepatitis and cirrhosis. In addition, drinking excessively can raise
blood pressure and damage heart muscles. Heavy drinking is linked to higher rates of many cancers, including cancer
of the mouth, pharynx, larynx, and esophagus. High alcohol consumption is also associated with increased rates of
breast cancer, and of colorectal cancer in men. Some evidence suggests that heavy drinking increases women's risk of
liver cancer and colorectal cancer. And alcohol is an addictive substance, increasing the risk of addiction and abuse.
Even moderate drinking can affect health in negative ways. It can interfere with sleep and interact dangerously with
prescription medications. Moderate drinking is also linked to increased rates of breast cancer. A large study reported by
the Harvard School of Public Health found that consumption of two or more drinks per day increased women's risk of
breast cancer by up to 41 percent. This risk may be lessened, however, by taking extra folate, a B-vitamin, because
alcohol blocks its absorption in the body.
It is especially dangerous for a woman to drink while she is pregnant. Alcohol affects the unborn baby and can lead to
lifelong problems for the child. It is associated with lower birth weights, growth problems, and behavioral problems.
One of the most serious consequences of drinking during pregnancy is fetal alcohol syndrome (FAS), a group of
incurable and lasting problems for the child that can include mental retardation, vision and hearing problems, and
physical birth defects. Health providers emphasize that women who are pregnant or might get pregnant should not
drink alcohol at all.
Drinking, even in moderate amounts, can impair a person's motor skills, mental concentration, and judgment. Heavy
drinking causes intoxication, or drunkenness, the symptoms of which include slurred speech, impaired balance, and
erratic behavior. People who are intoxicated are vulnerable to risky behaviors, such as unprotected sex. Extreme
intoxication can cause coma and even death.
Health practitioners point out that, though moderate drinking can offer health benefits, similar results can be obtained
in other ways—for example, with better diet and exercise. Citing the risks associated with alcohol, many practitioners
advise their patients not to begin drinking to improve their health. If patients already drink, they are advised to keep
their intake moderate.
Alcohol Abuse
Though many people drink responsibly, others abuse alcohol. Alcohol abuse can take many forms. Binge drinking, in
which a person consumes five or more drinks (for men) or four or more drinks (for women) in a period of about two
hours, is particularly dangerous. It quickly increases the level of alcohol in a person's blood, making him or her very
drunk. Binge drinking can cause alcohol poisoning and result in accidental death. It also increases the risk of accidental
injuries, such as falls, drowning, burns, or car crashes, and of unprotected sex, sexually transmitted diseases, and
unplanned pregnancy. According to the Centers for Disease Control and Prevention, binge drinkers are up to fourteen
times more likely to drive while drunk than are non-binge drinkers. Binge drinking is also associated with health
problems, including neurological disease, high blood pressure, and other cardiovascular conditions. Even one episode
of binge drinking can cause liver damage. Binge drinking is especially prevalent among young people. Some 51
percent of drinkers in the United States who binge are between eighteen and twenty years old.
Alcoholism is another pervasive form of alcohol abuse. The National Institutes of Health define alcoholism as the
continued consumption of alcoholic beverages "at a level that interferes with physical health, mental health, and social,
family, or job responsibilities." Alcoholism is considered a type of drug addiction that has both physical and emotional
components. It is not known why some drinkers become alcoholics and others do not. Genetic factors may play a role;
a person with an alcoholic parent is more likely to become an alcoholic than someone without a family history of
alcoholism.
In addition to damaging their physical health, alcoholics harm their families in many ways. They may be unable to get
or keep jobs, and their drinking may damage their personal relationships. For example, some alcoholics become
verbally or physically abusive to their spouses when drunk. Alcoholic parents frequently create a stressful home
environment in which the emotional needs of their children are not met. According to federal statistics reported by the
Harvard School of Public Health, some 18.2 million people in the United States abuse alcohol or are alcoholics.
Alcohol abuse contributes to many social problems. Among these is drunk driving, which kills more than 16,000
people in the United States each year. Alcohol is also a major factor in violent crime, including homicides and assaults.
According to the U.S. Department of Justice, some 36.3 percent of criminal offenders were under the influence of
alcohol when they engaged in their crimes. Among convicted murderers, more than 40 percent were under the
influence of alcohol when the crime was committed. Federal data estimate the economic costs of alcohol abuse at about
$185 billion per year.
Source Citation:
"Drinking (Alcoholic Beverages)." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale
Opposing Viewpoints In Context. Web. 10 Apr. 2012.
The Minimum Legal Drinking Age Should Be Lowered
"American society has determined that upon turning 18 teenagers become adults."
The authors at Choose Responsibility, a nonprofit organization founded in 2007, discuss in the following viewpoint several issues
concerning the debate over lowering the legal drinking age from twenty-one to eighteen. For example, the authors point out
that eighteen-year-olds in America are legally allowed to buy cigarettes, purchase property, vote, and serve on a jury—yet are
not legally allowed to purchase or drink alcoholic beverages. The organization also maintains that education programs about
using alcohol safely are effective at reducing high-risk drinking.
As you read, consider the following questions:
1. According to Choose Responsibility, at what age do Americans become legally responsible for their actions?
2. Alcohol education programs can generally be grouped into what two categories, according to the organization?
3. Does H.S. Swartzwelder believe that the brain of an adolescent has finished developing by the age of eighteen?
Debating the Issues ...
- If a person can go to war, shouldn't he or she be able to have a beer?
- Many youth under age 21 still drink, despite the current legal drinking age. Doesn't that prove the policy is ineffective?
- Youth in other countries are exposed to alcohol at earlier ages and engage in less alcohol abuse and have healthier attitudes toward alcohol. Don't those countries have fewer alcohol-related problems than we do?
- I've read that if we educate teens about using alcohol safely starting at age 18, that will encourage responsible drinking. Is that true?
- I've read that the adolescent brain continues to develop through the early 20s. What are the long-term effects of alcohol use on a developing brain?
- There seems to be support for lowering the drinking age—is this true?
- So what strategies are effective for reducing high-risk alcohol use?...
Old Enough for War, Old Enough for Alcohol
For better or worse, American society has determined that upon turning 18 teenagers become adults. This means they
can enlist [in the military], serve, fight and potentially die for their country. And while the "fight for your country"
argument is a powerful one, it only begins to capture the essence of adulthood. Most importantly, at age 18 you become
legally responsible for your actions. You can buy and smoke cigarettes even though you know that, in time, they will
probably give you lung cancer. You may even purchase property, strike binding legal contracts, take out a loan, vote,
hold office, serve on a jury, or adopt a child. But strangely at 18, one cannot buy a beer. While that may be an injustice
to those choosing to serve their country, the more serious consequence is the postponement of legal culpability. In most
other countries, the age of majority coincides with the legal drinking or purchasing age.
Critics are quick to point out that 18 is not the age of majority, but one step among many that together mark the
gradual path to adulthood. This argument notes that young adults cannot drink until 21, rent cars until 25, run for the
U.S. Senate until they are 30, and run for President until 35. This is, the critics suggest, evidence of a graduated legal
adulthood. But this argument falls flat. First, rental car companies are not legally prevented from renting cars to those
under 25; this is a decision made by insurance companies. In fact, some rental companies do rent to those under 25, and
the associated higher rates compensate for that potential liability. Second, age requirements for these high public
offices are more appropriately seen as exceptions to full adulthood, rather than benchmarks of adulthood. Finally, and
most importantly, the Constitution speaks to the legal age of majority only once and that is in the 26th Amendment to
the Constitution where, "The right of citizens of the US, who are 18 years of age or older, to vote shall not be denied or
abridged ... on account of age."...
Minors Drink Alcohol Despite Policies
Many young people under the age of 21 consume alcohol and continue to do so despite nearly 25 years' worth of
prohibition of that behavior. The trend over the past decade has had a polarizing effect of sorts—fewer 12-20-year-olds
are drinking, but those who choose to drink are drinking more. Between 1993 and 2001, the rate of 12-20-year-olds
who reported consuming alcohol in the past 30 days decreased from 33.4% to 29.3%, while rates of binge drinking
increased among that age group over those same years, from 15.2% to 18.9%. Data specific to college and university
students also indicate this polarization of drinking behaviors over time. A decade's worth of research in the College
Alcohol Study found both the proportion of students abstaining and the proportion of students engaging in frequent
binge drinking had increased. Furthermore, as compared to 1993, more 18-24-year-old students who chose to drink in
2001 were drinking excessively—defined by frequency of drinking occasions, frequency of drunkenness, and drinking
to get drunk.
There is evidence that the decline in alcohol consumption by those under the age of 21 seen throughout the 1980s and
1990s was not the result of the 21-year-old drinking age, but of a larger societal trend. "Nationwide per capita
consumption peaked around 1980 and dropped steeply during the 1980s. Drinking by youths followed this same
pattern. The predominant reason was not changes in state MLDA [minimum legal drinking age] but rather a close link
between youthful and adult alcohol consumption.... Increasing the MLDA did make some difference but not as much as
might be guessed from a simple 'before and after' comparison." Even the 1993 source so frequently cited in support of
the 21-year-old drinking age acknowledges that "... [survey-based research from the 1970s] has shown that increased
minimum age both does and does not covary with decreased youth drinking." This evidence suggests that the 21-year-old drinking age is not an unqualified success, but rather a well-intentioned social policy whose 25-year history has led
to several unintended consequences, including but not limited to an increase in the prevalence of abusive drinking
amongst young people....
Europeans Drink Alcohol at Younger Ages
Any generalizations of the behavior of "European" youth should be scrutinized. The drinking cultures of northern and
southern European nations vary markedly; history and an extensive body of cross cultural research would suggest that
cultural attitudes towards alcohol use play a far more influential role than minimum age legislation. Recent research
published by the World Health Organization found that while 15- and 16-year-old teens in many European states,
where the drinking age is 18 or younger (and often unenforced), have more drinking occasions per month, they have
fewer dangerous, intoxication occasions than their American counterparts. For example, in southern European nations
the ratio of intoxication occasions to all drinking occasions was quite low—roughly one in ten—while in the United
States, almost half of all drinking occasions involving 15- and 16-year-olds resulted in intoxication.
Though its legal drinking age is highest among all the countries surveyed, the United States has a higher rate of
dangerous intoxication occasions than many countries that not only have drinking ages that are lower or nonexistent,
but also have much higher levels of per capita consumption.
Research also notes that the 15- and 16-year-olds who are most at risk for alcohol problems (defined as those who
consume alcohol 10 times or more in 30 days and drink to intoxication three times or more in 30 days) are not those
who live in countries where overall per capita consumption is highest, but rather from the countries where it is lower.
For example, though France and Portugal have the highest per capita consumption in Europe, 15- and 16-year-olds in
both countries show very moderate consumption. By contrast, Denmark, Ireland, and the United Kingdom, where per
capita consumption is comparatively low, have the highest number of at-risk 15- and 16-year-olds. Per capita
consumption and the degree of risk for serious alcohol problems, therefore, are inversely proportional....
Alcohol Education Courses Teach Responsible Drinking
The effectiveness of alcohol education continues to be widely debated. Various approaches to alcohol education have been developed and can generally be grouped into those that support abstinence and those that view abstinence as unrealistic and therefore work to equip individuals with decision-making skills for safe alcohol use. There is both formal education, through schools and institutions, and informal education through family and peers. While
alcohol education programs that advocate abstinence have been proven ineffective, interactive education programs have
had greater success in their ability not only to educate drinkers, but also to alter their drinking habits.
Australia has successfully implemented alcohol education programs that focus on reducing risk and promoting
responsible drinking. Rethinking Drinking, and its counterpart aimed at a younger crowd, School Health and Alcohol
Harm Reduction Project, include role playing and interactive teaching and build skills so students may safely handle
risky situations involving alcohol. These programs have shown some effectiveness in influencing young adults'
drinking behaviors.
Recently in the United States, Outside the Classroom has produced AlcoholEDU, an interactive online prevention
program used by 450 colleges and universities throughout the country. AlcoholEDU increases practical knowledge,
motivates students to change their behavior, and decreases students' risk of negative personal and academic
consequences as a result of alcohol use. In 2004, students who completed AlcoholEDU were 20% less likely to be
heavy-episodic drinkers and 30% less likely to be problematic drinkers, numbers that prove that alcohol education can
be a useful tool in altering students' drinking habits.
Upon finding a lack of thorough research regarding the effects of alcohol education, Andrew F. Wall, Ph.D., of the
University of Illinois at Urbana-Champaign began his study of the effectiveness of AlcoholEDU. He describes his
research as aiming to "determine whether an online prevention program would change behavior and consequences."
His research provides evidence for the first time that "... an interactive educational experience can substantially reduce
the negative consequences of high-risk drinking."...
The Manner in Which Alcohol Affects a Teen's Brain
[Author's Note:] We asked Dr. H.S. Swartzwelder, a frequently cited expert on adolescent brain development and
substance abuse, MADD [Mothers Against Drunk Driving] consultant, and Choose Responsibility board member to
respond to this question.
"It is true that the brain continues to develop into a person's 20s, particularly the frontal lobes which are critical for
many of the higher cognitive functions that are so important for success in the adult world—such as problem solving,
mental flexibility, and planning.
"It is also clear that alcohol affects the adolescent brain differently than the adult brain, but the story is not simple and
the data should be interpreted cautiously as this complex science continues to evolve. Although alcohol affects some
brain functions more powerfully during adolescence, it affects other functions less powerfully during the same period.
For example, studies in animals clearly indicate that a single dose of alcohol can impair learning (and learning-related
brain activity) more powerfully in adolescent animals than in adults. But on the other hand a somewhat higher dose will
produce far greater sedation (and sedation-related brain activity) in adult animals than in adolescents. So, in terms of
single doses of alcohol, the adolescent brain is not uniformly more or less sensitive to alcohol—it depends on the brain
function that is being measured. Importantly, there has been little direct study of the effects of acute doses of alcohol on
adolescent humans, compared to adults. One study found that a single dose of alcohol resulting in blood alcohol levels
near 80 mg/dl (the legal limit) impaired learning more powerfully among people in their early 20s than it did in people
in their late 20s, but it will take more research to answer this question with authority in human subjects.
"Since the effects of single doses of alcohol can have markedly different effects on adolescents than on adults, it makes
sense to ask whether this means that the adolescent brain is more or less sensitive to the effects of repeated doses of
alcohol over time. In my view, the jury remains out on this question, but there are some studies in animals which
suggest that the adolescent brain may be more vulnerable to long-term damage by alcohol than the adult brain.
Similarly, there are some studies of humans who consumed large quantities of alcohol over extended periods of time
during adolescence, and have relatively small hippocampi (a brain region critical for certain types of learning). All of
these studies need to be fleshed out before the issue is settled, but, if nothing else, they give teens a very good reason to
think carefully about drinking to excess ... and this is probably the pivotal issue—how much is too much?
"Most studies of the effects of chronic alcohol exposure in adolescence, compared to adulthood, have focused on
relatively high doses. Studies of lower doses, and less severe chronic dosing regimens, will be needed to determine
whether the adolescent brain is more sensitive to the long-term effects of mild to moderate drinking. There are plenty of
studies indicating that early, unsupervised drinking can lead to trouble for teens—both immediately and down the road.
But this does not mean that an 18-year-old who has a beer or two every couple of weeks is doing irreparable damage to
her brain. It is the 18-year-old (or 30-year-old, for that matter!) who downs five or six drinks in a row on his way to a
dance that worries me." ...
Support Exists for Lowering the Drinking Age
There is support for lowering the drinking age, though polling data suggest this remains a minority view. Since the
Supreme Court decision in South Dakota v. Dole in 1987 (in which South Dakota, joined by Colorado, Hawaii, Kansas,
Louisiana, Montana, New Mexico, Ohio, South Carolina, Tennessee, Vermont, and Wyoming, had challenged the
constitutionality of the 1984 legislation), however, there has been virtually no public discussion or debate over the
21-year-old drinking age. Twenty years have passed, during which time data have been gathered and the practical
effects of the law have been experienced. National media interest in the issue (Chronicle of Higher Education, US News
and World Report, Newsweek, Fox News), which may or may not reflect a change in public opinion, surfaced repeatedly
during the first half of 2007. This would suggest a desire to reopen debate....
Certain Approaches Are Effective in Reducing High-Risk Alcohol Use
Strategies based on harm reduction and environmental management have been successful in reducing underage alcohol
abuse. While research has shown that abstinence-based education programs alone have little to no effect on preventing
use or abuse of alcohol among underage drinkers, harm reduction strategies that address the complex psychological
expectancies that lead to excessive drinking among young people are effective in reducing rates and incidents of
alcohol abuse. Environmental strategies such as alcohol advertising bans, keg registration, responsible server training,
social norms marketing, and community interventions are viable options for managing high-risk drinking, especially on
college campuses. Furthermore, evidence suggests that a policy based on strengthening enforcement may be of
limited success; for every 1,000 incidents of underage alcohol consumption, only two result in arrest or citation.
Advocates of enforcement should be required to demonstrate the level of incremental expense they would recommend
in order to achieve a significantly better result. Under the 21-year-old drinking age, fewer underage individuals are
drinking, but those who do choose to drink are drinking more, are drinking in ways that are harmful to their health, and
[are] engaging in behaviors that have a negative impact on the community.
Source Citation:
Choose Responsibility. "The Minimum Legal Drinking Age Should Be Lowered." Teens at Risk. Ed. Auriana Ojeda. San Diego:
Greenhaven Press, 2004. Opposing Viewpoints. Rpt. from "Debating the Issues." chooseresponsibility.org. 2007. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
The Minimum Legal Drinking Age Should Not Be Lowered
"Increasing the age at which people can legally purchase and drink alcohol has been the most successful intervention to date in
reducing drinking and alcohol-related crashes among people under age 21."
In the following viewpoint, the U.S. Department of Health and Human Services (HHS) breaks down the risks of alcohol
consumption by young people under the age of 21. The viewpoint contends that the most successful way to prevent underage
drinking is to keep the legal drinking age at 21; HHS estimates that this minimum saves about 700 to 1,000 lives annually. HHS is the
United States government's principal agency for protecting the health of all Americans and providing essential human services,
especially for those who are least able to help themselves.
As you read, consider the following questions:
1. What was the average age of first use by adolescents in 2003, as stated in the viewpoint?
2. According to the authors, what are some of the health risks of adolescent drinking?
3. What did the New Zealand study show about how much the legal drinking age relates to drinking-related crashes?
Alcohol is the drug of choice among youth. Many young people are experiencing the consequences of drinking too
much, at too early an age. As a result, underage drinking is a leading public health problem in this country.
Each year, approximately 5,000 young people under the age of 21 die as a result of underage drinking; this includes
about 1,900 deaths from motor vehicle crashes, 1,600 as a result of homicides, 300 from suicide, as well as hundreds
from other injuries such as falls, burns, and drownings.
Yet drinking continues to be widespread among adolescents, as shown by nationwide surveys as well as studies in
smaller populations. According to data from the 2005 Monitoring the Future (MTF) study, an annual survey of U.S.
youth, three-fourths of 12th graders, more than two-thirds of 10th graders, and about two in every five 8th graders have
consumed alcohol. And when youth drink they tend to drink intensively, often consuming four to five drinks at one
time. MTF data show that 11 percent of 8th graders, 22 percent of 10th graders, and 29 percent of 12th graders had
engaged in heavy episodic (or "binge") drinking within the past two weeks.
Research also shows that many adolescents start to drink at very young ages. In 2003, the average age of first use of
alcohol was about 14, compared to about 17 1/2 in 1965. People who reported starting to drink before the age of 15
were four times more likely to also report meeting the criteria for alcohol dependence at some point in their lives. In
fact, new research shows that the serious drinking problems (including what is called alcoholism) typically associated
with middle age actually begin to appear much earlier, during young adulthood and even adolescence.
Other research shows that the younger children and adolescents are when they start to drink, the more likely they will
be to engage in behaviors that harm themselves and others. For example, frequent binge drinkers (nearly 1 million high
school students nationwide) are more likely to engage in risky behaviors, including using other drugs such as marijuana
and cocaine, having sex with six or more partners, and earning grades that are mostly Ds and Fs in school.
Why Some Adolescents Drink
As children move from adolescence to young adulthood, they encounter dramatic physical, emotional, and lifestyle
changes. Developmental transitions, such as puberty and increasing independence, have been associated with alcohol
use. So in a sense, just being an adolescent may be a key risk factor not only for starting to drink but also for drinking
dangerously.
Research shows the brain keeps developing well into the twenties, during which time it continues to establish important
communication connections and further refines its function. Scientists believe that this lengthy developmental period
may help explain some of the behavior that is characteristic of adolescence—such as adolescents' propensity to seek out new
and potentially dangerous situations. For some teens, thrill-seeking might include experimenting with alcohol.
Developmental changes also offer a possible physiological explanation for why teens act so impulsively, often not
recognizing that their actions—such as drinking—have consequences.
How people view alcohol and its effects also influences their drinking behavior, including whether they begin to drink
and how much. An adolescent who expects drinking to be a pleasurable experience is more likely to drink than one
who does not. An important area of alcohol research is focusing on how expectancy influences drinking patterns from
childhood through adolescence and into young adulthood. Beliefs about alcohol are established very early in life, even
before the child begins elementary school. Before age 9, children generally view alcohol negatively and see drinking as
bad, with adverse effects. By about age 13, however, their expectancies shift, becoming more positive. As would be
expected, adolescents who drink the most also place the greatest emphasis on the positive and arousing effects of
alcohol.
Differences between the adult brain and the brain of the maturing adolescent also may help to explain why many young
drinkers are able to consume much larger amounts of alcohol than adults before experiencing the negative
consequences of drinking, such as drowsiness, lack of coordination, and withdrawal/hangover effects. This unusual
tolerance may help to explain the high rates of binge drinking among young adults. At the same time, adolescents
appear to be particularly sensitive to the positive effects of drinking, such as feeling more at ease in social situations,
and young people may drink more than adults because of these positive social experiences.
Pinpointing a genetic contribution will not tell the whole story, however, as drinking behavior reflects a complex
interplay between inherited and environmental factors, the implications of which are only beginning to be explored in
adolescents. And what influences drinking at one age may not have the same impact at another. As Rose and colleagues
show, genetic factors appear to have more influence on adolescent drinking behavior in late adolescence than in mid-adolescence.
Environmental factors, such as the influence of parents and peers, also play a role in alcohol use. For example, parents
who drink more and who view drinking favorably may have children who drink more, and an adolescent girl with an
older or adult boyfriend is more likely to use alcohol and other drugs and to engage in delinquent behaviors.
Researchers are examining other environmental influences as well, such as the impact of the media. Today alcohol is
widely available and aggressively promoted through television, radio, billboards, and the Internet. Researchers are
studying how young people react to these advertisements. In a study of 3rd, 6th, and 9th graders, those who found
alcohol ads desirable were more likely to view drinking positively and to want to purchase products with alcohol logos.
Research is mixed, however, on whether these positive views of alcohol actually lead to underage drinking.
Health Risks
Whatever it is that leads adolescents to begin drinking, once they start they face a number of potential health risks.
Although the severe health problems associated with harmful alcohol use are not as common in adolescents as they are
in adults, studies show that young people who drink heavily may put themselves at risk for a range of potential health
problems.
Scientists currently are examining just how alcohol affects the developing brain, but it's a difficult task. Subtle changes
in the brain may be difficult to detect but still have a significant impact on long-term thinking and memory skills. Add
to this the fact that adolescent brains are still maturing, and the study of alcohol's effects becomes even more complex.
Research has shown that animals fed alcohol during this critical developmental stage continue to show long-lasting
impairment from alcohol as they age. It's simply not known how alcohol will affect the long-term memory and learning
skills of people who began drinking heavily as adolescents.
Elevated liver enzymes, indicating some degree of liver damage, have been found in some adolescents who drink
alcohol. Young drinkers who are overweight or obese showed elevated liver enzymes even with only moderate levels
of drinking.
In both males and females, puberty is a period associated with marked hormonal changes, including increases in the sex
hormones, estrogen and testosterone. These hormones, in turn, increase production of other hormones and growth
factors, which are vital for normal organ development. Drinking alcohol during this period of rapid growth and
development (i.e., prior to or during puberty) may upset the critical hormonal balance necessary for normal
development of organs, muscles, and bones. Studies in animals also show that consuming alcohol during puberty
adversely affects the maturation of the reproductive system.
Preventing Underage Drinking Within A Developmental Framework
Complex behaviors, such as the decision to begin drinking or to continue using alcohol, are the result of a dynamic
interplay between genes and environment. For example, biological and physiological changes that occur during
adolescence may promote risk-taking behavior, leading to early experimentation with alcohol. This behavior then
shapes the child's environment, as he or she chooses friends and situations that support further drinking. Continued
drinking may lead to physiological reactions, such as depression or anxiety disorders, triggering even greater alcohol
use or dependence. In this way, youthful patterns of alcohol use can mark the start of a developmental pathway that
may lead to abuse and dependence. Then again, not all young people who travel this pathway experience the same
outcomes.
Children mature at different rates. Developmental research takes this into account, recognizing that during adolescence
there are periods of rapid growth and reorganization, alternating with periods of slower growth and integration of body
systems. Periods of rapid transitions, when social or cultural factors most strongly influence the biology and behavior
of the adolescent, may be the best time to target delivery of interventions. Interventions that focus on these critical
development periods could alter the life course of the child, perhaps placing him or her on a path to avoid problems
with alcohol.
To date, researchers have been unable to identify a single track that predicts the course of alcohol use for all or even
most young people. Instead, findings provide strong evidence for wide developmental variation in drinking patterns
within this special population.
Intervention Approaches
Intervention approaches typically fall into two distinct categories: (1) environmental-level interventions, which seek to
reduce opportunities for underage drinking, increase penalties for violating minimum legal drinking age (MLDA) and
other alcohol use laws, and reduce community tolerance for alcohol use by youth; and (2) individual-level
interventions, which seek to change knowledge, expectancies, attitudes, intentions, motivation, and skills so that youth
are better able to resist the pro-drinking influences and opportunities that surround them.
Environmental approaches include:
Raising the Price of Alcohol—A substantial body of research has shown that higher prices or taxes on alcoholic
beverages are associated with lower levels of alcohol consumption and alcohol-related problems, especially in young
people.
Increasing the Minimum Legal Drinking Age—Today all States have set the minimum legal drinking age at 21.
Increasing the age at which people can legally purchase and drink alcohol has been the most successful intervention to
date in reducing drinking and alcohol-related crashes among people under age 21. The National Highway Traffic
Safety Administration (NHTSA) estimates that a legal drinking age of 21 saves 700 to 1,000 lives annually. Since
1976, these laws have prevented more than 21,000 traffic deaths. Just how much the legal drinking age relates to
drinking-related crashes is shown by a recent study in New Zealand. Six years ago that country lowered its minimum
legal drinking age to 18. Since then, alcohol-related crashes have risen 12 percent among 18- to 19-year-olds and 14
percent among 15- to 17-year-olds. Clearly a higher minimum drinking age can help to reduce crashes and save lives,
especially in very young drivers.
Enacting Zero-Tolerance Laws—All States have zero-tolerance laws that make it illegal for people under age 21 to
drive after any drinking. When the first eight States to adopt zero-tolerance laws were compared with nearby States
without such laws, the zero-tolerance States showed a 21-percent greater decline in the proportion of single-vehicle
night-time fatal crashes involving drivers under 21, the type of crash most likely to involve alcohol.
Stepping up Enforcement of Laws—Despite their demonstrated benefits, legal drinking age and zero-tolerance laws
generally have not been vigorously enforced. Alcohol purchase laws aimed at sellers and buyers also can be effective,
but resources must be made available for enforcing these laws.
Individual-focused interventions include:
School-Based Prevention Programs—The first school-based prevention programs were primarily informational and
often used scare tactics; it was assumed that if youth understood the dangers of alcohol use, they would choose not to
drink. These programs were ineffective. Today, better programs are available and often have a number of elements in
common: They follow social influence models and include setting norms, addressing social pressures to drink, and
teaching resistance skills. These programs also offer interactive and developmentally appropriate information, include
peer-led components, and provide teacher training.
Family-Based Prevention Programs—Parents' ability to influence whether their children drink is well documented and
is consistent across racial/ethnic groups. Setting clear rules against drinking, consistently enforcing those rules, and
monitoring the child's behavior all help to reduce the likelihood of underage drinking. The Iowa Strengthening Families
Program (ISFP), delivered when students were in grade 6, is a program that has shown long-lasting preventive effects
on alcohol use.
Intervention Programs
Environmental interventions are among the recommendations included in the recent National Research Council (NRC)
and Institute of Medicine (IOM) report on underage drinking. These interventions are intended to reduce commercial
and social availability of alcohol and/or reduce driving while intoxicated. They use a variety of strategies, including
server training and compliance checks in places that sell alcohol; deterring adults from purchasing alcohol for minors
or providing alcohol to minors; restricting drinking in public places and preventing underage drinking parties;
enforcing penalties for the use of false IDs, driving while intoxicated, and violating zero-tolerance laws; and raising
public awareness of policies and sanctions.
The following community trials show how environmental strategies can be useful in reducing underage drinking and
related problems.
The Massachusetts Saving Lives Program—This intervention was designed to reduce alcohol-impaired driving and
related traffic deaths. Strategies included the use of drunk driving checkpoints, speeding and drunk driving awareness
days, speed-watch telephone hotlines, high school peer-led education, and college prevention programs. The 5-year
program decreased fatal crashes, particularly alcohol-related fatal crashes involving drivers ages 15-25, and reduced the
proportion of 16- to 19-year-olds who reported driving after drinking, in comparison with the rest of Massachusetts. It
also made teens more aware of penalties for drunk driving and for speeding.
The Community Prevention Trial Program—This program was designed to reduce alcohol-involved injuries and death.
One component sought to reduce alcohol sales to minors by enforcing underage sales laws; training sales clerks,
owners, and managers to prevent sales of alcohol to minors; and using the media to raise community awareness of
underage drinking. Sales to apparent minors (people of legal drinking age who appear younger than age 21) were
significantly reduced in the intervention communities compared with control sites.
Communities Mobilizing for Change on Alcohol—This intervention, designed to reduce the accessibility of alcoholic
beverages to people under age 21, centered on policy changes among local institutions to make underage drinking less
acceptable within the community. Alcohol sales to minors were reduced: 18- to 20-year-olds were less likely to try to
purchase alcohol or provide it to younger teens, and the number of DUI arrests declined among 18- to 20-year-olds.
Multicomponent Comprehensive Interventions—Perhaps the strongest approach for preventing underage drinking
involves the coordinated effort of all the elements that influence a child's life—including family, schools, and
community. Ideally, intervention programs also should integrate treatment for youth who are alcohol dependent.
Project Northland is an example of a comprehensive program that has been extensively evaluated.
Project Northland was tested in 22 school districts in northeastern Minnesota. The intervention included (1) school
curricula, (2) peer leadership, (3) parental involvement programs, and (4) communitywide task force activities to
address larger community norms and alcohol availability. It targeted adolescents in grades 6 through 12.
Intervention and comparison communities differed significantly in "tendency to use alcohol," a composite measure
that combined items about intentions to use alcohol and actual use as well as in the likelihood of drinking "five or more
in a row." Underage drinking was less prevalent in the intervention communities during phase 1; higher during the
interim period (suggesting a "catch-up" effect while intervention activities were minimal); and again lower during
phase 2, when intervention activities resumed.
Project Northland has been designated a model program by the Substance Abuse and Mental Health Services
Administration (SAMHSA), and its materials have been adapted for a general audience. It now is being replicated in
ethnically diverse urban neighborhoods.
Stopping Problems Before They Develop
Today, alcohol is widely available and aggressively promoted throughout society. And alcohol use continues to be
regarded, by many people, as a normal part of growing up. Yet underage drinking is dangerous, not only for the
drinker but also for society, as evidenced by the number of alcohol-involved motor vehicle crashes, homicides, suicides,
and other injuries.
People who begin drinking early in life run the risk of developing serious alcohol problems, including alcoholism, later
in life. They also are at greater risk for a variety of adverse consequences, including risky sexual activity and poor
performance in school.
Identifying adolescents at greatest risk can help stop problems before they develop. And innovative, comprehensive
approaches to prevention, such as Project Northland, are showing success in reducing experimentation with alcohol as
well as the problems that accompany alcohol use by young people.
Source Citation:
U.S. Department of Health and Human Services. "The Minimum Legal Drinking Age Should Not Be Lowered." Teens at Risk. Ed.
Auriana Ojeda. San Diego: Greenhaven Press, 2004. Opposing Viewpoints. Rpt. from "Underage Drinking: Why Do Adolescents
Drink, What are the Risks, and How Can Underage Drinking Be Prevented?" 2006. Gale Opposing Viewpoints In Context. Web. 10
Apr. 2012.
Drug Legalization
Drug abuse is a major problem throughout the world. The sale and use of narcotics and other illicit drugs is linked to
addiction, prostitution, government corruption, and violent crime. In much of the world, including the United States,
efforts to stop illicit drug use have focused on stricter laws and enforcement. Yet there is growing concern that this
approach may be counterproductive. Legalizing drugs, say many analysts, is a better way to curb drug use and the
myriad problems associated with it.
There is widespread agreement that preventing drug abuse is an urgent matter. Drug abuse causes serious public health
problems. Users expose themselves to increased risks of contracting HIV infection, hepatitis-C, sexually transmitted
diseases, heart infection, kidney disease, seizures, skin abscesses, pneumonia, and death by accidental overdose. Many
addicts are without medical insurance and use hospital emergency rooms as their only source of medical care,
contributing to astronomical costs to Medicaid. According to data from the Drug Abuse Warning Network, more than
1,742,800 emergency room visits in 2006 were related to drug or alcohol abuse, costing as much as $4 billion. In
addition, drug abuse causes social harms. It erodes family relationships and is linked to poverty, work problems, and
many kinds of crime. To get money for their drugs, addicts often resort to prostitution, larceny, or violent crimes such
as assault or arson. Violence is also common among dealers who vie for control of profits. Rivalries within and among
cartels are thought to be the cause of skyrocketing violence in Mexico, particularly along its northern border. In 2008
Mexico reported approximately 6,000 drug-related murders; the number increased in 2009, reaching approximately
7,300 by November that year. In Ciudad Juárez alone, a city situated just across the border from El Paso, Texas, 2,100
murders occurred in 2009, most of which were thought to be drug-related.
Supporters of drug legalization offer both philosophical and pragmatic arguments for their position. Drug use, they say,
should be an individual's free choice. The government should have no right to forbid this behavior. Supporters also
argue that efforts to crack down on drug abuse have failed, and that decriminalization is a more useful tool in reducing
drug-related violence and, ultimately, the demand for drugs. Opponents, however, say that law enforcement campaigns
are working, and that legalization sends a dangerous message that encourages people to try drugs.
Individual Freedom
Many supporters of legalization believe drug use is a personal choice that individuals should be free to make without
government interference. Sending users and low-level dealers to jail, they argue, unfairly punishes people for what is
essentially a lifestyle choice. These advocates believe that criminal penalties for personal possession of small amounts
of drugs, and for selling small amounts, should be lifted. At the same time, however, many—perhaps most—agree that
criminal penalties should still exist for those involved in production and trafficking.
This view has gained support in some parts of the world, including regions plagued with drug trafficking. Mexico, from
which drug cartels supply most of the illicit drugs entering the United States, has decriminalized possession of
marijuana, cocaine, heroin, and methamphetamine. Selling drugs, however, remains a major felony. Argentina has also
taken steps to decriminalize drug use. In 2009 its Supreme Court struck down a law imposing a jail sentence for
marijuana possession, calling this penalty a violation of privacy. Adults, said the court, are "responsible for making
decisions freely about their desired lifestyle without state interference. Private conduct is allowed unless it constitutes a
real danger or causes damage to property or the rights of others." These steps are in line with recommendations of the
Latin American Commission on Drugs and Democracy, which argues that education and prevention campaigns, rather
than prison sentences, are the most effective way to reduce drug abuse.
Though the United States has not yet taken steps to legalize drug use on privacy grounds, considerable support exists
for such an approach. Former Seattle police chief Norm Stamper is an outspoken advocate for legalization, writing in
The Seattle Times that responsible drug use should be seen as a civil liberty and that drug abuse should be considered a
medical matter, not a criminal one. "In declaring a war on drugs," he stated, "we've declared war on our fellow
citizens."
Opponents of this view, however, say that there is no such thing as a victimless crime. They argue that drug users not
only hurt themselves, but hurt their families, communities, and the larger society. Making drug use legal, they say,
allows people to think that abusing drugs is a benign activity that has no harmful consequences. As Mexican police
officer Elisio Montes explained to a London Guardian writer, "I sometimes wish drugs would be made legal so that the
gringos can get high and we can live in peace. Then I say to myself: no—these drugs are addictive after one single hit.
They're terrifying—they destroy lives, they destroy our young people. If they are legal, they will buy more."
Mandatory Sentencing
Those who support drug legalization also argue that the war on drugs has flooded U.S. prisons with inmates who have
not committed violent crimes. The United States has the highest incarceration rate in the world, and its prisons are
notoriously overcrowded. This circumstance, in large part, resulted from mandatory minimum drug sentencing
legislation passed by Congress in 1986. Since these laws went into effect, more than 80 percent of the increase in the
federal prison population has been attributed to drug convictions. And though mandatory sentencing was intended to
target high-level dealers and drug lords, in fact, the vast majority of inmates serving time for drug charges are lower-level dealers and users. FBI statistics released in 2009 bear out this point: some 82.3 percent of all drug arrests the
previous year were for possession only, while 44.3 percent of all those drug arrests were for possession of marijuana.
Mandatory sentencing has resulted in a more than 400 percent increase in the number of women in prison, and has led
to higher sentences for African Americans than for whites. In an interview on National Public Radio, Criminal Justice
Policy Foundation President Eric Sterling said that mandatory sentencing has "overwhelmingly been targeted at people
of color and at low-level offenders."
Mandatory sentencing, say those who support drug legalization, exhausts police resources when these could be more
effectively used against high-level dealers and cartel leaders. Indeed, several states, facing budget cuts, have begun to
reevaluate mandatory minimum sentences for nonviolent drug offenses. And at the federal level, a bill introduced in
2009 by Senator Jim Webb (D-Virginia) would establish a national commission to review current criminal justice
policies and recommend reforms, which many believe should include more flexible sentencing laws. David Shirk,
director of San Diego's Trans-Border Institute, has said, "I think it is inevitable that possession of marijuana will be
legal in the U.S. within a decade."
Yet some argue that drug laws should not be weakened. The U.S. war on drugs, they say, has been effective in putting
dealers behind bars and in reducing the scale of drug abuse. Between 2001 and 2008, according to the U.S. Drug
Enforcement Administration, overall drug use by teenagers dropped by 25 percent. Teen marijuana use dropped by 25
percent, and methamphetamine use dropped by 50 percent. Though drug prohibitionists associated these declines with
strict law enforcement campaigns, anti-prohibitionists claim that the drop in drug use is the result of education and
awareness campaigns.
Black Market Dynamics
Advocates for legalization also argue that drug laws actually cause, rather than prevent, violent crime. Because it is
illegal to sell heroin, cocaine, and other drugs openly, the trade is forced underground, where it is controlled by
criminal cartels. These cartels, operating without any respect for international laws, engage in bribery, extortion,
murder, and other crimes to get their product to markets where it can generate exorbitant profits. The lure of such
profits leads to escalating crime as drug lords vie to control larger and larger shares of the market and as new
participants enter the business in hopes of making a quick and easy fortune. If drug use were made legal, say advocates,
this nefarious black market would be eliminated.
According to the organization Law Enforcement Against Prohibition (LEAP), which supports legalizing drugs, the U.S.
war on drugs has cost taxpayers more than one trillion dollars and resulted in the arrest of 37 million people for
nonviolent drug offenses. Yet "people continue dying in our streets while drug barons and terrorists continue to grow
richer than ever before." Legalizing drugs, says LEAP, presents the opportunity to regulate production and distribution
of these substances, resulting in a "far more effective and ethical" way to deal with drug abuse than laws that prohibit
possession and thus, inadvertently, encourage black market activity. If drugs were legalized, said LEAP project director
Kristin Daley in The Guardian article, they would be controlled like alcohol is. "Instead of criminals getting richer,
violence escalating and drug-related deaths on the rise, we would live under a system of established pricing, peaceful
purchase and a regulated labeling system." What is more, LEAP reported in 2009 that an analysis it commissioned from a
Harvard economist shows that legalizing and regulating drugs would boost the U.S. economy by some $77 billion each
year.
Mexico
Conditions in Mexico are often cited to support the view that stricter law enforcement actually exacerbates drug-related
violence. Since the 1990s Mexican cartels have been the major suppliers of illicit drugs to the United States. About 90
percent of the cocaine entering the United States goes through Mexico, and Mexican cartels also supply most of the
marijuana and methamphetamine, and much of the heroin, entering the country. In addition, cartels have established
close ties with gangs in the United States, who play a key role in distribution. The United States has pressured Mexico
to curb this flood of drugs and, in response, President Felipe Calderon has taken an extremely hard line. Between
January 2000 and September 2006, the Mexican government arrested more than 79,000 people for drug trafficking.
Though the vast majority of these were low-level dealers, the arrests included fifteen cartel leaders as well as seventy-four lieutenants, fifty-three financial officers, and 428 hitmen. Mexico has also extradited cartel leaders and traffickers
to the United States.
While Mexico believes that its hard-line approach is working, critics say that the government's crackdown has actually
caused violence to escalate because, as high-ranking drug lords are captured, their rivals scramble for control in the
ensuing power vacuum. According to a June 1, 2009 report in The New York Times, more than 10,750 people have
died in drug-related violence since Mexico began its campaign against drug cartels in late 2006. This disturbing pattern,
say critics, suggests that stricter law enforcement cannot solve the problem of drug trafficking. As cartel leaders are
jailed, others take their place; as hubs of drug trafficking are weakened, the business moves to regions where it is easier
to operate. Only by eliminating the demand for drugs, say these critics, can trafficking be eliminated. This view is
gaining support from government officials in the United States. In Texas, for instance, where drug-related violence
along the Mexican border has skyrocketed, El Paso Councilman Robert O'Rourke told CNN that decriminalization
offers "the least worst option to ending the cartel violence … Decriminalizing drugs would take away a lot of the
financial incentive for the cartels to kill."
Europe
Recent developments in Europe have lent support to the drug legalization argument. In 2001 Portugal changed its drug
laws, abolishing criminal penalties for personal possession. Instead, offenders were offered therapy. Within five years
the country's rate of illicit drug use among teens dropped, and the rate of new HIV infections caused by sharing dirty
needles fell by 17 percent. In addition, the number of deaths related to street drugs fell by more than half. These data,
say advocates, show that decriminalizing drugs does not result in higher rates of drug use. Nor, as opponents had
feared, does it encourage "drug tourism"—an environment that entices people to visit the country to use illicit drugs. In
fact, Portugal now has the lowest rates in the European Union of marijuana use among people over age fifteen. The
new drug policy in Portugal has also allowed law enforcement officers to focus on going after major drug dealers.
Though Portugal's case suggests that decriminalization can reduce drug use, not everyone is convinced that its policy
should be a model for the United States. Portugal is a small country with a relatively homogeneous culture. Its total
number of drug users, compared to the United States, is very small, making treatment programs relatively affordable
and easy to manage. What is more, there is considerable public support for the new policy. But the United States is
large and culturally diverse. Support for legalizing drugs is less consistent, and it would be more difficult to manage
treatment programs for users. These differences, argue some analysts, suggest that Portugal's approach may not be
workable in the United States.
Drug laws in Europe vary by country, but drug abuse is generally approached as an illness instead of a crime. For
example, though marijuana is a controlled substance in the Netherlands, use of marijuana is openly tolerated. Those
who use injection drugs are encouraged to seek medical treatment, instead of being given jail sentences. While such
permissiveness worries those who believe it encourages drug use, the proportion of Europeans who used illicit drugs,
according to a 2007 United Nations report, was only about half that of Americans. Furthermore, Europe's rate of fatal
drug overdoses is less than half that of the United States.
Source Citation:
"Drug Legalization." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Drug Policies Should Be Liberalized
For more than 30 years, American public policy has advanced an escalating "war on drugs" that seeks to eradicate
illegal drugs from our society. It is increasingly clear that this effort has failed. Our current drug policy has consumed
tens of billions of dollars and wrecked countless lives. The costs of this policy include the increasing breakdown of
families and neighborhoods, endangerment of children, widespread violation of civil liberties, escalating rates of
incarceration, political corruption, and the imposition of United States policy abroad. For United States taxpayers, the
price tag on the drug offensive has soared from $66 million in 1968 to almost $20 billion in 2000, an increase of over
30,000 percent. In practice the drug war disproportionately targets people of color and people who are poverty-stricken.
Coercive measures have not reduced drug use, but they have clogged our criminal justice system with non-violent
offenders. It is time to explore alternative approaches and to end this costly war.
The war on drugs has blurred the distinction between drug use and drug abuse. Drug use is erroneously perceived as
behavior that is out of control and harmful to others. Illegal drug use is thus portrayed as threatening to society. As a
result, drug policy has been closed to study, discussion, and consideration of alternatives by legislative bodies. Yet
many people who use both legal and illegal drugs live productive, functional lives and do no harm to society.
As Unitarian Universalists committed to a free and responsible search for truth, we must protest the misguided policies
that shape current practice. We cannot in good conscience remain quiet when it is becoming clear that we have been
misled for decades about illegal drugs. United States government drug policy-makers have misled the world about the
purported success of the war on drugs. They tell the public that success is dependent upon even more laws restricting
constitutional protections and the allocation of billions of dollars for drug law enforcement. They mislead the public
about the extent of corruption and environmental degradation in other countries that the American war on drugs has left
in its wake.
As Unitarian Universalists committed to the inherent worth and dignity of every person and to justice, equity, and
compassion in human relations, we call for thoughtful consideration and implementation of alternatives that regard the
reduction of harm as the appropriate standard by which to assess drug policies. We seek a compassionate reduction of
harm associated with drugs, both legal and illegal, with special attention to the harm unleashed by policies established
in the war on drugs.
As Unitarian Universalists committed to respecting the interdependent web of existence of which we are a part, we find
irresponsible and morally wrong the practices of scorching the earth and poisoning the soil and ground water in other
countries to stop the production of drugs that are illegal in the United States.
As a community of faith, Unitarian Universalists have both a moral imperative and a personal responsibility to ask the
difficult questions that so many within our society are unable, unwilling, or too afraid to ask. In asking these questions
and in weighing our findings, we are compelled to consider a different approach to national drug policy.
A Different Approach
To conceive and develop a more just and compassionate drug policy, it is necessary to transform how we view drugs
and particularly drug addiction. Drug use, drug abuse, and drug addiction are distinct from one another. Using a drug
does not necessarily mean abusing the drug, much less addiction to it. Drug abuse issues are essentially matters for
medical attention. We do not believe that drug use should be considered criminal behavior. Advocates for harsh drug
policies with severe penalties for drug use often cite violent crime as a direct result of drug use. Drugs alone do not
cause crime. Legal prohibition of drugs leads to inflated street value, which in turn incites violent turf wars among
distributors. The whole pattern is reminiscent of the proliferation of organized crime at the time of alcohol prohibition
in the early twentieth century. That policy also failed.
We believe that the vision of a drug-free America is unrealistic. Many programs for school children have misled
participants and the public by teaching that all illicit drugs are equally harmful in spite of current scientific research to
the contrary. "Just Say No" is not a viable policy. The consequences of the current drug war are cruel and
counterproductive. At issue here are the health and well-being of our families and our communities, our societal fabric
and our global community. Alternatives exist.
Alternative Goals
Based on this perspective, we believe appropriate and achievable goals for reformed national drug policies include:
• To prevent consumption of drugs, including alcohol and nicotine, that are harmful to health among children and adolescents;
• To reduce the likelihood that drug users will become drug abusers;
• To minimize the harmful effects of drug use, such as disease contracted from the use of contaminated needles and overdosing as a result of unwittingly using impure drugs;
• To increase the availability and affordability of quality drug treatment and eliminate the stigma associated with accessing it;
• To significantly reduce violent and predatory drug-related crime;
• To minimize the harmful consequences of current drug policy, such as racial profiling, property confiscation without conviction, and unnecessary incarceration; and
• To reduce the harm to our earth now caused by the practice of destroying crops intended for the production of drugs.
Alternative Policies
Instead of the current war on drugs, we offer the following policies for study, debate, and implementation:
• Shift budget priorities from spending for pursuing, prosecuting, and imprisoning drug-law offenders to spending for education, treatment, and research.
• Develop and implement age-appropriate drug education programs that are grounded in research and fact and that promote dialogue without fear of censure or reprisal.
• Undertake research to assess the effects of currently illegal drugs. Ensure that findings and conclusions are publicly accessible, serving as a basis for responsible decision-making by individuals and in arenas of public policy and practice.
• Research the sociological factors that contribute to the likelihood of drug use becoming habitual, addictive, and destructive, such as poverty, poor mental health, sexual or other physical abuse, and lack of education or medical treatment.
• Research and expand a range of management and on-demand treatment programs for drug abuse and addiction. Examples include nutritional counseling, job training, psychiatric evaluation and treatment, psychological counseling, parent training and assistance, support groups, clean needle distribution and exchange, substitution of safer drugs (e.g., methadone or marijuana), medically administered drug maintenance, disease screening, and acupuncture and other alternative and complementary treatments. Publish the results of studies of these programs.
• Require health insurance providers to cover in-patient and out-patient treatment for substance abuse on the same basis as other chronic health conditions.
• Make all drugs legally available with a prescription by a licensed physician, subject to professional oversight. End the practice of punishing an individual for obtaining, possessing, or using an otherwise illegal substance to treat a medical condition. End the threat to impose sanctions on physicians who treat patients with opiates for alleviation of pain.
• Prohibit civil liberties violations and other intrusive law enforcement practices. Violations of the right to privacy such as urine testing should be imposed only upon employees in safety-sensitive occupations.
• Establish a legal, regulated, and taxed market for marijuana. Treat marijuana as we treat alcohol.
• Modify civil forfeiture laws to require conviction before seizure of assets. Prohibit the eviction of family, friends, and cohabitants or the loss of government entitlements.
• Abolish mandatory minimum prison sentences for the use and distribution of currently illicit drugs. Legislation should specify only maximum prison sentences.
• Remove criminal penalties for possession and use of currently illegal drugs, with drug abusers subject to arrest and imprisonment only if they commit an actual crime (e.g., assault, burglary, impaired driving, vandalism). End sentencing inequities driven by racial profiling.
• Establish and make more accessible prison-based drug treatment, education, job training, and transition programs designed for inmates.
• End the financing of anti-drug campaigns in Central and South America, campaigns that include the widespread spraying of herbicides, contribute to the destruction of rainforests, and are responsible for uprooting peoples from their homelands.
Our Call to Act as a People of Faith
We must begin with ourselves. Our congregations can offer safe space for open and honest discussion among
congregants about the complex issues of drug use, abuse, and addiction. Through acceptance of one another and
encouragement of spiritual growth, we should be able to acknowledge and address our own drug use without fear of
censure or reprisal.
We can recognize that drugs include not only currently illegal substances but also alcohol, nicotine, caffeine, over-the-counter pain relievers, and prescription drugs. We can learn to distinguish among use, abuse, and addiction. We can
support one another in recognizing drug-related problems and seeking help. We can seek to understand those among us
who use drugs for relief or escape. With compassion, we can cultivate reflection and analysis of drug policy. In the safe
space of our own congregations, we can begin to prevent destructive relationships with drugs. We can lend necessary
support to individuals and families when a loved one needs treatment for an addiction problem. We can encourage our
congregations to partner with and follow the lead of groups representing individuals whose lives are most severely
undermined by current drug policy—people of color and of low income. We can learn from health care professionals
what unique patterns of substance abuse exist in our local areas. We can go beyond our walls and bring our perspective
to the interfaith community, other nonprofit organizations, and elected officials.
Our Unitarian Universalist history calls us to pursue a more just world. Our faith compels us to hold our leaders
accountable for their policies. In calling for alternatives to the war on drugs, we are mindful of its victims. Drug use
should be addressed solely as a public health problem, not as a criminal justice issue. Dependence upon any illegal
drugs or inappropriate use of legal drugs may point to deep, unmet human needs. We have a moral obligation to
advocate compassionate, harm-reducing policy. We believe that our nations have the imagination and capability to
address effectively the complex issues of the demand for drugs, both legal and illegal.
We reaffirm the spirit of our social witness positions taken on drugs in resolutions adopted from 1965 to 1991.
Recognizing the right of conscience for all who differ, we denounce the war on drugs and recommend alternative goals
and policies. Let not fear or any other barrier prevent us from advocating a more just, compassionate world.
Source Citation:
Unitarian Universalist Association. "Drug Policies Should Be Liberalized." Drug Legalization. Ed. Karen F. Balkin. San Diego:
Greenhaven Press, 2005. Current Controversies. Rpt. from "Alternatives to the War on Drugs." 2002. Gale Opposing Viewpoints
In Context. Web. 10 Apr. 2012.
Liberalizing Drug Policies Would Increase Crime and Violence
An oft-repeated mantra of both the liberal left and the far right is that antidrug laws do greater harm to society than
illicit drugs. To defend this claim, they cite high rates of incarceration in the United States compared with more drug-tolerant societies. In this bumper-sticker vernacular, the drug war in the United States has created an "incarceration
nation."
But is it true? Certainly rates of incarceration in the United States are up (and crime is down). Do harsh antidrug laws
drive up the numbers? Are the laws causing more harm than the drugs themselves? These are questions worth
exploring, especially if their presumptive outcome is to change policy by, say, decriminalizing drug use.
It is, after all, an end to the "drug war" that both the left and the right say they want. For example, William F. Buckley
Jr. devoted the Feb. 26, 1996, issue of his conservative journal, National Review, to "the war on drugs," announcing
that it was lost and bemoaning the overcrowding in state prisons, "notwithstanding that the national increase in prison
space is threefold since we decided to wage hard war on drugs." James Gray, a California judge who speaks often on
behalf of drug-decriminalization movements, devoted a major section of his book, Why Our Drug Laws Have Failed
and What We Can Do About It, to what he calls the "prison-industrial complex." Ethan Nadelmann, executive director
of the Drug Policy Alliance and perhaps the most unabashed of the "incarceration-nation" drumbeaters, says in his Web
article, "Eroding Hope for a Kinder, Gentler Drug Policy," that he believes "criminal-justice measures to control drug
use are mostly ineffective, counterproductive and unethical" and that administration "policies are really about punishing
people for the sin of drug use." Nadelmann goes on to attack the drug-court system as well, which offers treatment in
lieu of incarceration, as too coercive since it uses the threat of the criminal-justice system as an inducement to stay the
course on treatment.
False Assertions Win Converts for Decriminalizers
In essence, the advocates of decriminalization of illegal drug use assert that incarceration rates are increasing because
of bad drug laws resulting from an inane drug war, most of whose victims otherwise are well-behaved citizens who
happen to use illegal drugs. But that infraction alone, they say, has led directly to their arrest, prosecution and
imprisonment, thereby attacking the public purse by fostering growth of the prison population.
Almost constant repetition of such assertions, unanswered by voices challenging their validity, has resulted in the
decriminalizers gaining many converts. This in turn has begotten yet stronger assertions: the drug war is racist (because
the prison population is overrepresentative of minorities); major illegal drugs are benign (ecstasy is "therapeutic,"
"medical" marijuana is a "wonder" drug, etc.); policies are polarized as "either-or" options ("treatment not
criminalization") instead of a search for balance between demand reduction and other law-enforcement programs; harm
reduction (read: needle distribution, heroin-shooting "clinics," "safe drug-use" brochures, etc.) becomes the only
"responsible" public policy on drugs.
But the central assertion, that drug laws are driving high prison populations, begins to break down upon closer scrutiny.
Consider these numbers from the U.S. Bureau of Justice Statistics compilation, Felony Sentences in State Courts, 2000.
Across the United States, state courts convicted about 924,700 adults of a felony in 2000. About one-third of these
(34.6 percent) were drug offenders. Of the total number of convicted felons for all charges, about one-third (32 percent)
went straight to probation. Some of these were rearrested for subsequent violations, as were other probationers from
past years. In the end, 1,195,714 offenders entered state correctional facilities in 2000 for all categories of felonies. Of
that number, 21 percent were drug offenders. Seventy-nine percent were imprisoned for other crimes.
Therefore, about one-fifth of those entering state prisons in 2000 were there for drug offenses. But drug offenses
comprise a category consisting of several different charges, of which possession is but one. Also included are
trafficking, delivery and manufacturing. Of those incarcerated for drug offenses only about one-fourth (27 percent)
were convicted of possession. One-fourth of one-fifth is 5 percent. Of that small amount, 13 percent were incarcerated
for marijuana possession, meaning that in the end less than 1 percent (0.73 percent to be exact) of all those incarcerated
in state-level facilities were there for marijuana possession. The data are similar in state after state. At the high end, the
rates stay under 2 percent. Alabama's rate, for example, was 1.72 percent. At the low end, it falls under one-tenth of 1
percent. Maryland's rate, for example, was 0.08 percent. The rate among federal prisoners is 0.27 percent.
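As a rough check of the arithmetic above, the chain of rounded percentages quoted from the BJS data (21 percent of prison admissions for drug offenses, 27 percent of those for possession, 13 percent of those for marijuana possession) multiplies out as follows:

$$0.21 \times 0.27 \times 0.13 \approx 0.0074$$

That is roughly seven-tenths of 1 percent, consistent with the 0.73 percent figure obtained from the unrounded data.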
If we consider cocaine possession, the rates of incarceration also remain low: 2.75 percent for state inmates, 0.34
percent for federal. The data, in short, present a far different picture from the one projected by drug critics such as
Nadelmann, who decries the wanton imprisonment of people whose offense is only the "sin of drug use."
Drug Laws Are Not Harmful
But what of those who are behind bars for possession? Are they not otherwise productive and contributing citizens
whose only offense was smoking a joint? If Florida's data are reflective of the other states, and there is no reason why
they should not be, the answer is no. In early 2003, Florida had a total of 88 inmates in state prison for possession of
marijuana out of an overall population of 75,236 (0.12 percent). And of those 88, 40 (45 percent) had been in prison
before. Of the remaining 48 who were in prison for the first time, 43 (90 percent) had prior probation sentences and the
probation of all but four of them had been revoked at least once. Similar profiles appear for those in Florida prisons for
cocaine possession (3.2 percent of the prison population in early 2003). They typically have extensive arrest histories
for offenses ranging from burglary and prostitution to violent crimes such as armed robbery, sexual battery and
aggravated assault. The overwhelming majority (70.2 percent) had been in prison before. Of those who had not been
imprisoned previously, 90 percent had prior probation sentences and the supervision of 96 percent had been revoked at
least once.
The notion that harsh drug laws are to blame for filling prisons to the bursting point, therefore, appears to be dubious.
Simultaneously, the proposition that drug laws do more harm than illegal drugs themselves falls into disarray even if
we restrict our examination to the realm of drugs and crime, overlooking the extensive damage drug use causes to
public health, family cohesion, the workplace and the community.
Law-enforcement officers routinely report that the majority (i.e., between 60 and 80 percent) of crime stems from a
relationship to substance abuse, a view that the bulk of crimes are committed by people who are high, seeking ways to
obtain money to get high, or both. These observations are supported by the data. The national Arrestee Drug Abuse
Monitoring (ADAM) program reports on drugs present in arrestees at the time of their arrest in various urban areas
around the country. In 2000, more than 70 percent of people arrested in Atlanta had drugs in their system; 80 percent in
New York City; 75 percent in Chicago; and so on. For all cities measured, the median was 64.2 percent. The results are
equally disturbing for cocaine use alone, according to Department of Justice statistics for 2000. In Atlanta, 49 percent
of those arrested tested positive for cocaine; in New York City, 49 percent; in Chicago, 37 percent. Moreover, more
than one-fifth of all arrestees reviewed in 35 cities around the nation had more than one drug in their bodies at the time
of their arrest, according to the National Household Survey on Drug Abuse.
If the correlation between drug use and criminality is high for adults, the correlation between drug use and misbehavior
among youth is equally high. For children ages 12 to 17, delinquency and marijuana use show a proportional
relationship. The greater the frequency of marijuana use, the greater the incidents of cutting class, stealing, physically
attacking others and destroying other people's property.
A youth who smoked marijuana six times in the last year was twice as likely to physically attack someone else as one
who didn't smoke marijuana at all. A child who smoked marijuana six times a month in the last year was five times as
likely to assault another as a child who did not smoke marijuana. Both delinquent and aggressive antisocial behavior
were linked to marijuana use: the more marijuana, the worse the behavior.
Strict Drug Laws Are Necessary
Even more tragic is the suffering caused to children by substance abuse within their families. A survey of state child-welfare agencies by the National Committee to Prevent Child Abuse found substance abuse to be one of the top two
problems exhibited by 81 percent of families reported for child maltreatment. Additional research found that chemical
dependence is present in one-half of the families involved in the child-welfare system. In a report entitled No Safe
Haven: Children of Substance-Abusing Parents, the National Center on Addiction and Substance Abuse at Columbia
University estimates that substance abuse causes or contributes to seven of 10 cases of child maltreatment and puts the
federal, state and local bill for dealing with it at $10 billion.
Are the drug laws, therefore, the root of a burgeoning prison population? And are the drug laws themselves a greater
evil than the drugs themselves? The answer to the first question is a clear no. When we restricted our review to
incarcerated felons, we found only about one-fifth of them were in prison for crimes related to drug laws. And even the
minuscule proportion who were behind bars for possession seemed to have serious criminal records that indicate
criminal behavior well beyond the possession charge for which they may have plea-bargained, and it is noteworthy that
95 percent of all convicted felons in state courts in 2000 pleaded guilty, according to the Bureau of Justice Statistics.
The answer to the second question also is no. Looking only at crime and drugs, it is apparent that drugs drive crime.
While it is true that no traffickers, dealers or manufacturers of drugs would be arrested if all drugs were legal, the same
could be said of drunk drivers if drunken driving were legalized. Indeed, we could bring prison population down to
zero if there were no laws at all. But we do have laws, and for good reason. When we look beyond the crime driven by
drugs and factor in the lost human potential, the family tragedies, massive health costs, business losses and
neighborhood blights instigated by drug use, it is clear that the greater harm is in the drugs themselves, not in the laws
that curtail their use.
Source Citation:
McDonough, James R. "Liberalizing Drug Policies Would Increase Crime and Violence." Drug Legalization. Ed. Karen F. Balkin. San
Diego: Greenhaven Press, 2005. Current Controversies. Rpt. from "Critics Scapegoat the Antidrug Laws: Advocates Pushing for
Decriminalization of Drug Use Blame the War on Drugs for Creating an 'Incarceration Nation.' But a Hard Look at the Facts
Proves Otherwise." Insight on the News (10 Nov. 2003). Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Freedom of Speech
Freedom of speech is one of several rights guaranteed in the First Amendment to the United States Constitution. It is
considered to be one of the foundations of a democratic society, since the expression of varying viewpoints is
fundamental to the notion of a government freely chosen by its citizens. However, freedom of speech is not an absolute
right; there are many ways in which personal expression can be limited for safety, privacy, and other concerns. In
addition, at least one recent United States Supreme Court ruling has led to a broader understanding of what constitutes
free speech in the eyes of the law, as well as how the principle of free speech is applied at the group level rather than to
individual citizens.
The Origin and Evolution of Free Speech in America
For the framers of the United States Constitution, the issue of freedom of speech was an important one. The right to
express views critical of the British government was invaluable in rallying support from colonial citizens in favor of the
American Revolution. At the time, it was illegal for a British subject to make statements intended to damage or subvert
the British government—even if the statements were true. This crime was known as sedition, and it remained a part of
English common law until 2009, though modern charges of sedition were rare. When the United States Constitution
was ratified in 1788, some state representatives were critical because the original document did not contain specific
protections for the rights of American citizens, such as the right to free speech. In 1791, the first ten amendments to the
Constitution, collectively known as the Bill of Rights, were added in an effort to address these concerns.
However, American freedom of speech faced its first challenge just seven years later with the passage of the Sedition
Act under President John Adams. The law forbade any criticism of the U.S. government or its president, much like the
sedition laws of England; Thomas Jefferson, who served as Adams’s vice president, was a vocal critic of the law,
which he viewed as unconstitutional. The unpopularity of the Sedition Act played at least some role in the 1800
presidential election, in which Jefferson defeated Adams, and the Sedition Act expired as Adams left office.
Supporters of sedition laws throughout history have asserted that criticism directed at the government poses a genuine
threat to the continued effectiveness of that government, rather than being a legitimate attempt to improve the existing system. A
similar U.S. law, the Espionage Act of 1917 (later modified and known as the Sedition Act of 1918), made it illegal to
speak out against American involvement in World War I. Labor leader Eugene V. Debs was one of many Americans
arrested and jailed for making public statements condemning the drafting of American soldiers for the war. The law
was repealed on December 13, 1920, but Debs remained in jail until President Warren G. Harding commuted his
sentence more than one full year later.
Another law, the Smith Act, was instituted in 1940, even before the United States became involved in World War II.
Although this law was written to outlaw speech aimed at overthrowing the government, it was interpreted broadly
enough to allow the prosecution of anyone critical of the government, including several workers’ rights groups. Unlike
the previous laws, the Smith Act remains a part of U.S. law even though it is widely acknowledged that much of its
application in the past has been unconstitutional.
Modern Views on Free Speech
Aside from national security, there are other situations in which free speech may also be limited. Statements intended to
incite panic or riot, such as the famous example of shouting “Fire!” in a crowded theater when no fire exists, are illegal.
Sexually explicit or otherwise obscene material may be considered illegal depending upon the state and community in
which the viewer lives, based on local obscenity laws. Privately held businesses may also restrict the speech rights of
their workers during their work shifts as a condition of employment.
One fairly recent development in the debate over free speech is the issue of “hate speech.” Hate speech is defined as
speech or conduct that negatively targets a group or individual based on race, religion, or sexual orientation. In recent
years, many countries around the world have instituted laws banning hate speech to various degrees; these countries
include India, Germany, Brazil, Canada and the United Kingdom. In the United States, however, hateful speech
regarding individuals or groups is largely protected under the First Amendment. An individual can still be prosecuted
under other laws for violating the rights of a person or group, such as threatening harm or violence.
One recent notable case involving hate speech in the United States is Snyder v. Phelps, argued in front of the Supreme
Court in October 2010. The plaintiff in the case was Albert Snyder, the father of a U.S. Marine who died in 2006 while
serving in Iraq. During his son’s funeral, a group known as the Westboro Baptist Church—led by pastor Fred Phelps—
staged a demonstration near the church where the funeral was held, praising God for the killing of American soldiers
and condemning the American military for its inclusion of homosexuals in its ranks. In the original jury trial, held in Maryland in 2007, the
jury found in favor of Snyder and awarded him more than $10 million in damages. In 2009, however, an appeals court
reversed the jury’s verdict, stating that the demonstrators were protected by the First Amendment’s guarantee of
freedom of speech. The case was recently decided in favor of the Westboro Baptist Church and the funeral protesters
by the United States Supreme Court in an 8-1 decision on March 2, 2011.
Freedom of speech issues can also apply to groups that function as a single entity, such as corporations. The 2002
Bipartisan Campaign Reform Act restricted corporations, nonprofit groups, and unions from paying for advertisements
that support a political candidate in a general or primary election. This was intended to create a more level playing
field, since candidates without wealthy supporters would end up with far fewer ads than those backed by large
organizations. This was in many ways an extension of existing campaign finance laws, which restrict individual
donations to $1000 for any single political candidate. However, in the 2010 Supreme Court ruling Citizens United v.
Federal Election Commission, the Court ruled that such a restriction on group-funded ads was a violation of free
speech, opening the door for organizations to run as many ads as they want favoring one candidate over another. The
ruling has proven widely unpopular among politicians and the public alike, some of whom assert that it allows wealthy
supporters to “buy” elections through overwhelming media coverage under the guise of free speech.
Source Citation:
"Freedom of Speech." Opposing Viewpoints Online Collection. Gale, Cengage Learning, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Hate Speech on the Internet Should Be Regulated
"It comes as no surprise that the Internet is being used to recruit, disseminate, and incite hatred."
Ronald Eissens is the secretariat for the International Network Against Cyber Hate (INACH), an organization based in Amsterdam,
the Netherlands. In the following viewpoint, Eissens declares that online hate speech should be regulated to deter extremist
groups from using it as a tool to incite racist, religious, or discriminatory violence and crime in real life. He claims that hate
groups use the Internet to spread their hateful messages and to threaten and target individuals and organizations through hit
lists. Also, Eissens states that regulating online hate speech does not aim to change the bigoted or extremist ideologies of
individuals or restrict the freedom of expression, but to deter hate crimes.
As you read, consider the following questions:
1. In the author's view, what causes people to abandon citizenship?
2. How does Eissens support his position that the prohibition of hate speech is not contradictory to free speech?
3. According to Eissens, what is the best protection against hate speech?
Racism, anti-Semitism, Islamophobia, discrimination—more than ever since the Second World War, hate and its
ideologies are alive and well, giving rise to misery, conflict, murder, genocide and war. Humanity is repeating its
historical mistakes, seemingly unable to learn from the past. The political climate in this world is hardening. The
contemporary problems are not only racism, anti-Semitism, Islamophobia and other forms of hate, but also the fact that
the idea of citizenship is by and large being abandoned in favor of ethnic, religious and political agendas, which give
rise to more 'us and them' feelings on which extremists and fundamentalists feed. Liberals and moderates on all sides
are either not being heard or have to shout so loudly that they are being lumped together with the extremists, extremists
who, in order to get more support, use the Internet as their tool of choice.
It comes as no surprise that the Internet is being used to recruit, disseminate and incite hatred. The Internet is the
biggest information and communication device in the world and neo-Nazis saw the potential in its very early stages,
using bulletin board systems (BBS) as early as the pre-World Wide Web age. By now, the number of extremist Web
sites runs into the tens of thousands. Hate on the Net has become a virtual nursery for In Real Life crime, the 'Real Life'
bit becoming a moot point, since the Internet is an integral part of society, not a separate entity. It is just the latest in
communication and dissemination tools which can, as any other tool, be used or abused. Incitement through electronic
means is not different from incitement by traditional means. In that sense, you could ask yourself if there is a relation
between a paper pamphlet with the text 'kill all Muslims' being handed out in the streets and the actual killing of
Muslims. The direct link between those acts would have to be proved, but you can say with certainty that calling for the
killing of Muslims in a pamphlet (or by other means like the Internet) is incitement and adds to a negative atmosphere
towards Muslims, raising the probability of violence. The linkage between racist speech and violations of individual
civil liberties is as topical as newspaper headlines. What's more, to some of us it is an everyday reality.
Little Sparks
Little sparks can kindle big fires, which was proved by all the hate speech and dehumanization that was dished out by
media (including the Internet) during the Balkan war, conditioning the public to support any new conflict.
As you will see, the most dominant examples in this booklet of the linkage between incitement on the Internet and
actual hate crimes In Real Life are the cases in which Web sites or e-mail were used to deliver the message in the shape
of threats, incitement or online hit lists targeting individuals or organizations, sometimes with terrible results.
We all have our separate responsibilities in dealing with incitement to hatred on the Internet: industry, NGOs
[nongovernmental organizations] and governments. That does not mean that we can't cooperate. In fact, it is
imperative that we do. Hate has consequences that go further than violence and murder; hate disrupts society in all of
its facets, including government and commerce.
In combating hate on the Internet, we do not aim to hinder free speech, nor do we think we will be able to 'change the
hearts and minds' of hatemongers. There will always be people who hate. Rather, by using the various national and
international anti-hate speech legislation, we aim to curb the communication of hate speech, thereby preventing the
recruitment of others who do not yet hate and preventing In Real Life hate crime.
It's a Crime
As for free speech, the European antiracist maxim goes, 'Racism is not an opinion, it's a crime,' or to quote Sartre on
anti-Semitism, 'it is not an idea as such, it's a passion.' Having said that, we do recognize and support free speech as an
important value in any democratic society. However, we do strongly oppose free speech extremism, the idea that even
incitement to murder can be considered free speech, as was the case with the 'Nuremberg Files' [in which an
antiabortion activist posted the personal data of abortion clinic workers online to target them for violence], or the abuse
of free speech as a means of propagating hate speech and incitement to violence. People tend to think that freedom of
speech and the prohibition of hate speech are contradictory. They're not. If we look at the UN International Covenant
on Civil and Political Rights, we will see that article 20 quite clearly states that
any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be
prohibited by law, while article 19 states that everyone shall have the right to hold opinions without interference. Everyone shall
have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all
kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice. The
exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore
be subject to certain restrictions, but these shall only be such as are provided by law and are necessary: (a) For respect of the
rights or reputations of others; (b) For the protection of national security or of public order (ordre public), or of public health or
morals.
So 143 countries think those are not opposing or conflicting obligations. In fact, most constitutions of Western states
show more or less the same situation; an article prohibiting hate speech or discrimination, in close companionship with
one securing the freedom of speech. Even the Constitution of the United States (and for that matter, US jurisprudence),
much quoted by freedom of speech advocates, does recognize situations in which hate speech can be harmful and
should be illegal, for the simple fact that, whereas freedom of speech is a condition for a successful democracy,
tolerance is essential for the survival of a democracy. Were hate speech allowed to run rampant, democracy would in
the end be destroyed and tyranny would result, bringing with it the abolition of free speech....
[A]t the end of the day, we must conclude that the differences in legislation between the US and Europe are not as big
as often perceived. Moreover, neither 'side' will change its constitution, but that is also not necessary. As our work
shows, US-European cooperation in fighting hate comes quite easily and is successful. As most Internet providers in the
United States have terms of service that strictly prohibit the dissemination of hate speech through their services, it is not
as hard as it seems to get material like that removed.
The Best Protection
However, in the end the best protection against hate speech, which can be implemented everywhere no matter what, is
education, teaching how information on the Internet can be assessed for its validity and how to recognize the rhetoric of
hate. Lots of low-profile Web sites and hate language on Web forums never come to the attention of law enforcement
or of agencies that combat hate on the Net. By and large, it is this material that creates an atmosphere of hate and
intolerance, and ultimately generates an environment in which hate becomes acceptable behaviour to people who are
infected with prejudiced information. Young people especially run the risk of being misled, indoctrinated and recruited. We
think it is imperative to educate and promote attitude change....
Again, the relation between hate speech (online or off-line) and hate crime is not a question, but an everyday reality.
After all, Auschwitz was not built out of bricks and stones; it was built with words. Words that were also at the roots of
the Rwandan genocide, the Balkan war and other massacres.
Source Citation:
Eissens, Ronald. "Hate Speech on the Internet Should Be Regulated." Hate on the Net: Virtual Nursery for In Real Life Crime.
Amsterdam, The Netherlands: International Network Against Cyber Hate (INACH), 2004. Rpt. in Civil Liberties. Ed. Auriana Ojeda.
San Diego: Greenhaven Press, 2004. Opposing Viewpoints. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Hate Speech on the Internet Should Not Be Regulated
"Initiatives to combat online hate speech threaten to neuter the Internet's most progressive attribute—the fact that anyone,
anywhere, who has a computer and a connection, can express themselves freely on it."
In the following viewpoint, Sandy Starr argues that regulating hate speech on the Internet restricts free speech. For example,
Starr claims that such initiatives call for online censorship that could be applied to the Bible or Qur'an and artistic and
documentary works. The author also maintains that this impedes free and open debate, which is needed to criticize bigoted
views and opinions. And he insists that the Internet gives the most radical and fanatical voices false degrees of prevalence and
legitimacy. Starr is the science and technology editor for spiked, a politics and culture online magazine based in London, England.
As you read, consider the following questions:
1. According to Starr, what action does the Internet Watch Foundation advise when encountering racist content online?
2. How does the author support his assertion that hate speech is statistically very uncommon on the World Wide Web?
3. In Starr's view, how could the Remove Jew Watch campaign have more effectively dealt with the Web site Jew Watch?
The Internet continues to be perceived as a place of unregulated and unregulable anarchy. But this impression is
becoming less and less accurate, as governments seek to monitor and rein in our online activities.
Initiatives to combat online hate speech threaten to neuter the Internet's most progressive attribute—the fact that
anyone, anywhere, who has a computer and a connection, can express themselves freely on it. In the UK, regulator the
Internet Watch Foundation (IWF) advises that if you "see racist content on the Internet", then "the IWF and police will
work in partnership with the hosting service provider to remove the content as soon as possible".
The presumption here is clearly in favour of censorship—the IWF adds that "if you are unsure as to whether the content
is legal or not, be on the safe side and report it". Not only are the authorities increasingly seeking out and censoring
Internet content that they disapprove of, but those sensitive souls who are most easily offended are being enlisted in this
process, and given a veto over what the rest of us can peruse online.
Take the Additional Protocol to the [international treaty] Convention on Cybercrime, which seeks to prohibit "racist
and xenophobic material" on the Internet. The Additional Protocol defines such material as "any written material, any
image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or
violence, against any individual or group of individuals, based on race, colour, descent or national or ethnic origin, as
well as religion if used as a pretext for any of these factors". It doesn't take much imagination to see how the Bible or
the Qur'an could fall afoul of such extensive regulation, not to mention countless other texts and artistic and
documentary works.
In accordance with the commonly stated aim of hate speech regulation, to avert the threat of fascism, the Additional
Protocol also seeks to outlaw the "denial, gross minimisation, approval or justification of genocide or crimes against
humanity". According to the Council of Europe, "the drafters considered it necessary not to limit the scope of this
provision only to the crimes committed by the Nazi regime during the Second World War and established as such by
the Nuremberg Tribunal, but also to genocides and crimes against humanity established by other international courts set
up since 1945 by relevant international legal instruments."
Disconcertingly Authoritarian
This is an instance in which the proponents of hate speech regulation, while ostensibly guarding against the spectre of
totalitarianism, are behaving in a disconcertingly authoritarian manner themselves. Aside from the fact that Holocaust
revisionism can and should be contested with actual arguments, rather than being censored, the scale and causes of later
atrocities such as those in Rwanda or former Yugoslavia are still matters for legitimate debate—as is whether the term
"genocide" should be applied to them. The European authorities claim to oppose historical revisionism, and yet they
stand to enjoy new powers that will entitle them to impose upon us their definitive account of recent history, which we
must then accept as true on pain of prosecution.
Remarkably, the restrictions on free speech contained in the Additional Protocol could have been even more severe.
Apparently, "the committee drafting the Convention discussed the possibility of including other content-related
offences", but "was not in a position to reach consensus on the criminalisation of such conduct". Still, the Additional
Protocol as it stands is a significant impediment to free speech, and an impediment to the process of contesting bigoted
opinions in free and open debate. As one of the Additional Protocol's more acerbic critics remarks: "Criminalising
certain forms of speech is scientifically proven to eliminate the underlying sentiment. Really, I read that on a match
cover."
Putting the Internet into Perspective
The Internet lends itself to lazy and hysterical thinking about social problems. Because of the enormous diversity of
material available on it, people with a particular axe to grind can simply log on and discover whatever truths about
society they wish to. Online, one's perspective on society is distorted. When there are so few obstacles to setting up a
Web site, or posting on a message board, all voices appear equal.
The Internet is a distorted reflection of society, where minority and extreme opinion are indistinguishable from the
mainstream. Methodological rigour is needed, if any useful insights into society are to be drawn from what one finds
online. Such rigour is often lacking in discussions of online hate speech.
For example, the academic Tara McPherson has written about the problem of deep-South redneck Web sites—what she
calls "the many outposts of Dixie in cyberspace". As one reads through the examples she provides of neo-Confederate
eccentrics, one could be forgiven for believing that "The South Will Rise Again", as the flags and bumper stickers put
it. But by that token, the world must also be under dire threat from paedophiles, Satanists, and every other crackpot to
whom the Internet provides a free platform.
"How could we narrate other versions of Southern history and place that are not bleached to a blinding whiteness?"
asks McPherson, as though digital Dixie were a major social problem. In its present form, the Internet inevitably
appears to privilege the expression of marginal views, by making it so easy to express them. But we must remember
that the mere fact of an idea being represented online, does not grant that idea any great social consequence.
A Platform for Our Beliefs
Of course, the Internet has made it easier for like-minded individuals on the margins to communicate and collaborate.
Mark Potok, editor of the Southern Poverty Law Center's Intelligence Report—which "monitors hate groups and
extremist activities"—has a point when he says: "In the 1970s and 80s the average white supremacist was isolated,
shaking his fist at the sky in his front room. The net changed that." French minister of foreign affairs Michel Barnier
makes a similar point more forcefully, when he says: "The Internet has had a seductive influence on networks of
intolerance. It has placed at their disposal its formidable power of amplification, diffusion and connection."
But to perceive this "power of amplification, diffusion and connection" as a momentous problem is to ignore its
corollary—the fact that the Internet also enables the rest of us to communicate and collaborate, to more positive ends.
The principle of free speech benefits us all, from the mainstream to the margins, and invites us to make the case for
what we see as the truth. New technologies that make it easier to communicate benefit us all in the same way, and we
should concentrate on exploiting them as a platform for our beliefs, rather than trying to withdraw them as a platform
for other people's beliefs.
We should always keep our wits about us, when confronted with supposed evidence that online hate speech is a
massive problem. A much-cited survey by the Web and e-mail filtering company SurfControl concludes that there was
a 26 percent increase in "Web sites promoting hate against Americans, Muslims, Jews, homosexuals and African
Americans, as well as graphic violence" between January and May 2004, "nearly surpassing the growth in all of 2003".
But it is far from clear how such precise quantitative statistics can be derived from subjective descriptions of the
content of Web sites, and from a subjective emotional category like "hate".
Stirring Up a Panic
The SurfControl survey unwittingly illustrates how any old piece of anecdotal evidence can be used to stir up a panic over
Internet content, claiming: "Existing sites that were already being monitored by SurfControl have expanded in shocking
or curious ways. Some sites carry graphic photos of dead and mutilated human beings." If SurfControl had got in touch
with me a few years earlier, I could still easily have found a few photos of dead and mutilated human beings on the
Internet for them to be shocked by. Maybe then, they would have tried to start the same panic a few years earlier? Or
maybe they wheel out the same shocking claims every year, in order to sell a bit more of their filtering software—who
knows?
Certainly, it's possible to put a completely opposite spin on the amount of hate speech that exists on the Internet. For
example, Karin Spaink, chair of the privacy and digital rights organization Bits of Freedom, concludes that "slightly
over 0.015 percent of all Web pages contain hate speech or something similar"—a far less frightening assessment.
It's also inaccurate to suggest that the kind of Internet content that gets labelled as hate speech goes unchallenged.
When it transpired that the anti-Semitic Web site Jew Watch ranked highest in the search engine Google's results for
the search term "Jew", a Remove Jew Watch campaign was established, to demand that Google remove the offending
Web site from its listings. Fortunately for the principle of free speech, Google did not capitulate to this particular
demand—even though in other instances, the search engine has been guilty of purging its results, at the behest of
governments and other concerned parties.
Forced to act on its own initiative, Remove Jew Watch successfully used Googlebombing—creating and managing
Web links in order to trick Google's search algorithms into associating particular search terms with particular results—
to knock Jew Watch off the top spot. This was fair game, and certainly preferable to Google (further) compromising its
ranking criteria. Better still would have been either a proper contest of ideas between Jew Watch and Remove Jew
Watch, or alternatively a decision that Jew Watch was beneath contempt and should simply be ignored. Not every
crank and extremist warrants attention, even if they do occasionally manage to spoof search engine rankings.
Entirely Inadequate
If we ask the authorities to shield us from hate speech today, the danger is that we will be left with no protection from
those same authorities tomorrow, once they start telling us what we're allowed to read, watch, listen to, and download.
According to the Additional Protocol to the Convention on Cybercrime, "national and international law need to provide
adequate legal responses to propaganda of a racist and xenophobic [fear of foreigners] nature committed through
computer systems". But legal responses are entirely inadequate for this purpose. If anything, legal responses to hateful
opinions inadvertently bolster them, by removing them from the far more effective and democratic mechanism of
public scrutiny and political debate.
"Hate speech" is not a useful way of categorizing ideas that we find objectionable. Just about the only thing that the
category does usefully convey is the attitude of the policy makers, regulators and campaigners who use it. Inasmuch as
they can't bear to see a no-holds-barred public discussion about a controversial issue, these are the people who really
hate speech.
Source Citation:
Starr, Sandy. "Hate Speech on the Internet Should Not Be Regulated." The Media Freedom Internet Cookbook. Vienna, Austria:
Organization for Security and Co-operation in Europe (OSCE), 2004. Rpt. in Civil Liberties. Ed. Auriana Ojeda. San Diego:
Greenhaven Press, 2004. Opposing Viewpoints. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Gambling
Gambling, or "gaming," is betting money on any game or event. It takes a variety of forms, from nickel-and-dime poker
to state-sponsored lotteries. Different forms of gambling are legal in different parts of the United States.
Americans disagree on the extent to which the government should control or prohibit gambling. Some people claim that
gambling is a dangerous and potentially addictive habit that can harm people emotionally and socially as well as
financially. Some religions discourage gambling as well. Others say that for most people, gambling is a harmless form
of entertainment. They argue that legalized gambling can actually benefit the economy without causing serious social
problems.
Forms of Gambling
In the early 1900s, most forms of gambling were illegal in the United States. However, legalized gambling has been on
the rise since the 1950s. Only two states—Hawaii and Utah—prohibit all forms of gambling. Other states allow various
forms, including casinos, state lotteries, and betting on sporting events.
Casinos are establishments where people can place bets on games. They commonly offer a variety of card games, dice
games, and games of chance. In 1931, Nevada became the first state to allow casino gambling. New Jersey followed in
1978, making casinos legal in Atlantic City. In 2009, legal casinos were operating in forty-seven states. Today,
different states have different regulations on casinos. In some states, they are still prohibited entirely. In others, casinos
may only be run on the water, like riverboat casinos. Some allow casinos so long as there are no games in which
players play against the house.
Casinos also exist on Indian reservations throughout the country. The Indian Gaming Regulatory Act, passed in 1988,
declared that Native American tribes have the right to run gaming establishments on their reservations, as long as they
are in a state that permits some form of gambling. By 1998, nearly three hundred Indian-operated casinos existed in
thirty-one states. Casinos have generated wealth and increased employment rates among Native Americans. However,
many Native Americans, especially older people, consider the casinos a threat to their traditional values and way of
life.
The newest form of casino gambling is the online casino, which allows players to place bets over the Internet. Online
casinos raise complicated legal issues. For example, if casinos are only legal in certain parts of a state, is it legal to
make online casinos available in other parts of the state? If players are placing bets on the outcome of a game in a real,
legal casino in another country, does that mean they are actually gambling in that country and not in their homes?
Because of these legal problems, American companies have been reluctant to invest in online casinos. Nonetheless,
consumers spent about $3 billion in online casinos in 2000. Further restrictions on Internet gambling came when
Congress passed the Unlawful Internet Gambling Enforcement Act of 2006. The purpose of the law is to prevent the
use of certain kinds of payment, credit cards, and fund transfers for unlawful Internet gambling.
Another common form of legal gambling is the state lottery. A lottery is a drawing in which people purchase tickets. A
ticket number is selected at random and anyone holding a ticket with that number wins a cash prize. The first state
lottery opened in New Hampshire in 1964. By 2009, lotteries were operating in forty-one states, the District of
Columbia, and Puerto Rico.
Issues Related to Gambling
Gambling is big business in the United States. In 2006, Americans gambled away $91 billion—more than they spent on
recorded music, video games, movies, sports tickets, and theme parks combined. Many studies suggest that the people
who spend the most money on gambling are those who can least afford it.
In theory, state-sponsored lotteries can benefit low-income people. They allow governments to raise money they need
for social services without increasing taxes. Since buying lottery tickets is optional, it is not considered a tax. However,
people with low incomes are much more likely to spend money on the lottery because they may see it as their only
chance to get rich. A study in Michigan found that people with incomes under $10,000 per year spent the same amount
each year on lottery tickets as people who made $70,000 or more. This meant that they spent eight times as large a
portion of their income on the lottery. Lottery gambling is also more common among African Americans and people
with less education.
Another major problem is pathological gambling, also known as compulsive gambling. Compulsive gamblers are
unable to control their gambling. Over time, they spend more time gambling and bet more money, often driving
themselves into debt. This can lead them to gamble even more in an attempt to recover the money they have lost. A
gambling problem can also harm the gambler’s family. Parents who gamble compulsively may not provide adequate
care for their children, or they may abuse their children physically or emotionally. Moreover, the children are at risk of
becoming pathological gamblers themselves.
Pathological gambling has been described as an addiction, similar to drug or alcohol dependency. Psychiatrists
officially recognized pathological gambling as a mental disorder in 1980. Approximately 5 percent of all people who
gamble are compulsive gamblers. However, this small group accounts for a large portion of all the money spent on
legal gambling. Between one-quarter and two-thirds of the money spent in casinos, on lottery tickets, and at other
gambling establishments comes from problem gamblers.
An organization called Gamblers Anonymous, founded in 1957, helps compulsive gamblers deal with their problems. It
uses the same principles as other twelve-step programs, such as Alcoholics Anonymous.
Source Citation:
"Gambling." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In
Context. Web. 10 Apr. 2012.
A Government Ban on Internet Gambling Is Hypocritical
"Half the proposals these days to curb gambling are really about protecting a gambling monopoly."
In the following viewpoint, Froma Harrop argues that the U.S. government holds a monopoly on gambling and that much of the
legislation to prohibit gambling—including Internet gambling—is created only to strengthen this monopoly. She states that she
has opposed the spread of legalized gambling in the past, but concedes that if one form of gambling is legal, then all forms
should be legal. Harrop is a syndicated columnist and a member of the editorial board of the Providence Journal in Rhode Island.
As you read, consider the following questions:
1. Where is sports betting legal in the United States, according to the author?
2. What hypocrisy exists within the Internet Gambling Prohibition Act, according to Harrop?
3. In the author's view, what are the two "good things" about online gambling?
The FBI is shocked, SHOCKED that Americans will illegally bet an estimated $2.4 billion on March Madness college
basketball [the National Collegiate Athletic Association (NCAA) basketball tournament]. Perhaps they'll round up the
usual suspects—all several million of them.
Gambling Ban Is Immoral
You see, gambling is immoral, except when done through lotteries, keno, off-track betting, Indian casinos, riverboat
casinos, dog races, horse races, jai-alai frontons, card rooms and other wagering venues blessed by the states. And
betting on college sports is especially evil, unless you do it in Las Vegas, whose casinos expect to make about $90
million off March Madness alone.
I have long opposed the proliferation of gambling, but the time's come to give up. Let the state-approved slot machines
multiply—but also the online gambling sites, most of which happen to be (who cares anymore?) illegal.
The monopoly on gambling has become more immoral than the activity itself. Wherever a politician can deliver the
right to virtually print money, corruption breeds. The most colorful example is lobbyist-crook Jack Abramoff, who
made millions defrauding the [Native American] tribes that hired him to guard their casino monopolies.
Bans on Gambling Protect Monopolies
It's against the law to bet on sporting events, with Nevada the grandfathered exception. Casinos in other states want a
piece of the action, but Nevada has opposed changing the law, for obvious reasons. And efforts to simply outlaw
gambling on college games have failed, again with Nevada leading the opposition.
The NCAA basketball tournament is second only to the Super Bowl as the biggest sports-gambling event. An estimated
$4 billion in wagers will be made during March Madness, with online betting sites expected to scoop up a third of the
total. That can't be good news either for the state-approved gambling ventures or the underworld ones. Internet
gambling, most of it run from overseas, is still in its infancy. And it is as unstoppable as it is illegal.
Not that Washington hasn't tried to stifle the online competition. The U.S. Justice Department, for example, ordered
American radio and television stations not to run ads for Internet gambling sites. But Antigua dragged the United States
before the World Trade Organization [WTO] over the matter [in 2004]. The Caribbean nation, home to many of the
Websites, argued that the United States was violating free-trade agreements to protect the industry at home. The WTO
agreed with Antigua.
Congress is considering the Internet Gambling Prohibition Act [in 2006]. It would stop banks and credit card
companies from processing transactions with overseas betting sites. One of the sponsors, Arizona Sen. Jon Kyl,
produced a report [in 2003] explaining why a similar bill was necessary—basically, to protect the social fabric.
Internet betting "encourages youth gambling," Kyl wrote, and "exacerbates pathological gambling." The report ended
with a cymbal crash of hypocrisy: the legislation, Kyl noted, "contains language to ensure the continuation of currently
lawful Internet gambling by the Indian tribes." It comes as no surprise that the latest version does the same.
A Threat to Honest Government
Either we let Americans gamble legally or we don't. The growth of gambling outlets has indeed led to a rise in
bankruptcy, suicide, robbery, embezzlement, divorce and other social ills. And state governments fool no one when
they fund counseling organizations for people brought low by gambling activities that the states themselves raise
revenues from.
But there seems little point in having a Connecticut Council on Problem Gambling when Connecticut has no problem
hosting [the tribal casino] Foxwoods, the biggest slot-machine emporium in the United States, another major casino
nearby and a state lottery. Let it be noted that half the people who call the council's hotline have annual incomes of less
than $35,000. (Foxwoods now wants to open a slots operation on Philadelphia's riverfront.)
While the proliferation of gambling hurts the weakest members of society, its biggest threat right now is to honest
government. Thus, there are two good things about online gambling: One is that it cannibalizes the government-created
monopolies. The other is it's not in everyone's face. The office betting pool, though also illegal, shares the merit of
leaving politicians out of the transaction.
March Madness [is] over in a few days, but the mad rush to carve out exclusive rights to milk the public is year-round.
And remember: Half the proposals these days to curb gambling are really about protecting a gambling monopoly.
Source Citation:
Harrop, Froma. "A Government Ban on Internet Gambling Is Hypocritical." Gambling. Ed. David Haugen and Susan Musser.
Detroit: Greenhaven Press, 2007. Opposing Viewpoints. Rpt. from "The Government's Monopoly on Gambling."
www.realclearpolitics.com. 2006. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Internet Gambling Should Be Curbed to Protect the Economy
"If lawmakers do not aggressively combat the growth of Internet gambling, the effects on our economy will be damaging."
In the following viewpoint, Ryan D. Hammer argues for strict control of Internet gambling, which would reduce the potential
growth of the industry and in return protect the U.S. economy from many of the negative consequences associated with this
growth. After outlining these consequences—including lost tax revenue, consumer credit card problems, and social concerns—
Hammer advocates government legislation as a possible solution to the problem. One course of action he suggests would be to
limit the use of consumer credit cards in making online gambling wagers. Such legislation was proposed in bill H.R. 4411, which
President George W. Bush signed into law in October 2006. Hammer was a graduate student at Indiana University at the time
this paper was published.
As you read, consider the following questions:
1. Why is Internet gambling a loss to states, according to Hammer?
2. Who are the biggest losers when credit cards are used for online gambling, in the author's opinion?
3. What percentage of online gamblers does Hammer say make wagers using credit cards?
The recent explosion of Internet gambling poses serious risks to the U.S. economy. With the U.S. economy slowing
significantly after a decade of expansion [in the 1990s], the impact of Internet gambling will be detrimental. One effect
will be the reduction in tax revenues collected by state and federal governments from legalized gambling operations.
The gambling industry in America represents a significant source of tax revenues to the various jurisdictions in which
gambling operates. A second area that will be affected by the Internet gambling phenomenon is the consumer credit
card industry. Thirdly, Internet gambling harms families, leads to crime, and increases addiction. Although difficult to
quantify, these areas of concern will harm the economy.
Internet Gambling Reduces Tax Revenue
Legal gambling operations in the United States pay millions of dollars in taxes annually to local and federal
governments. Without question, these taxes contribute to the overall revenues in the vast majority of states with
legalized gambling. [According to journalist Patrick Strawbridge] "State and local governments in Iowa collected more
than $197 million in taxes and fees from Iowa casinos and racetracks [in 2000]." The Casino Queen Riverboat in East
St. Louis generates between $10 million and $12 million annually in tax revenues for the city. In addition, the riverboat
casino created more than 1,200 full-time jobs. "Gaming revenues have enabled the city to make dramatic strides in its
quality of life," [states professor Kenneth M. Reardon.] The willingness of states to legalize certain forms of gambling,
such as lotteries, often hinges on revenue shortfalls of their treasuries. During the 1980s, sixteen of the twenty-two
states with the greatest increase in unemployment created lotteries. It is always easier for politicians to support a lottery
or a casino riverboat than to propose a tax increase on their constituents.
When Americans participate in Internet gambling, however, no state budget receives a windfall of revenues. The
money gambled by Americans on the Internet goes to companies that pay no taxes in the United States. With
over $2 billion gambled on the Internet in 2000, the amount of tax revenues that the United States loses is staggering.
Included in this loss of revenues are secondary items purchased when one attends a gambling facility, such as food,
souvenirs, and clothing....
While any gambler desires to win money, the depression of losing can be somewhat alleviated when the money is being
reinvested to improve the economy. This is the case when people lose money in regulated gambling environments. For
example, when an individual buys a lottery ticket at a convenience store, a portion of the cost of that ticket will be used
to improve education or to build better roads. When an individual plays an online lottery, the proceeds are not
reinvested to improve any government projects. Legal gambling operations are permitted to function in the United
States when they comply with strict regulations such as accounting procedures. No such procedures exist in the world
of Internet gambling, which deprives the United States of millions of dollars annually in tax revenues.
Internet Gambling Threatens Credit-Card Industry
Internet gambling places banks and credit card companies in a precarious position. On the one hand, these institutions
can profit greatly by offering credit to individuals to gamble online. Credit card charges for Internet gambling are often
posted as cash advances, which carry higher interest rates than ordinary purchases. The cash advance rate for most
credit cards exceeds 20%. The downside to credit card companies stems from the processing of Internet gambling
transactions. Numerous lawsuits are filed by individuals who have lost money gambling online and who refuse to pay
their gambling debts. These lawsuits could leave banks unable to collect debts from individuals who partake in Internet
gambling....
The biggest losers with respect to the use of credit cards in Internet gambling transactions are those who do not gamble
online. Regardless of how the litigation evolves in cases of Internet gamblers against credit card companies, the
ordinary American loses. If Internet gamblers are successful in having their debts alleviated, non-Internet gamblers will
ultimately pay the economic price for their fellow Americans' victory. This price will come in the form of higher fees,
charges, and interest rates passed on to all American credit card holders. Because the number of those in the non-Internet gambling community far outweighs the number of those who gamble online, a vast majority of Americans will
experience the negative effects of credit card use in Internet gambling transactions.
Even if credit card companies are successful in litigation against Internet gamblers, Americans will still feel negative
effects. Victories for credit card companies would provide credibility to the Internet gambling industry and encourage
more people to participate. This certification of the Internet gambling industry would cause more and more
people to accumulate large Internet gambling debts. When the factor of gambling addiction is added, many
individuals would inevitably assume debts that credit card companies cannot recover. Once again, higher interest rates and fees will
be passed on to non-Internet gamblers as a result of the use of credit cards in Internet gambling transactions.
Internet Gambling Disrupts Society
The societal concerns that led to the intense regulation of traditional forms of gambling do not disappear when dealing
with Internet gambling. As Internet gambling invades American households, society is "left to deal with the crime,
bankruptcy, and gambling disorders that may result," [argues Rhodes College professor Michael Nelson]. Among the
many problems exacerbated by Internet gambling are gambling addiction and gambling by minors. Pathological
gambling negatively affects not only the gambler, but also the gambler's family and friends, and society at large.
Societal costs of pathological gambling include the expenditure of unemployment benefits, physical and mental health
problems, theft, embezzlement, bankruptcy, suicide, domestic violence, and child abuse and neglect.
Experts predict that "the number of compulsive gamblers could soon quadruple from 5 million to 20 million addicts
nationwide." The primary reason for this anticipated increase in compulsive gambling is the Internet. With the
accessibility of the Internet, gamblers do not have to travel to casinos or contact their local bookie to place a bet.
Internet gambling is more addictive than other forms of gambling because it combines high-speed, instant gratification
with the anonymity of gambling from home. The temptations that lead to compulsive gambling are as close as one's
computer.
Despite the severe impact that pathological gambling has on Americans, minimal research exists on the topic. The
research performed on pathological gambling has often been half-hearted....
Compulsive gamblers are responsible for an estimated fifteen percent of the dollars lost in gambling. Beyond this
monetary figure, how can society quantify a divorce caused by a gambling addiction or a gambling-induced suicide?...
Options to Stop Internet Gambling
After analyzing the statutory landscape of Internet gambling and assessing its negative economic consequences, the
question that remains is what can be done to limit Internet gambling. One option is ... to limit Internet gambling
through enhanced enforcement mechanisms against credit card providers and money transfer agents. The Internet
gambling industry is dependent on transactions from money transfer agents; thus, discouraging transactions will limit
Internet gambling....
Credit Card Use Must Be Limited
An estimated ninety-five percent of Internet gamblers worldwide make their wagers with credit cards. Without
question, credit cards are vital to the Internet gambling industry. It would seem logical that limiting the use of credit
cards in Internet gambling would decimate the industry. Representative Jim Leach [of Iowa] believes the number of
personal bankruptcies will greatly increase if credit card companies continue to allow gamblers to use their products to
pay for Internet gambling. Leach introduced a bill in 2001 "that would ban the use of credit cards ... to pay for Internet
gambling." He believed that "the banning of major credit cards may take a thirty percent bite out of the Internet
gambling industry in the short run."
One option to impede the use of credit cards and money transfers in Internet gambling would be to enact legislation
prohibiting wire transfers to known Internet gambling sites. A problem with this proposal is that with the fluidity of the
Internet, alternative forms of payment, such as digital cash, could likely be utilized. Cardholders could easily
circumvent the law by buying "electronic cash" at a site such as PayPal. PayPal is described as "an e-commerce
provider that allows individuals to establish a PayPal account by depositing funds." Once the account is established,
individuals can purchase goods from any site that uses the PayPal system, including Internet gambling sites.
Additionally, credit card companies believe a ban on credit card use in Internet gambling transactions would place an
unreasonable burden on them to enforce federal law. Nonetheless, legislation that hinders an individual's ability
to use a credit card for Internet gambling transactions would clearly affect the industry in the near term. At a time when
the laws surrounding Internet gambling are ambiguous, any action that would limit the growth of the industry would be
beneficial to the economy.
One final possibility would be to enact legislation that made any credit card debt incurred while gambling online
unrecoverable. While this type of legislation would not promote consumer responsibility, it would crush the Internet
gambling industry. If banks and credit card companies had no avenue to enforce debt collection from Internet gamblers,
they would inevitably refuse transactions with Internet gambling sites....
As this [viewpoint] demonstrates, the Internet gambling industry has a negative impact on the U.S. economy.
Internet gambling deprives state and local governments of valuable tax revenues required to maintain services. Internet
gambling also forces consumers to pay higher fees and interest rates as a result of uncollectible gambling debts. Finally,
Internet gambling adversely affects our society in ways that cannot easily be quantified, such as addiction, pathological
behavior, and family disintegration.
In order to limit the negative effects of Internet gambling on our economy, legislators need to take aggressive
measures.... The negative effects of Internet gambling are already being felt in the U.S. economy. If lawmakers
do not aggressively combat the growth of Internet gambling, the effects on our economy will be damaging.
Source Citation:
Hammer, Ryan D. "Internet Gambling Should Be Curbed to Protect the Economy." Gambling. Ed. David Haugen and Susan
Musser. Detroit: Greenhaven Press, 2007. Opposing Viewpoints. Rpt. from "Does Internet Gambling Strengthen the U.S.
Economy? Don't Bet on It." Federal Communications Law Journal 54 (31 Oct. 2001): 117-127. Gale Opposing Viewpoints In
Context. Web. 10 Apr. 2012.
Gay Parents
Gay parenting is a part of the overall issue of gay rights in America. Same-sex marriage, the military's "Don't Ask, Don't Tell" policy, hate crimes legislation, and employment rights keep gay and lesbian citizens in the spotlight. High-profile events and political controversy underscore the difficulties lesbians and gays encounter as they push for the
same equality and access their heterosexual counterparts receive. Lesbian and gay parents and their children share the
same experiences that their childless peers do, but also face challenges unique to parenting children as part of a
minority group.
Historical Context
Lesbians and gay men have raised children for years; historically, however, discrimination has kept many of these
families invisible to the heterosexual majority. Current trends indicate that prejudice against lesbians and gay men is
decreasing. This growing climate of tolerance has helped families feel safer living in a more public fashion and has led
more lesbians and gay men to become parents. A 2007 national survey by the Pew Research Center confirms the
benefits of increased visibility, suggesting that familiarity is closely linked to tolerance.
Using United States Census data, researchers estimate that in 1990 approximately one in twenty male same-sex couples
and one in five female same-sex couples were raising children. By 2000, census data showed that number had risen to
one in five for male couples and one in three for female couples. Based on these estimates, at least 250,000 children
nationwide are being raised by same-sex couples. Census analysts note that these numbers may be significantly
underestimated, because many gays and lesbians do not self-report their sexual orientation to the government. Further,
census data indicate that lesbian and gay households can be found in 99 percent of all U.S. counties.
Unique Challenges
Parenting outside the traditional family boundaries brings many challenges. Although discrimination appears to be
lessening, by no means are alternative families universally accepted. These families exist within a much larger social
context in which they confront a lack of recognition and support for their relationships. Same-sex couples and their
families are often denied basic rights and protections afforded to heterosexual married couples and families. This lack
of support increases their vulnerability in a number of ways.
Abbie E. Goldberg writes in her 2010 book, Lesbian and Gay Parents and Their Children, "Denial of such
protections is often defended on the grounds that to protect same-sex couples would be to approve of or encourage such
relationships; furthermore, protecting such relationships is viewed as a threat to the institution of marriage and the
stability of marriage." Goldberg argues that this structural discrimination affects the stability of same-sex couples and
their families.
Rights Denied
Discrimination is evident when marriage is not an option for same-sex couples. In 2004 the Congressional Budget
Office prepared a report for Congress analyzing potential budgetary effects of recognizing same-sex marriage. The report identified 1,138 statutory provisions in which marital status is a factor in determining or receiving "benefits, rights and
privileges." Marital status has a direct impact on people's eligibility for some federal payments, such as Social Security
benefits, veterans' benefits, and civil service and military pensions. Numerous state-based programs also hinge on
marital status, and are prohibitive to same-sex families.
Recognition as a Family
Same-sex families encounter difficulties in medical, educational, social, and other areas when they do not receive
recognition as a family. Within many of society's structures, the biological or adoptive parent is the only one officially
acknowledged as a child's parent.
The perils of this arrangement are many. If the biological or adoptive parent dies, the surviving parent has no legal right
to custody of his or her child, because custody is legally granted to the next of kin. The non-biological or non-adoptive
parent has no authority in educational settings to register a child for school or be involved in a child's educational plan,
nor is he or she legally allowed to sign permission slips or consent forms. This parent is also not recognized as next of
kin in a medical setting, and thus is often excluded from making treatment decisions or visiting a child in the hospital.
Same-sex families often are dependent upon their employers to offer domestic partnership benefits to receive the same
medical and dental coverage that married couples obtain automatically. Further, the benefits domestic partners or their
children do receive are treated as taxable income, placing an additional financial burden on same-sex families.
Opponents of gay rights maintain that all of these rights can be obtained through the use of legal agreements, such as a
will or power of attorney. The reality, however, is that legal agreements protect only a small number of basic rights. And
often, without precedents or legal guidance, legal advisors can only attempt to draft a document that will not be
challenged in court. Challenges, especially regarding child custody and property rights, are not uncommon given the
lack of acceptance for same-sex partnerships. The absence of these basic protections contributes to the psychological
stress same-sex families live with on a daily basis.
Social Support
Family and community support are crucial mechanisms in the stability of families. But Goldberg points out that,
"Sexual minorities who become parents in the current sociopolitical climate may encounter more support than in
decades past, but they are nevertheless vulnerable to stigmatization and marginalization by dominant societal
institutions and ideologies, which continue to condemn and denigrate lesbian and gay-parent families."
The birth or adoption of a child into a same-sex family is not always met with the same enthusiasm as a child born into a heterosexual family. Lesbian and gay couples often report that they do not receive the same quality of support from their families as their heterosexual siblings do. If there is a lack of approval of a same-sex relationship prior to having a child, the new arrival may add to that existing tension. In many cases, however, lesbians and gay men learn to cope with this dilemma by creating their own chosen families: networks of friends they rely on for social support.
Community support is strongly related to the mental well-being of both heterosexual and homosexual individuals.
Although it appears that strides have been made toward greater tolerance of same-sex relationships, recent political and
religious debates over same-sex marriage in states such as California and Maine have intensified the climate of hostility
maintained by some anti-gay marriage activists as they defend the status quo.
Defending the Traditional Family
James C. Dobson is the founder of the conservative Christian group Focus on the Family. In his December 12, 2006,
article as a guest contributor to Time magazine, titled, "Two Mommies Is One Too Many," Dobson comments on the
pregnancy of Vice President Dick Cheney's daughter, Mary, who stated that she would raise the child with her lesbian
partner. Dobson writes, "With all due respect to Cheney and her partner, Heather Poe, the majority of more than 30
years of social-science evidence indicates that children do best on every measure of well-being when raised by their
married mother and father." However, two researchers whose work Dobson cited in support of his position have
accused him of distorting their research results. Educational psychologist Carol Gilligan and Dr. Kyle Pruett felt
strongly enough about this mischaracterization that they each sent letters to Dobson requesting that he cease using their
research to support his positions.
The Family Research Institute, founded by psychologist Dr. Paul Cameron, has been an outspoken opponent of gay
rights. According to its website, "The Family Research Institute was founded in 1982 with one overriding mission: to
generate empirical research on issues that threaten the traditional family, particularly homosexuality, AIDS, sexual
social policy, and drug abuse." However, the Southern Poverty Law Center, a nonprofit civil rights organization dedicated to fighting hate and bigotry, added Cameron's Family Research Institute to its list of hate groups in 2005, stating that "the Family Research Institute churns out hate literature masquerading as legitimate science."
Additionally, the American Psychological Association (APA) expelled Cameron in 1983 for non-cooperation with an
ethics investigation. The American Sociological Association (ASA) and the Canadian Psychological Association have
accused Cameron of misrepresenting social science research. The ASA declared that "Dr. Cameron has consistently
misinterpreted and misrepresented sociological research on sexuality, homosexuality, and lesbianism." Social and
religious conservatives, however, widely repeat Cameron's teachings as fact. Although social conservatives continue to
believe that same-sex parents are detrimental to children, established verifiable research tells a different story.
Another Point of View
"The entrenched conviction that children need both a mother and a father inflames culture wars over single
motherhood, divorce, gay marriage, and gay parenting. Research to date, however, does not support this claim." These
were the findings of researchers Timothy J. Biblarz and Judith Stacey, who spent five years reviewing eighty-one
studies of one- and two-parent families which included gay, lesbian, and heterosexual families. Their analysis, titled,
"How Does the Gender of Parents Matter?" was published in the January 2010 edition of the Journal of Marriage and
Family.
The American Psychiatric Association agrees with these findings. The group's 2002 position statement titled,
"Adoption and Co-parenting of Children of Same-sex Couples" states, "Numerous studies over the last three decades
consistently demonstrate that children raised by gay or lesbian parents exhibit the same level of emotional, cognitive,
social, and sexual functioning as children raised by heterosexual parents. The research indicates that optimal
development for children is based not on the sexual orientation of the parents, but on stable attachments to committed
and nurturing adults."
Protections for Children
Research indicates that children in same-sex families are as well-adjusted as children raised in heterosexual families, but the lack of basic protections sets them apart from their peers. The legalization of second-parent adoption or civil marriage for gays and lesbians would assist in closing this gap. Second-parent adoption allows
the partner of the biological or primary adoptive parent to adopt the child with parental rights equal to the primary
parent. Although a handful of states allow lesbians and gays to coparent, laws vary widely from state to state. Many
states have general restrictions on lesbians and gays adopting or fostering children that, in effect, prohibit second-parent
adoptions as well. Allowing same-sex couples to enter into civil unions or marry would provide essentially the same
benefits as second-parent adoption.
Although considered a controversial issue by many, the American Psychiatric Association maintains that second-parent
adoption and same-sex marriage are important factors in the health and well-being of same-sex families. "Removing
legal barriers that adversely affect the emotional and physical health of children raised by lesbian and gay parents is
consistent with the goals of the APA. The American Psychiatric Association supports initiatives which allow same-sex
couples to adopt and co-parent children and supports all the associated legal rights, benefits, and responsibilities which
arise from such initiatives."
They are joined by their colleagues at the American Academy of Pediatrics, who issued a policy statement in 2002 and
reissued it in 2010, affirming their support for second-parent adoption. The statement asserts: "Children who are born
to or adopted by 1 member of a same-sex couple deserve the security of 2 legally recognized parents. Therefore, the
American Academy of Pediatrics supports legislative and legal efforts to provide the possibility of adoption of the child
by the second parent or coparent in these families."
Judicial System Decisions
Recent court rulings appear to concur with professional organizations. Florida is the only state with a law that explicitly
prohibits lesbians and gay men, both couples and individuals, from adopting children. But in recent years the
constitutionality of this law has been challenged with positive results for lesbians and gays. A January 2010 ruling by
Miami-Dade circuit judge Maria Sampedro-Iglesias marked the third gay adoption approved in Florida courts within a year. Sampedro-Iglesias wrote in her order, "There is no rational connection between sexual orientation and what is or is
not in the best interest of a child," adding that Florida's adoption law is "unconstitutional on its face."
What Does the Future Hold?
Same-sex families necessarily find ways to cope with the hardships they encounter as a minority group. That most of these families manage to thrive under such conditions is admirable, but it does not make the situation an ideal one. Even as states pass legislation prohibiting recognition of same-sex relationships, attitudes are changing
quickly. Younger generations appear to be more tolerant of alternative lifestyles, and reproductive technology advances
make it likely that a growing number of same-sex couples will choose to have children. Ostracism by religious and
social conservatives will not change the fact that lesbians and gays will continue to have and raise families that they
will love just as much as heterosexual families love theirs. Society as a whole may be better served if every child were
afforded all the benefits and protections he or she deserves regardless of family structure.
Source Citation:
"Gay Parents." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In
Context. Web. 10 Apr. 2012.
Gays and Lesbians Should Be Allowed to Adopt
The American Civil Liberties Union (ACLU) is a nonprofit, nonpartisan organization founded in 1920. It is dedicated
to preserving Americans' rights to equal protection, due process, privacy, and those rights guaranteed by the First
Amendment.
Major child welfare organizations have denounced claims that gays and lesbians should be restricted from adopting children,
based in part on social scientific evidence that shows that this group is no less fit for parenting than heterosexuals. In addition,
child welfare experts have agreed since the 1970s that the best means for placing a child in a home is on a case-by-case basis,
rather than ruling out entire groups of people. Such exclusions only result in reducing the number of loving homes for the many
children in need of them. Instead of preventing gays and lesbians from adopting, adoption agencies should continue to screen
potential parents rigorously and make placement decisions based on the best interest of each particular child.
All of the major children's health and welfare organizations, whose only agenda is to serve the best interest of children,
have issued statements opposing restrictions on adopting and fostering by lesbians and gay men. Those policy
statements were informed in part by the social science research on lesbian and gay parents and their children, which
firmly establishes that there is no child welfare basis for such restrictions because being raised by lesbians or gay men
poses no disadvantage to children. But they were also informed by well-established child welfare policy that rejects
categorical exclusions of groups of people as contrary to the best interests of children in the child welfare system.
Child welfare experts agree that child placement decisions should be based on children's specific needs and prospective
parents' ability to meet those needs. Child welfare professionals understand that every child is unique and has
individual needs. Children have diverse personalities, family experiences and physical and emotional needs that all
need to be taken into account when making a placement. Similarly, adults seeking to adopt and foster are not all alike.
They are diverse individuals who have different skills, qualities, and family environments to offer a child.
Adoption and foster placement is a matching process. Caseworkers seek to find the family that is the best match for
each child. For example, one child may fare better with adoptive parents who have other children; another may be
better off as an only child. A child may have medical problems and would benefit from being placed with someone
who has medical expertise. Some children might do well with a couple; others might be better off with a single parent
(e.g., children who have experienced sexual abuse or who need focused attention). In other words, there is no one-size-fits-all when it comes to children. The bigger and more diverse the pool of prospective adoptive and foster parents, the
greater the likelihood that placement professionals will be able to make good matches. Categorical exclusions, which
throw away individuals who could meet the needs of children, seriously undermine this goal.
Placing a Child
The rejection of blanket exclusions in favor of the principle that placement decisions should be made on a case-by-case
basis is well-established in the child welfare field. Indeed, it is reflected in the Child Welfare League of America's
Standards of Excellence for Adoption Services [CWLA Standards]:
When the agency providing adoption services is responsible for selecting the adoptive family, it should base its selection of a
family for a particular child on a careful review of the information collected in the child assessment and on a determination of
which of the approved and prepared adoptive families could most likely meet the child's needs.
Applicants should be assessed on the basis of their abilities to successfully parent a child needing family membership and not on
their race, ethnicity or culture, income, age, marital status, religion, appearance, differing life style, or sexual orientation.
Applicants should be accepted on the basis of an individual assessment of their capacity to understand and meet the needs of a
particular available child at the point of the adoption and in the future.
The CWLA Standards are widely accepted as the foundation for sound child welfare practice in the United States. They
are a source relied upon by the group's 900 member agencies, which include the state child welfare department in
almost every state. The Standards are formulated "based on current knowledge, the developmental needs of children,
and tested ways of meeting these needs most effectively." State child welfare departments are significantly involved in
the development of the Standards.
Groups No Longer Excluded
Case-by-case evaluation is such a central principle of child welfare practice that categorical exclusions have become
aberrations in child welfare law around the country, the only exceptions being for those who have demonstrated
conduct that is dangerous to children, such as those convicted of violent crimes or drug offenses. This was not always
the case. Until the 1970s, generally only middle-class, white, married, infertile couples in their late twenties to early
forties who were free of any significant disability were considered suitable to adopt. Many agencies excluded applicants who did not meet this ideal, such as older couples, low-income families, disabled people, and single adults.
But by the 1970s, adoption policy and practice moved away from such exclusions as the field recognized that they were
arbitrary and that many individuals who were rejected were valuable parenting resources. It is now the consensus in the
child welfare field that case-by-case evaluation is the best practice.
Child welfare professionals agree that the way to ensure healthy, positive placements is to do what every state child
welfare agency currently does: subject every applicant to a rigorous evaluation process. There are good and bad parents
in every group; thus, every applicant must be seriously scrutinized. Whether gay or straight, no one is approved to
adopt or foster a child unless he or she clears a child abuse and criminal records check, a reference check, an evaluation
of physical and mental health, and a detailed home study that examines the applicant's maturity, family stability, and
capacity to parent. Applicants will not be approved unless they are deemed able to protect and nurture and provide a
safe, loving family for a child. And no adoption or foster care placement is made unless a caseworker first determines
that the placement is the best match available for a particular child. ...
Depriving Children of Good Parents
Blanket exclusions of lesbians and gay men from adopting or fostering—like any other blanket exclusions—deny
children access to available safe, stable, and loving families. For some children, such exclusions mean that they cannot
be placed with the family that is best suited to meet their needs. Categorical exclusions tie the hands of caseworkers
and prohibit them from making what they deem to be the best placements for some children. For example, a
caseworker could not place a child with a gay nurse who is willing to adopt a child with severe medical needs even if
there are no other available prospective adoptive parents with the skills necessary to take care of that child. Similarly, a
blanket rule would prevent a caseworker from placing a child with a lesbian aunt with whom the child has a close
relationship. Instead, that child would have to be placed with strangers, even though the child welfare profession agrees
that, wherever possible, children should be placed with relatives.
Blanket exclusions do not just deprive children of the best possible placement. By reducing the number of potential
adoptive and foster parents, categorical exclusions of lesbians and gay men condemn many children to a childhood with
no family at all. Most states in this country have a critical shortage of adoptive and foster parents. Across the country,
more than 118,000 children are waiting to be adopted. Many wait for years in foster care or institutions; some wait out
their entire childhoods, never having a family of their own. ... Many people are not aware of this problem because we
often hear about couples who spend years waiting to adopt a baby. But most of the children in the child welfare system
in this country are not healthy infants. They are older children and teens, children with serious psychological and
behavioral problems, children with challenging medical needs, and groups of siblings who need to be placed together.
It is difficult to find families willing to take care of these children.
What This Means
Child welfare agencies go to great lengths to recruit adoptive and foster parents for these children, even posting
photos and profiles of waiting children on the Internet. They provide financial subsidies to people who adopt children
who are in state care so that the expense of caring for a child is not a barrier to low-income people adopting. Yet
thousands of children are still left waiting for families.
The shortage of foster families means that some children get placed far away from their biological families,
communities and schools; some get placed in overcrowded foster homes; and some get no foster family at all and
instead are placed in institutional settings.
For children waiting to be adopted, the shortage of adoptive families means that some will remain in foster care for
years, where they often move around among temporary placements. Some will have to be separated from their siblings
in order to be adopted. Some will be placed with families that are not well-suited to meet their needs. And some will
never be adopted, and instead "age out" of the system without ever getting to have a family of their own.
You do not have to be a child welfare expert to understand how scarring it is for a child to grow up without the love
and security of a parent. And the scientific research confirms the importance to children's development of forming a
parent-child relationship and having a secure family life. Thus, children who are adopted are much less likely to be maladjusted than children who spend much of their childhoods in foster care or residential institutions.
The Long-Term Effects
Young people who age out of foster care without ever becoming part of a family are the most seriously affected. These
young people are significantly more likely than their peers to drop out of school, be unemployed, end up homeless and
get involved in criminal conduct. According to the federal government, approximately 20,000 young people between
the ages of 18 and 21 are discharged from foster care each year. A national study prepared for the federal government
reported that within two years after discharge, only 54% had completed high school, fewer than half were employed,
60% of the young women had given birth to a child, 25% had been homeless, and 30% were receiving public
assistance.
Blanket exclusions throw away qualified parents, which we cannot afford to do. We don't know how many lesbians and
gay men are adopting children, as no such statistics are kept. But we do know that each qualified lesbian or gay parent
who is excluded because of his or her sexual orientation represents a potential loving family for a waiting child.
Under the governing child welfare policy across the country, no child is placed with an applicant unless, after a
rigorous screening, a caseworker concludes that the applicant is the best match for the child. Excluding gay people (or
any group) from being considered therefore does nothing whatsoever to protect children or promote good placements.
All such exclusions do is prevent placement professionals from making some placements that they deem to be best for
a particular child. Reducing the pool of available adoptive and foster parents from which caseworkers can choose
provides no conceivable benefit to children and it creates harms that are all too real.
Source Citation:
"Gays and Lesbians Should Be Allowed to Adopt." Too High a Price: The Case Against Restricting Gay Parenting. New York, NY:
American Civil Liberties Union (ACLU), 2006. Rpt. in Are Adoption Policies Fair? Ed. Amanda Hiber. Detroit: Greenhaven Press,
2008. At Issue. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Gays and Lesbians Should Not Be Allowed to Adopt
Gary Glenn is president of the American Family Association of Michigan. He coauthored the Marriage Protection
Amendment that Michigan voters approved in 2004.
While much was made of comedienne Rosie O'Donnell's announcement that she is a lesbian, very little attention has been paid
to some starkly contradictory statements she made in an interview with Diane Sawyer. O'Donnell admitted that she thought her
children's lives would be easier if she were married to a man, and that she hoped they would grow up to be heterosexual. These
admissions exemplify the selfishness of gays and lesbians who adopt children to satisfy their own desires, despite overwhelming
evidence that children's health is at greater risk when they are adopted into homosexual households. Studies have also shown
that children of homosexual parents are much more likely to engage in homosexual behavior as adults, which carries with it
extraordinary dangers to their physical and mental health. When children are placed in adoptive homes, their welfare must be
the first priority, not the political agenda of gay rights groups.
The media glorified [comedienne and talk show host] Rosie O'Donnell's public announcement that she has sex with
other women. But it glossed over these startling contradictions: O'Donnell's frank admission that she believes her own
adopted children would be better off being raised by a married mother and father, bolstered by the hope that they won't
follow her example of choosing to engage in homosexual behavior.
"Would it be easier for [my kids] if I were married to a man? It probably would," O'Donnell told ABC Primetime
Thursday reporter Diane Sawyer [in 2002].
And when asked if she hopes her adopted children will grow up to be "straight." ...
"Yes, I do," Rosie said. "I think life is easier if you're straight. ... If I were to pick, would I rather have my children have
to go through the struggles of being gay in America, or being heterosexual, I would say heterosexual."
Rosie also revealed that her six-year-old son Parker has told her, "I want to have a daddy." She responded, "If you were
to have a daddy, you wouldn't have me as a mommy, because I'm the kind of mommy who wants another mommy.
This is the way mommy got born."
Thus the biggest conclusion Americans should draw from the Rosie O'Donnell confessional is this—that Miss
O'Donnell is a spoiled, privileged adult who put her own feelings ahead of what even she believes would be in the best
interests of the children. She used her privilege and wealth to place children too young to object in an environment two
recent studies indicate will make them more likely to engage in the very high risk behavior Rosie hopes they won't.
Putting Children in Danger
The scientific fact is that children's health is endangered if they are adopted into households in which the adults—as a
direct consequence of their homosexual behavior—experience dramatically higher risks of domestic violence, mental
illness, substance abuse, life-threatening disease, and premature death by up to 20 years.
"The probability of violence occurring in a gay couple is mathematically double the probability of that in a heterosexual
couple," write the editors of the National Gay & Lesbian Domestic Violence Network newsletter.
The Journal of the American Medical Association reports that "people with same-sex sexual behavior are at greater risk
for psychiatric disorders"—including bipolar, obsessive-compulsive, and anxiety disorders, major depression, and
substance abuse.
The Medical Institute of Sexual Health reports: "Homosexual men are at significantly increased risk of HIV/AIDS,
hepatitis, anal cancer, gonorrhea and gastrointestinal infections as a result of their sexual practices. Women who have
sex with women are at significantly increased risk of bacterial vaginosis, breast cancer and ovarian cancer than are
heterosexual women." (Executive Summary, "Health Implications Associated with Homosexuality," 1999.)
The Institute reports that "significantly higher percentages of homosexual men and women abuse drugs, alcohol and
tobacco than do heterosexuals."
Oxford University's International Journal of Epidemiology reports: "Life expectancy at age 20 years for gay and bisexual
men is 8 to 20 years less than for all men. ... Nearly half of gay and bisexual men currently aged 20 years will not reach
their 65th birthday."
Is it healthy for children to be adopted by adults whose lifestyle is characterized by promiscuity and the medical
hazards of multiple sex partners?
A homosexual newsmagazine columnist in Detroit last month [February 2002] wrote regarding his partner: "This is his
first relationship, so he has not yet been ruined by all the heartache, lies, deceit, and game-playing that are the hallmark
of gay relationships. ... A study I once read suggested that nine out of 10 gay men cheat on their lovers" [emphasis
added].
The Centers for Disease Control warns that men who have sex with men "have large numbers of anonymous partners,
which can result in rapid, extensive transmission of sexually transmitted diseases."
Risk-Taking Adults
How will being adopted by adults involved in homosexual behavior affect the behavior of children themselves?
Associated Press reported last June [2001] that a "new study by two University of Southern California sociologists says
children with lesbian or gay parents ... are probably more likely to explore homosexual activity themselves ... (and) grow
up to be more open to homoerotic relations." [emphasis added]
A major Australian newspaper reported February 4 [2002] on a British sociologist's review of 144 academic papers on
homosexual parenting: "Children raised by gay couples will suffer serious problems in later life, a study into parenting
has found. The biggest investigation into same-sex parenting to be published in Europe claims children brought up by
gay couples are more likely to experiment with homosexual behavior and be confused about their sexuality." [emphasis
added]
Which means children adopted by adults involved in homosexual behavior face not only secondhand exposure to the
risks of such behavior by their "parents," but are more likely to suffer firsthand by engaging in the same high-risk
behavior themselves.
Young people who model the homosexual behavior of their adopted "parents" face other risks:
The Journal of the American Academy of Child & Adolescent Psychiatry published a study of 4,000 high school students
by Harvard Medical School, which found that "gay-lesbian-bisexual youth report disproportionate risk for a variety of
health risk and problem behaviors ... [from] engag[ing] in twice the mean number of risk behaviors as did the overall
population." (Garofalo, Robert, et al, "The Association Between Health Risk Behaviors and Sexual Orientation Among a
School-based Sample of Adolescents," Pediatrics 101, no. 5, May 1998: 895-902.)
"GLB [gay, lesbian, bisexual] orientation was associated with increased ... use of cocaine (and other illegal) drugs. GLB
youth were more likely to report using tobacco, marijuana, and cocaine before 13 years of age. Among sexual risk
behaviors, sexual intercourse before 13 years of age, sexual intercourse with four or more partners ... and sexual contact
against one's will all were associated with GLB orientation."
Child Welfare Before Politics
The sheer weight of evidence makes the issue clear: Should children be handed over as trophies to the homosexual
"rights" movement—adopting them into households where they'll face dramatically higher risk of exposure to domestic
violence, mental illness, life-threatening disease and premature death? An environment which increases the chances
they'll engage in high-risk homosexual behavior themselves?
Not on your life, Rosie.
And certainly not theirs.
Source Citation:
Glenn, Gary. "Gays and Lesbians Should Not Be Allowed to Adopt." Are Adoption Policies Fair? Ed. Amanda Hiber. Detroit:
Greenhaven Press, 2008. At Issue. Rpt. from "Even Rosie Knows Homosexual Adoption Puts Children at Risk." www.cwfa.org.
2002. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Gun Control
The United States is the leader in per-capita gun deaths among industrial nations. This unenviable distinction has
resulted in various gun-control laws at the federal and state levels that seek to reduce crime and violence by restricting
private gun ownership. Supporters of gun control would like even tighter restrictions on the sale and circulation of
firearms. But they face fierce opposition from citizen groups and arms manufacturers who are trying to protect what
they view as the right to own and bear firearms for self-defense and recreational activities. These groups aim to prevent
new legislation and, if possible, roll back the laws already on the books.
The Constitutional Framework
Discussions about citizens’ rights to bear arms extend back to ancient times. Political theorists from Cicero of ancient
Rome to John Locke (1632–1704) of England and Jean-Jacques Rousseau (1712–1778) of France viewed the
possession of arms as a symbol of personal freedom and an indispensable element of popular government.
Similar sentiments were echoed in the Federalist Papers, a series of essays written by the supporters of the Constitution
to explain and defend the document. In the Federalist No. 46, James Madison observed that Americans would never
have to fear the power of the federal government because of "the advantage of being armed, which you possess over the
people of almost every other nation." Patrick Henry declared, "The great principle is that every man be armed.
Everyone who is able may have a gun." Samuel Adams argued that the Constitution should never be interpreted "to
prevent the people of the United States who are peaceable citizens from keeping their own arms." Accordingly, when
adding the Bill of Rights to the Constitution, the founders included the Second Amendment, which reads, "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not
be infringed."
The precise meaning and purpose of the Second Amendment has been an issue of major controversy. Gun control
advocates argue that when the Second Amendment was adopted in 1791, each state maintained a militia, composed of
ordinary citizens who served as part-time soldiers. These militias were "well-regulated"—subject to state requirements
concerning training, firearms, and periodic military exercises. Fearing that the federal government would use its
standing army to force its will on the states, the authors of the Second Amendment intended to protect the state militias’
right to bear arms. According to gun control supporters, in modern times the amendment should protect only the states’
right to arm their own military forces, including their National Guard units.
Opponents of gun control interpret the Second Amendment as the guarantee of a personal right to keep and bear arms.
They claim that the amendment protects the general public, who were viewed as part of the general militia, as
distinguished from the "select militia" controlled by the state. By colonial law, every household was required to possess
arms and every male of military age was required to be ready for military emergencies, bearing his own arms. The
amendment, in guaranteeing the arms of each citizen, simultaneously guaranteed arms for the militia. Furthermore, gun
control opponents feel that the words "right of the people" in the Second Amendment hold the same meaning as they do
in the First Amendment, where they describe an individual right (such as freedom of assembly).
Standing in fierce opposition to any and all efforts to restrict gun sales and ownership is the National Rifle Association
(NRA). The leading pro-gun group in the United States, the NRA is dedicated to protecting an individual right to bear
arms. The NRA bases its view on the Second Amendment and insists that its stand is consistent with the intentions of
the nation’s founders. The NRA believes that any form of gun control will eventually lead to a complete ban on the
private possession of firearms.
Court Decisions
For many years the U.S. Supreme Court generally restricted the right of individuals "to keep and bear Arms." In United
States v. Miller (1939), the Court upheld a federal law forbidding the interstate transportation of an unregistered sawed-off shotgun. The Court concluded that the Second Amendment did not apply to this case because there was no evidence
in the record that this type of arm was "any part of ordinary military equipment" or that its use could "contribute to the
common defense." The amendment’s purpose, according to the Court, was to "assure the continuation and render
possible the effectiveness" of the state militia. In two other rulings, the Supreme Court reaffirmed this view in
upholding New Jersey’s tough gun control law in 1969 (Burton v. Sills) and in supporting the federal ban on possession
of firearms by felons in 1980 (Lewis v. United States).
The Miller ruling established a foundation for more than thirty subsequent lower court decisions on the meaning of the
Second Amendment. The lower courts reaffirmed the interpretation of the Second Amendment as a limited states-rights
measure, relating only to individuals in active, controlled state guard or militia units. An exception to this trend
occurred in 1999. In United States v. Emerson, U.S. District Judge for the Northern District of Texas Sam R. Cummings upheld the
right of a man under a temporary restraining order to retain his firearms under the protection of the Second
Amendment. However, the U.S. Court of Appeals for the Fifth Circuit overturned this ruling in 2001.
The Supreme Court reversed course in 2008 in the landmark case of District of Columbia v. Heller. The Court ruled in
Heller that the Second Amendment prohibits the federal government from making it illegal for private individuals to
keep loaded handguns in their homes. It was the first Supreme Court decision ever to explicitly rule that the Second
Amendment protects an individual, personal right to keep and bear arms. Heller was a major development in the law,
but it left many questions unanswered. For example, it is not clear whether the Second Amendment would also prohibit
state governments from passing laws like the federal law at issue in that case.
Gun Control Laws
Gun control laws have several functions. They may be designed to hinder certain people from gaining access to any
firearms. The laws may limit possession of certain types of weapons to the police and the military. A person who wants
to make a gun purchase or obtain a gun license may be subject to a waiting period. Gun-control laws vary from country
to country. In Britain, the national government exercises strict control, requiring all gun owners to be licensed and to
obtain permits in order to buy ammunition. In Australia, states enact their own laws in accordance with guidelines from
the federal government. Some countries, such as Japan and New Zealand, are working to pass tougher gun-control
legislation.
In the United States, the lack of agreement on gun control has led to a wide variety of state and local laws regarding
licensing and registration of handguns. In most states, as long as a person has not been convicted of a felony, he or she
can receive a permit to carry a loaded and concealed handgun. About 50 percent of all homes in the United States
contain at least one firearm. More than half of these are loaded or have ammunition stored with the gun. The National
Institute of Justice estimated that, in the year 2006, 68 percent of murders, 42 percent of robberies, and 22 percent of
aggravated assaults committed in the United States involved firearms. According to a study
conducted by Johns Hopkins University, stricter requirements on registration and licensing would prevent criminals
from buying guns.
The first major federal gun law went into effect in 1934. It restricted the sale and ownership of high-risk weapons such
as machine guns and sawed-off shotguns. Congress passed the 1968 Gun Control Act in the wake of the assassinations
of Martin Luther King, Jr., and Senator Robert F. Kennedy. In addition to ending mail-order sales of all firearms and
ammunition, the law banned the sale of guns to felons, fugitives from justice, minors, the mentally ill, those
dishonorably discharged from the armed forces, those who have left the United States to live in another country, and
illegal aliens.
Although the 1968 law established a foundation for subsequent gun control legislation, it was flawed by its "honor
system" enforcement scheme. It required prospective purchasers to sign a statement of eligibility to buy a gun, but most
states did not follow up to confirm the claims. This weakness was addressed in the Brady Handgun Violence
Prevention Act. Named in honor of James Brady, the press secretary to President Ronald Reagan who suffered a near-fatal wound during the attempt on Reagan's life in 1981, the Brady Law became effective in 1994. It required a five-day waiting period for all handgun sales, during which a background check was to be made on all prospective purchasers. This provision expired in 1998 and was replaced by the National Instant Check System (NICS), an on-the-spot computer scan for any criminal record on the part of the buyer of any type of gun.
Following passage of the Brady Law, aggravated assaults involving firearms declined 12.4 percent between 1994 and
1999. The Justice Department reported that background checks also prevented more than 500,000 people with criminal
records from legally purchasing a gun during that time period. In addition, violent crimes committed with guns
decreased by 35 percent between 1992 and 2000.
Another major landmark in gun control was President George H. W. Bush’s 1989 ban on importing assault rifles. In
1990 the number of imported assault rifles traced to crime dropped 45 percent. In 1994 a federal ban on assault
weapons outlawed the manufacture and sale of the nineteen most lethal assault weapons and various duplicates. The
following year, the number of assault weapons traced to crime declined 18 percent.
Although legislation has had some effect on gun violence in the United States, a series of loopholes enables many
people who do not meet the legal requirements to obtain guns. For example, the law does not require adults to store
guns out of the reach of children. Private collectors can get around the Brady Law’s background-check requirement by
selling their wares at private gun shows, and individuals can buy some guns on the Internet without a background
check. Federal law and most state laws still allow juveniles to purchase "long guns," which include hunting rifles,
shotguns, semiautomatic AK-47s, AR-15s, and other assault rifles manufactured before 1994. Finally, the database for
the national "instant-check" system lacks data in many categories, especially in non-felony areas such as domestic
violence and mental health.
Some of the most dramatic and tragic effects of loopholes have been evident in gun-related school violence. In 1999
two high school students in Littleton, Colorado, obtained shotguns and other weapons and killed thirteen people before
turning the guns on themselves. The weapons were bought from private sellers at gun shows. In 2007 a college student
shot and killed 32 people at Virginia Tech University before turning his weapon on himself. His history of mental
illness had not prevented him from purchasing the semi-automatic pistols used in the shooting. This incident was the
deadliest mass shooting in U.S. history and sparked further debate on the issue of gun control.
Source Citation:
"Gun Control." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In
Context. Web. 10 Apr. 2012.
The Right to Own a Gun Is Guaranteed by the Constitution
Antonin Scalia is an associate justice of the Supreme Court. He received his AB from Georgetown University and the
University of Fribourg, Switzerland, and his LLB from Harvard Law School. He was appointed Judge of the United
States Court of Appeals for the District of Columbia Circuit in 1982. President Reagan nominated him as an Associate
Justice of the Supreme Court, and he took his seat September 26, 1986. The Reporter of Decisions prepares a syllabus
of a decision of the Court. The syllabus "constitutes no part of the opinion of the Court but has been prepared by the
Reporter of Decisions for the convenience of the reader."
The Second Amendment was reviewed in the case District of Columbia et al. v. Heller, which was argued before the U.S.
Supreme Court in March 2008. In a 5-4 decision, the Court held that the Second Amendment does protect an individual right to
gun ownership and thereby determined that the District of Columbia's ban on handguns was unconstitutional. Justice Scalia
wrote the opinion of the Court. The following includes text from the syllabus prepared by the Reporter of Decisions and from
Justice Scalia's written opinion.
District of Columbia law bans handgun possession by making it a crime to carry an unregistered firearm and
prohibiting the registration of handguns; provides separately that no person may carry an unlicensed handgun, but
authorizes the police chief to issue 1-year licenses; and requires residents to keep lawfully owned firearms unloaded
and disassembled or bound by a trigger lock or similar device. Respondent Heller, a D.C. special policeman, applied to
register a handgun he wished to keep at home, but the District refused. He filed this suit seeking, on Second
Amendment grounds, to enjoin the city from enforcing the ban on handgun registration, the licensing requirement
insofar as it prohibits carrying an unlicensed firearm in the home, and the trigger-lock requirement insofar as it
prohibits the use of functional firearms in the home. The District Court dismissed the suit, but the D.C. Circuit
reversed, holding that the Second Amendment protects an individual's right to possess firearms and that the city's total
ban on handguns, as well as its requirement that firearms in the home be kept nonfunctional even when necessary for
self-defense, violated that right.
Held:
1. The Second Amendment protects an individual right to possess a firearm unconnected with service in a militia, and
to use that arm for traditionally lawful purposes, such as self-defense within the home....
Individuals Have Second Amendment Rights
2. Like most rights, the Second Amendment right is not unlimited. It is not a right to keep and carry any weapon
whatsoever in any manner whatsoever and for whatever purpose: For example, concealed weapons prohibitions have
been upheld under the Amendment or state analogues [similar laws]. The Court's opinion should not be taken to cast
doubt on longstanding prohibitions on the possession of firearms by felons and the mentally ill, or laws forbidding the
carrying of firearms in sensitive places such as schools and government buildings, or laws imposing conditions and
qualifications on the commercial sale of arms. Miller's holding that the sorts of weapons protected are those "in
common use at the time" [U.S. v. Miller (1939)] finds support in the historical tradition of prohibiting the carrying of
dangerous and unusual weapons.
The District of Columbia's Handgun Ban Is Unconstitutional
3. The handgun ban and the trigger-lock requirement (as applied to self-defense) violate the Second Amendment. The
District's total ban on handgun possession in the home amounts to a prohibition on an entire class of "arms" that
Americans overwhelmingly choose for the lawful purpose of self-defense. Under any of the standards of scrutiny the
Court has applied to enumerated constitutional rights, this prohibition—in the place where the importance of the lawful
defense of self, family, and property is most acute—would fail constitutional muster. Similarly, the requirement that
any lawful firearm in the home be disassembled or bound by a trigger lock makes it impossible for citizens to use arms
for the core lawful purpose of self-defense and is hence unconstitutional. Because Heller conceded at oral argument
that the D.C. licensing law is permissible if it is not enforced arbitrarily and capriciously, the Court assumes that a
license will satisfy his prayer for relief and does not address the licensing requirement. Assuming he is not disqualified
from exercising Second Amendment rights, the District must permit Heller to register his handgun and must issue him
a license to carry it in the home....
Gun Violence and Gun Rights
We are aware of the problem of handgun violence in this country, and we take seriously the concerns raised by the
many amici [friends] who believe that prohibition of handgun ownership is a solution. The Constitution leaves the
District of Columbia a variety of tools for combating that problem, including some measures regulating handguns. But
the enshrinement of constitutional rights necessarily takes certain policy choices off the table. These include the
absolute prohibition of handguns held and used for self-defense in the home. Undoubtedly some think that the Second
Amendment is outmoded in a society where our standing army is the pride of our Nation, where well-trained police
forces provide personal security, and where gun violence is a serious problem. That is perhaps debatable, but what is
not debatable is that it is not the role of this Court to pronounce the Second Amendment extinct.
We affirm the judgment of the Court of Appeals.
Source Citation:
Antonin Scalia and the Reporter of Decisions. "The Right to Own a Gun Is Guaranteed by the Constitution." Is Gun Ownership a
Right? Ed. Kelly Doyle. San Diego: Greenhaven Press, 2005. At Issue. Rpt. from "Syllabus, and Opinion of the Court, in Supreme
Court of the United States." District of Columbia ET AL. v. Heller. 2008. 1-64. Gale Opposing Viewpoints In Context. Web. 10 Apr.
2012.
The Right to Own a Gun Is Not Guaranteed by the Constitution
John Paul Stevens is an associate justice of the Supreme Court. He received an AB from the University of Chicago, and
a JD from Northwestern University School of Law. From 1970 until 1975, he served as a Judge of the United States
Court of Appeals for the Seventh Circuit. President Ford nominated him as an Associate Justice of the Supreme Court,
and he took his seat December 19, 1975.
Justice Stevens wrote a forty-six-page dissenting opinion in the District of Columbia v. Heller (2008) case, which held that the
Second Amendment protects an individual's right to gun ownership. He sees "conflicting pronouncements" in nearly every element
of the majority's interpretation, including the amendment's purpose, language, history, the nature of a militia, and the role of
Congress. Justice Stevens writes that the majority opinion has weaknesses and often lacks accuracy in interpretation.
The question presented by this case is not whether the Second Amendment protects a "collective right" or an
"individual right." Surely it protects a right that can be enforced by individuals. But a conclusion that the Second
Amendment protects an individual right does not tell us anything about the scope of that right.
Guns are used to hunt, for self-defense, to commit crimes, for sporting activities, and to perform military duties. The
Second Amendment plainly does not protect the right to use a gun to rob a bank; it is equally clear that it does
encompass the right to use weapons for certain military purposes. Whether it also protects the right to possess and use
guns for nonmilitary purposes like hunting and personal self-defense is the question presented by this case. The text of
the Amendment, its history, and our decision in United States v. Miller (1939) provide a clear answer to that question.
The Original Purpose of the Second Amendment
The Second Amendment was adopted to protect the right of the people of each of the several States to maintain a well-regulated militia. It was a response to concerns raised during the ratification of the Constitution that the power of
Congress to disarm the state militias and create a national standing army posed an intolerable threat to the sovereignty
of the several States. Neither the text of the Amendment nor the arguments advanced by its proponents evidenced the
slightest interest in limiting any legislature's authority to regulate private civilian uses of firearms. Specifically, there is
no indication that the Framers of the Amendment intended to enshrine the common-law right of self-defense in the
Constitution.
The Amendment should not be interpreted as limiting the authority of Congress to regulate the use or possession of firearms for
purely civilian purposes.
In 1934, Congress enacted the National Firearms Act, the first major federal firearms law. Sustaining an indictment
under the Act, this Court held that, "[i]n the absence of any evidence tending to show that possession or use of a
'shotgun having a barrel of less than eighteen inches in length' at this time has some reasonable relationship to the
preservation or efficiency of a well regulated militia, we cannot say that the Second Amendment guarantees the right to
keep and bear such an instrument." The view of the Amendment we took in Miller—that it protects the right to keep
and bear arms for certain military purposes, but that it does not curtail the Legislature's power to regulate the
nonmilitary use and ownership of weapons—is both the most natural reading of the Amendment's text and the
interpretation most faithful to the history of its adoption.
Specifically, there is no indication that the Framers of the Amendment intended to enshrine the common-law right of self-defense in the Constitution.
Since our decision in Miller, hundreds of judges have relied on the view of the Amendment we endorsed there; we
ourselves affirmed it in 1980. No new evidence has surfaced since 1980 supporting the view that the Amendment was
intended to curtail the power of Congress to regulate civilian use or misuse of weapons. Indeed, a review of the drafting
history of the Amendment demonstrates that its Framers rejected proposals that would have broadened its coverage to
include such uses....
In this dissent I shall first explain why our decision in Miller was faithful to the text of the Second Amendment and the
purposes revealed in its drafting history. I shall then comment on the postratification history of the Amendment, which
makes abundantly clear that the Amendment should not be interpreted as limiting the authority of Congress to regulate
the use or possession of firearms for purely civilian purposes.
The Language of the Amendment Is Dissected
The text of the Second Amendment is brief. It provides: "A well regulated Militia, being necessary to the security of a
free State, the right of the people to keep and bear Arms, shall not be infringed." ...
But the right the Court announces was not "enshrined" in the Second Amendment by the Framers; it is the product of today's
law-changing decision.
The preamble to the Second Amendment makes three important points. It identifies the preservation of the militia as
the Amendment's purpose; it explains that the militia is necessary to the security of a free State; and it recognizes that
the militia must be "well regulated." In all three respects it is comparable to provisions in several State Declarations of
Rights that were adopted roughly contemporaneously with the Declaration of Independence. Those state provisions
highlight the importance members of the founding generation attached to the maintenance of state militias; they also
underscore the profound fear shared by many in that era of the dangers posed by standing armies. While the need for
state militias has not been a matter of significant public interest for almost two centuries, that fact should not obscure
the contemporary concerns that animated the Framers.
The parallels between the Second Amendment and these state declarations, and the Second Amendment's omission of
any statement of purpose related to the right to use firearms for hunting or personal self-defense, is especially striking
in light of the fact that the Declarations of Rights of Pennsylvania and Vermont did expressly protect such civilian uses
at the time....
Permissible Regulations Will Be Decided in the Future
The Court concludes its opinion by declaring that it is not the proper role of this Court to change the meaning of rights
"enshrine[d]" in the Constitution. But the right the Court announces was not "enshrined" in the Second Amendment by
the Framers; it is the product of today's law-changing decision. The majority's exegesis [analysis] has utterly failed to
establish that as a matter of text or history, "the right of law-abiding, responsible citizens to use arms in defense of
hearth and home" is "elevate[d] above all other interests" by the Second Amendment.
Until today, it has been understood that legislatures may regulate the civilian use and misuse of firearms so long as they
do not interfere with the preservation of a well-regulated militia. The Court's announcement of a new constitutional
right to own and use firearms for private purposes upsets that settled understanding, but leaves for future cases the
formidable task of defining the scope of permissible regulations. Today judicial craftsmen have confidently asserted
that a policy choice that denies a "law-abiding, responsible citize[n]" the right to keep and use weapons in the home for
self-defense is "off the table." Given the presumption that most citizens are law abiding, and the reality that the need to
defend oneself may suddenly arise in a host of locations outside the home, I fear that the District's policy choice may
well be just the first of an unknown number of dominoes to be knocked off the table....
The Court properly disclaims any interest in evaluating the wisdom of the specific policy choice challenged in this
case, but it fails to pay heed to a far more important policy choice—the choice made by the Framers themselves. The
Court would have us believe that over 200 years ago, the Framers made a choice to limit the tools available to elected
officials wishing to regulate civilian uses of weapons, and to authorize this Court to use the common-law process of
case-by-case judicial lawmaking to define the contours of acceptable gun control policy. Absent compelling evidence
that is nowhere to be found in the Court's opinion, I could not possibly conclude that the Framers made such a choice.
For these reasons, I respectfully dissent.
Source Citation:
Stevens, John Paul. "The Right to Own a Gun Is Not Guaranteed by the Constitution." Is Gun Ownership a Right? Ed. Kelly Doyle.
San Diego: Greenhaven Press, 2005. At Issue. Rpt. from "Dissenting Opinion, in Supreme Court of the United States, District of
Columbia ET AL. v. Heller." 2008. 1-46. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Guns and Violence
On Good Friday 2009, Anthony Powell shot and killed a female drama classmate as she rehearsed a scene at Henry
Ford Community College in Dearborn, Michigan. Minutes later as police arrived on the scene, he turned the gun on
himself. On New Year's Eve 2009, Matthew Dubois shot and killed his fifteen-year-old girlfriend after reading a
message posted on her MySpace page by a former boyfriend. On January 7, 2010, Darrell Evans shot and killed both
his nine-year-old daughter and her forty-two-year-old babysitter in a New York City suburb before surrendering to
police.
Stories like these appear in papers across the country every day. In fact, numerous sources, including the Centers for
Disease Control (CDC), list the United States as having the highest rate of firearm violence among industrialized
nations. According to the CDC's Fatal and Non-Fatal Injury Database, in 2004 alone 11,344 people were murdered
with a firearm. Safety expert Gavin de Becker at FamilyEducation.com writes, "Every day, about seventy-five
American children are shot. Most recover; fifteen do not." Additionally, he points out, gunshot wounds are now the
leading cause of death for teenage boys.
These statistics and the individual stories behind them have people from across the country asking why so many lives
are lost to gun violence in the United States each year. Blame is placed on everything from violence in the media, video
games, and music to the popularity of gun ownership and what some feel are overly lenient gun laws. Although the
reasons may be debated, there is no doubt that the United States is facing a gun-violence crisis.
The Second Amendment
Although Americans have had the right to bear arms for more than 200 years, considerable confusion still exists about
this right. The American Bar Association stated in the "National Coalition to Ban Handguns Statement on the Second
Amendment," which was issued June 26, 1981, "There is probably less agreement, more misinformation, and less
understanding of the right to keep and bear arms than any other current controversial constitutional issue." The Second
Amendment to the United States Constitution adopted on December 15, 1791, reads, "A well regulated militia, being
necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed."
In the Supreme Court case United States v. Miller in 1939, the Court ruled that the amendment only guarantees the
right to bear arms in the context of a state's ability to form a militia. The court ruled that self-protection was not
guaranteed by the amendment. This ruling, however, was seemingly overturned by a ruling in June 2008 in the case of
D.C. v. Heller, which clearly established the right of gun ownership for individuals rather than for a state's collective
right to form a militia. The ruling applies only at the federal level because the District of Columbia is not a state, and
the Justice Department stressed that prohibitions against carrying firearms in schools and government buildings would
still stand, as do laws regarding the sale of firearms to, or their possession by, felons or mentally ill persons. It also stated that
the "carrying of dangerous and unusual weapons" such as machine guns is not guaranteed by the Second Amendment.
Still the D.C. v. Heller decision was a major victory for supporters of gun rights. Shortly after the ruling, other suits
were filed challenging city ordinances that limit possession of firearms, citing Section One of the Fourteenth
Amendment. The Amendment reads: "No State shall make or enforce any law which shall abridge the privileges or
immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without
due process of law; nor deny to any person within its jurisdiction the equal protection of the laws." This means that
individual states cannot make a law that takes away a right guaranteed by the federal Constitution. Although the effects of the
D.C. v. Heller ruling are yet to be seen in the individual suits now being heard, it is likely to have an impact.
Gun Control Laws
Congress enacted what is considered the first major gun-control law in 1934 to regulate the sale of automatic weapons.
It was followed throughout the years by additional legislation that attempted to further control gun ownership. The year
1994 brought the passage of the Brady Bill, which was the largest attempt to control guns since 1934. The Brady Bill
imposed a waiting period of five days to purchase a handgun and required background checks. Although laws still vary
dramatically from state to state, most states have legislation aimed at reducing children's access to weapons, legislating
who can carry a concealed weapon, and prohibiting the sale of weapons to minors. Some states, however, do not
regulate secondary sales made by private owners. This is often referred to as the gun-show loophole.
Do Gun Control Laws Reduce Crime?
Whereas most states prohibit local gun ordinances, a few states still allow them. Although critics claim this infringes on
the rights of gun owners and creates a complex network of restrictions, some communities credit local ordinances with
a drop in crime. New York City, for example, credits the Sullivan Law, a controversial gun control law that requires
New Yorkers to obtain licenses to carry any firearms small enough to conceal. Although the law had been on the books
for more than one hundred years, a 1994 program designed to deter people from carrying guns in high-crime areas gave
it a boost. According to FBI crime statistics, New York City is now the safest of the ten largest cities in the United
States. Syracuse.com quoted Mayor Michael Bloomberg in a December 28, 2009 article as stating, "Strict gun laws and
specialized programs targeting dangerous areas combined with good old-fashioned policing have kept crime on the
decline in New York City for much of the decade … So far this year, there have been 461 murders, which means the
city is on track to have the lowest number since record keeping began in the 1960s."
Gun ordinances have numerous critics, however, and may soon be deemed unconstitutional due to the D.C. v. Heller
ruling of 2008. The National Rifle Association posted a statement regarding the problem with multiple firearm
ordinances on their Web site on December 16, 2006: "The problem with local firearm ordinances is also one of sheer
variety. Where no uniform state laws are in place, the result can be a complex patchwork of restrictions that change
from one local jurisdiction to the next … individuals who travel with firearms for personal protection are at risk of
breaking the law simply by crossing from one municipality to another."
Guns in the Home
Although some gun owners argue that they need guns in the house for self-protection, relative to the number of
shootings in the United States, few guns are used this way. According to the CDC, in 2006 only 360 people were killed
in self-defense, roughly 1 percent of the total 30,896 firearm deaths that year. Children are at
particular risk when guns are kept in the house. A 2007 study conducted by the CDC found more than 1.7 million
children live in homes with loaded and unlocked guns, and more than 500 children die in accidental shootings every
year. Further, Gavin de Becker points out at FamilyEducation.com that not only do most fatal accidents happen at
home, but adolescents are "twice as likely to commit suicide if a gun is kept in the home."
Schools and Guns
The massacre at Columbine High School in Littleton, Colorado, in 1999 caught the attention of the nation and brought
the issue of gun control to the forefront. Eric Harris and Dylan Klebold killed twelve students and one teacher and
injured another twenty-one students before committing suicide. It was the deadliest shooting at an American high
school, and yet it only accounted for a small fraction of the total number of youth killed with guns in a year.
Following the tragedy at Columbine, the U.S. Secret Service and the U.S. Department of Education created the Safe
School Initiative to examine thirty-seven separate incidents of school shootings and other school attacks for
information to help parents or administrators prevent future attacks. The report offered several key findings about
school violence. Perhaps most notably, it indicated that attackers generally form a plan over a period of time, meaning
that it is possible to discover and thwart their efforts. The report points out, "Prior to most incidents, other people knew
about the attacker's idea and/or plan to attack. In most cases, those who knew were other kids—friends, schoolmates,
siblings, and others. However, this information rarely made its way to an adult." This finding is particularly important
because it suggests that other youth are often the best sources of information about an impending incident and can alert
authorities who can help. Additionally, the report found that most attackers acquired their guns from their own homes.
Gun Shows
Four of the guns used in the shooting at Columbine High School were obtained at gun shows. Under certain
circumstances, gun owners can sell guns at gun shows without a waiting period or a background check. Colin Goddard,
a student who survived four bullet wounds during the Virginia Tech tragedy, a school shooting that claimed the lives of
32 people in 2007, worked undercover for the Brady Campaign to show how easy it was to purchase a gun. He walked
into several gun shows and purchased firearms with no background check, without showing any identification, and by
paying in cash. According to the Brady Campaign site, "The Brady Law requires criminal background checks of gun
buyers at federally licensed gun dealers, but since unlicensed sellers are not required to do background checks, this
loophole causes particular problems at gun shows which give these unlicensed sellers a guaranteed venue. In most
states convicted felons, domestic violence abusers, and those who are dangerously mentally ill can walk into any gun
show and buy weapons from unlicensed sellers, who operate week-to-week with no established place of business,
without being stopped, no questions asked." Legislation in Congress would require Brady criminal-background
checks for all buyers at gun shows, but it is currently stalled.
Violence in Video Games and Movies
The role of video games in increasing rates of violence continues to be hotly debated, particularly following the
Columbine tragedy, because Harris and Klebold were avid fans of violent games, including the popular Doom.
However, the courts have consistently declined to recognize a connection between violent video games and acts of violence.
Although some parents of the victims of Columbine attempted to bring lawsuits against the video game manufacturers,
none were successful. Courts also ruled against a California law requiring youth to be at least eighteen to
purchase video games with violent content, citing free speech and stating that the evidence linking video games and
violence is weak.
Although the courts have not been convinced, Craig A. Anderson, writing in 2003 in the Psychological Science Agenda,
a publication of the American Psychological Association, believes sufficient evidence exists to conclude that video games
do increase violent behavior. He writes, "Some studies have yielded nonsignificant video game effects, just as some
smoking studies failed to find a significant link to lung cancer. But when one combines all relevant empirical studies
using meta-analytic techniques, five separate effects emerge with considerable consistency." He continues by saying
that violence in video games is "significantly associated with: increased aggressive behavior, thoughts, and affect;
increased physiological arousal; and decreased prosocial (helping) behavior."
Gun Safety
Critics lament that teens have sex education in the schools and state-required drivers' education, but they are not taught
gun safety. Writing in the Milwaukee Journal Sentinel on August 23, 2009, James E. Causey talks about meeting with a
small group of children involved in an anti-gun and anti-violence campaign at the Safe & Sound program in
Milwaukee. The children talked about their experiences with guns; all of them had handled guns that were either loaded
or might have been loaded. One student stated that he was only seven the first time he held a gun. Causey argues, "If
guns are as accessible as is sex these days—shouldn't they be taught how to protect themselves? … Guns are readily
obtainable by young people. Social service agencies should be as concerned with teaching safe, responsible gun use as
they are concerned about teaching safe sex."
Conclusion
As courts debate gun-control laws and researchers look at the possible reasons for increases in gun violence,
communities are organizing to address this epidemic with or without the courts' help. Neighborhood and grassroots
organizations are promoting gun safety and awareness through programs that offer free gun locks, educate residents
about the dangers of having guns in the home, and encourage teens to report peers engaged in gun-related activities.
Stronger gun-control laws are beneficial, but real change will depend on community groups actively working together
to change the country's mindset toward guns and gun violence.
Source Citation:
"Guns and Violence." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
The Increased Availability of Guns Reduces Crime
John Luik, a Canadian philosopher and health policy analyst, has worked at several conservative Canadian think tanks,
including the Niagara and Fraser institutes.
Studies show that the chance of being the victim of a violent crime decreases as the availability of guns increases. In fact, citizens
with guns have thwarted several potentially tragic school shootings. Laws banning guns will not, therefore, protect people.
Indeed, Virginia Tech's campus gun ban was unable to protect the students murdered there on April 16, 2007.
[In early May 2007], a Grade 9 student in Calgary [Alberta, Canada] confessed to his parents he was planning to carry
out an attack on some of his teachers on the April 20 anniversary of the 1999 Columbine massacre [in Colorado, in
which fifteen died]. Then came the [April 16, 2007,] tragedy at Virginia Polytechnic Institute and State University
[Virginia Tech]. Invariably, in these circumstances, the talk in Canada and the U.S. turns to gun control.
Among the first off the mark was California Senator Dianne Feinstein, who said, "It is my deep belief that shootings
like these are enabled by the unparalleled ease with which people procure weapons in this country." Feinstein, who has
had a concealed-handgun permit herself, was quickly followed by presidential candidate John Edwards, who opined
that the Virginia shooting showed the need for new gun restrictions. Indeed, the subtext for much of the media
coverage of the Virginia Tech story, both in Canada and the U.S., was that bulletproofing the United States from
subsequent carnage requires tough new controls on guns.
Canadians often feel unjustifiably smug about the "effectiveness" of gun control here versus in the U.S. But research
has shown that even with widely differing gun ownership rates and regulations, neighbouring Canadian and U.S.
communities that are socially, economically and demographically alike have similar homicide rates.
The chances of innocent people being the victims of violent crime, including murder, decrease—not increase—when access to
guns is made easier.
Looking at the Evidence
If the evidence suggests anything about gun control, it is that the chances of innocent people being the victims of
violent crime, including murder, decrease—not increase—when access to guns is made easier. The more people who
own guns, the less violent crime there is. The evidence for this comes in a number of forms, most definitively in John
Lott's massive study of guns and crime in the U.S. from 1977 to 1996, titled More Guns, Less Crime.
Lott's question was whether allowing people to carry concealed handguns deters violent crime. His assumption was that
criminals are rational in that their reaction to an increase in the number of armed potential victims is to commit fewer
crimes. Lott looked at the FBI's crime statistics for all 3,054 U.S. counties. He found that, over the period of his study,
gun ownership had been increasing across the country—from 27.4 per cent in 1988 to 37 per cent by 1996—yet crime
rates had been falling. More specifically, states with the greatest decrease in crime rates were those with the fastest
increases in gun ownership.
According to Lott, for each additional year that laws allowing people to carry concealed handguns were on the books,
robberies declined by two per cent, rapes by two per cent and murders by three per cent. If all states that did not permit
carrying concealed handguns had allowed them in 1992, for instance, there would have been 1,839 fewer murders,
3,727 fewer rapes, and 10,990 fewer aggravated assaults.
Allowing Access to Guns
The importance of allowing citizens access to guns as a life-saving measure is even more evident when it comes to
instances of multiple-victim public shootings at schools. Lott, for instance, looks at the eight public-school shootings
that occurred from 1997 to 2000. In two of these cases—Pearl, Miss., and Edinboro, Pa.—the attacks were stopped by
citizens with guns. Interestingly, as Lott notes in his 2003 book, The Bias Against Guns, during Virginia's other
university shooting, at the Appalachian School of Law in January 2002, it was three students, two of them armed, who
overcame the attacker and prevented further killing.
Lott and co-researcher William Landes also looked at all multiple-victim public shootings in the U.S. from 1977 to
1995. Over this period, 14 states adopted right-to-carry gun laws, and the number of such shootings declined by 84 per
cent, with deaths in the shootings reduced by 90 per cent.
It is sadly ironic that the Virginia Tech story might have been different if a bill to prohibit so-called "gun-free zones,"
such as the one at Virginia Tech, had passed the Virginia General Assembly in [2006]. That legislation was drafted to
prevent state universities like Virginia Tech from prohibiting students with concealed handgun permits from carrying
guns on campus.
Once the grieving has abated, there will be time to look at the events at Virginia Tech more through the lens of policy
and less through emotion. Then we will find that, while it is clear no government policy could have prevented such
horrible sadness, it is equally clear that allowing people to carry guns was not the real cause.
Source Citation:
Luik, John. "The Increased Availability of Guns Reduces Crime." Guns and Crime. Ed. Tamara L. Roleff. San Diego: Greenhaven
Press, 2000. At Issue. Rpt. from "Bulletproofing Canada: Gun Control Won't Prevent School Shootings. But Having Guns Might
Help Individuals Mitigate Them." Western Standard (21 May 2007): 41-45. Gale Opposing Viewpoints In Context. Web. 10 Apr.
2012.
The Claim That Increased Gun Availability Reduces Crime Is Unfounded
Sabina Thaler is a 2007 graduate of Virginia Tech who lives in Roanoke, Virginia.
Several flaws have been found in the oft-cited theory that communities where people are allowed to carry concealed weapons
experience lower crime rates. John Lott and David Mustard's premise that crime victims uniformly use guns in self-defense is
false. Women, for example, are less likely to use guns to protect themselves and are in fact in greater danger when doing so.
Moreover, critics of Lott and Mustard's research found that when using actual crime data, in the rare instance when increased
gun availability does lead to less crime, the difference is not significant.
Joe Painter presented a compelling challenge in the June 14 [2007] issue of The Roanoke Times.
By way of a specific scientific analysis, Painter hypothesized that more guns equal less crime. Because I love a good
debate, and because I believe my rights to not be searched or shot are greater than your right to bear arms, I am
accepting this challenge.
While Painter develops an excellent hypothesis, he commits the serious fallacy of approaching the data from the
perspective of a lawyer as opposed to that of a scientist. The difference between lawyers and scientists is that lawyers
seek to prove their side right, whereas scientists go to great lengths to prove themselves wrong.
I admit John Lott and David Mustard's data, to which Painter refers, sound convincing. Indeed, if these results are
accurate, we can no longer find irony in conservatives who are simultaneously pro-life and pro-gun.
A Study Plagued with Errors
Luckily for the challenge at hand, Lott and Mustard's study is plagued with errors. For one, this study relies on the
premise that guns are uniformly used for protection. In other words, when using a gun defensively, a person will
behave the same regardless of his or her age, sex, social status, gang affiliation, etc.
If this seems plausible, read Stephen Schnebly's analysis of the impact of the victim on defensive gun use. Schnebly
finds that victims behave differently when using a gun for protection. A woman is much less likely to fire a gun in
defending herself than is a man. In fact, with the exception of domestic situations, statistically a woman is in more
danger if she attempts to use a firearm to protect herself than she would be without a gun.
Readers might be thinking: "Schnebly's theory does not necessarily disprove Lott and Mustard's theory. Couldn't there
still be a chance that, even when controlling for this disparity, guns will prove their superior powers of crime control?"
No.
In 1998 two scientists, Hashem Dezhbakhsh and Paul Rubin, sensed something fishy with Lott and Mustard's study.
Dezhbakhsh and Rubin decided to see what would happen if they replaced Lott and Mustard's "dummy" variables with
the crime rate that actually occurred. To the National Rifle Association's dismay, they found that Lott and Mustard's
theory did not hold up in the real world.
Concealed gun laws are correlated with an increase in crime; for those few instances where guns are correlated with lessened
crime, the difference is much less significant.
In many cases, Lott and Mustard's concealed gun laws are correlated with an increase in crime; for those few instances
where guns are correlated with lessened crime, the difference is much less significant than Lott and Mustard theorized.
The reason people don't tout Lott and Mustard's analysis is that it was based on several problematic assumptions
that ultimately devastated and discredited their findings.
Crime is almost certainly caused by multiple factors. Guns should not be America's whipping boy. Nevertheless, guns
make taking lives too easy. We cannot identify and detain every potential criminal; but, by eliminating firearms, we
will make it much more difficult for robbers to become murderers.
Source Citation:
Thaler, Sabina. "The Claim That Increased Gun Availability Reduces Crime Is Unfounded." Guns and Crime. Ed. Tamara L. Roleff.
San Diego: Greenhaven Press, 2000. At Issue. Rpt. from "Guns = Less Crime: Equation Doesn't Hold Up." Roanoke (VA) Times 22
July 2007. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Health Care Issues
In recent years, the availability and affordability of health insurance in the United States has become the subject of
much debate. The United Nations' Universal Declaration of Human Rights lists medical care among the basic human
rights to which all people are entitled. However, in 2009 about one in six Americans had no health insurance at all. For
many people who are insured, the cost of coverage is a financial hardship. This situation has led some people to call for
the government to provide health insurance for all citizens. Others, however, are skeptical of government's ability to
efficiently manage health insurance and oppose any plans that involve government. The issue is made more urgent by
rapidly rising health care costs that threaten to overwhelm the country's current system of health insurance, and the
national economy in general. Health-care reform has become one of the most important issues in contemporary
American politics.
The Basics of Health Care
In most developed countries, health care systems involve government control or sponsorship. For instance, in Great
Britain, Scandinavia, and the countries of the former Soviet Union, the government controls almost all aspects of health
care, including access and delivery. For the most part, health services in these countries are free to everyone; the
systems are financed primarily by taxes. Other countries, such as Germany and France, guarantee health insurance for
almost all their citizens, but the government plays a smaller role in managing health care. Both systems are financed at
least in part by taxes on wages. The World Health Organization (WHO) World Health Report 2000 ranked France's
health care system the best in the world.
The U.S. government, by contrast, does not pay for most of its citizens' health care. Generally, Americans receive
health care through employer-sponsored insurance, or they arrange to pay for insurance on their own. Like all forms of
insurance, health insurance operates by pooling the resources of a group of people who face similar risks. This creates a
common fund that members can draw upon when needed. Each person in the group pays a certain amount, called a
premium, every month. These premiums are used to cover the medical expenses of group members who become sick or
injured.
Health Insurance in the United States
Today, most Americans receive health insurance through their place of work. Employers typically pay for part of the
premiums. Most employer-sponsored plans are administered through payroll contributions. People who are self-employed or whose employers do not provide health insurance must purchase individual health insurance. Individual
plans are generally more expensive than group plans. Certain low-income individuals and families may be eligible for
Medicaid, a form of government-sponsored health insurance. In 1997 the U.S. government introduced CHIP
(Children's Health Insurance Program) to assist the children of families who do not qualify for Medicaid but cannot
afford the cost of private insurance. People over sixty-five years of age and people with certain disabilities may be
eligible for Medicare, another federally funded health insurance plan.
There are two basic types of health insurance plans in the United States: indemnity plans and managed care. Under an
indemnity plan (also called a fee-for-service plan), the insurance company pays a percentage of the cost of medical
services provided (typically 70 to 80 percent). The insured person is responsible for the remaining 20 to 30 percent.
Indemnity plans do not limit patients in their choice of doctors or hospitals. Managed care controls the use of medical
services in an effort to keep costs low. An example of a managed-care plan is a health maintenance organization
(HMO). Participants in an HMO plan are limited in their choice of doctors and hospitals. They must receive medical
services at HMO-operated facilities or visit physicians and hospitals that are affiliated with the plan. However, the cost
to the participant is usually much lower, limited to a small co-payment for visits to a doctor or hospital emergency
room.
Each type of plan has advantages and drawbacks. Indemnity plans are more expensive than managed care plans but
they offer great flexibility. Managed care plans may require individuals to choose primary care physicians, doctors who
monitor their health care. Plan participants must consult their primary care physicians to get a referral to a specialist.
Managed care plans emphasize preventive care such as office visits and immunizations, but they may limit coverage for
medical tests, surgery, mental health care, and other support. By contrast, indemnity plans may not pay for some types
of preventive care, such as checkups and immunizations.
Comparing Systems
Both government-based health care systems and the mixed public/private system of the United States offer benefits but
also have serious flaws. The former provide universal coverage, guaranteeing access to health care regardless of
income or employment. Most government-based plans also provide better care for pregnant women and newborn
babies than the U.S. system. However, supporting these health care systems requires higher levels of government
spending than the public/private system.
Furthermore, the goal of providing good care for everyone cannot always be reached in government-based systems
because of limited money and resources. The pressure to keep spending under control leads to tight government
restrictions. As a result, patients in some countries, such as Canada and Sweden, have sometimes had to wait a long
time for certain services (although other countries, such as France, have managed to avoid this problem).
The health care system in the United States is more flexible than government-controlled systems because providing
universal health care and containing costs are not its main goals. In the United States, patients can obtain virtually any
kind of medical service. However, when a person becomes ill, treatment will usually depend on the nature of his or her
health insurance. Someone who does not have insurance or the resources to pay the health care provider may not be
able to get the necessary treatment.
The Debate Over Health-Care Reform
In recent years many politicians, academics, and citizens have been advocating the position that the American health-care system is in need of comprehensive reform. Proponents of this position point to three major factors: the high costs
of health care, the relatively low quality of care, and the large number of persons who are uninsured.
The costs of health care are escalating rapidly. Health-care costs in the United States more than doubled between 1997
and 2007. Health-insurance premiums have risen five times as fast as wages. Americans spend more on health care than
people in all other nations. Per capita spending in the United States is more than $5,400 per person per year, a figure
that is 134 times higher than the average of other industrialized nations, according to public health researchers Susan
Starr Sered and Rushika Fernandopulle. In 2002, Americans spent a total of $1.6 trillion dollars on health care,
including $486 billion on hospitals, $340 million on doctors, $162 billion on prescription drugs, and $139 billion on
nursing home and home health care. Some health economists have predicted that America’s health spending will reach
$4 trillion by 2015.
Statistical measures and indicators suggest that, despite this high level of health-care spending, the quality of health
care available in the United States is low. According to statistics compiled by the U.S. Census Bureau and the National
Center for Health Statistics, a baby born in the United States has an average life expectancy of 77.9 years. While the
life expectancy is higher than in previous years, it places the United States 42nd in the world, down from 11th place
two decades ago. Among the countries with higher average life expectancies than the United States are most European
nations, Japan, Canada, Australia, Singapore, Jordan, and Guam. Another measure of a nation’s health care system is
infant mortality rates. Here again, the United States lags behind comparable industrialized countries. America’s rate of
6.8 deaths for every 1,000 live births ranks it 41st among the world’s nations, and is comparable with such poorer
countries as Cuba and Croatia. For African Americans, the rate is 13.7 deaths, the same as in Saudi Arabia.
Finally, many millions of Americans are forced to go without health insurance entirely. In 2006, 46.6 million
Americans were without health insurance. A 2009 study found that 86.7 million Americans had been without health
insurance at some point in 2007 or 2008, nearly 75 percent of whom had been uninsured for six months or more. This
problem is exacerbated by the rising costs of care. Rising costs are the main reason why hundreds of thousands of
companies have stopped offering health coverage for their workers. Only 60 percent of Americans received health
insurance through their employers in 2007, down from 69 percent in 2000.
For people who are underinsured or who lack insurance, getting health coverage can create a financial crisis. The high
number of uninsured also has health and social implications. While laws mandate that hospital emergency rooms
take in all people, including the poor and uninsured, these people still often do not get the medical care they need,
especially at the appropriate time. "The uninsured are less likely to see a doctor … and are less likely to receive
preventative services," notes Arthur Kellerman, a medical professor and co-chair of an Institute of Medicine (IOM)
panel that has studied the problem. Thus they often wait until medical problems become severe before seeking medical
care, creating more costs for the health care system that are passed on to others.
Proposed Solutions
Health-care reform has become one of the most important issues in American politics. It took center stage for the entire
first year of the presidency of Barack Obama. Advocates of health-care reform have proposed several different systems
that they believe would solve these and other problems. Among the most popular proposed solutions are a single-payer
system, a rationing plan, and mandates.
A single-payer system is defined as a health-care system in which doctors, hospitals, and other health care providers
are paid out of a single fund. (Doctors and other providers can work in either the government or the private sector.) Canada
utilizes such a system nationwide, and Medicare is a domestic example. In Canada, for example, doctors are paid by a
fund with money taken from taxes collected by the national and provincial governments. The government collects
funds, sets fees for medical services, and pays health care providers.
Proponents of a single-payer system argue that it would simplify America’s patchwork system of multiple insurance
providers and greatly reduce administrative costs in health care. Doctors would benefit by not dealing with multiple
insurance forms. Nonaffluent patients would benefit from more affordable health insurance. Opponents argue that a
single-payer system is too radical a change from the American status quo. They also argue that having the government
control medical prices and costs will inhibit the development of new medicines and technologies and may compromise
the quality of health care.
The term "rationing" means dividing up scarce resources among people who want access to them. Many experts have
concluded that offering every type of medical service to all the people who need or want it is not possible. They believe
it is important to develop guidelines on what type of care should be offered and who should be eligible for it. At the
small-scale level, allocation may affect decisions about individual care. If five people need a heart transplant, but only
one heart is available, a decision must be made about who should receive the transplant. On a larger scale, allocation
might involve a government's decision on how much money to spend on expensive drugs and high technology
equipment for the elderly and how much to spend to prevent childhood diseases.
Advocates of a rationing plan argue that rationing care is the only way to rein in the growth of health-care costs. They
also argue that it is a way to ensure that everyone receives some basic level of care, which is key to preventing the
widespread health problems faced by the uninsured. Opponents of rationing argue that it raises basic issues of fairness.
Who would set the guidelines about what care is available? Others believe that health care services should be
available to anyone who needs them. They argue that government and the medical community have a moral obligation
to ensure that everyone has access to the health care they require. Finally, many opponents of rationing are opposed to
having the government play a role in private medical-care decisions. In the summer of 2009, opponents of rationing
said that government-run care rationing would result in the creation of "death panels" in which government bureaucrats
would literally decide who should live or die. This assertion had no basis in fact, but it was sufficient to frighten many
Americans, especially elderly Americans, who were already concerned about how proposed health-care reform might
affect them personally.
Another approach, being tried by the state of Massachusetts, builds on America’s system of private health insurance.
Under legislation passed in 2006, all residents were required to obtain health care coverage, either by purchase or
through their employers. In addition, people under a certain income threshold receive subsidies from the state
government. A state agency, the Connector Authority, helps package insurance options and negotiates rates from
insurance companies. Part of the theory is that by requiring everyone to buy health insurance, more funds
become available to provide universal health coverage. The Massachusetts plan had some initial success,
enabling 150,000 previously uninsured state residents to obtain affordable coverage. But some argue that maintaining
universal coverage will become increasingly expensive, especially five or ten years in the future. Interestingly, while
running for the 2008 Republican presidential nomination, Massachusetts Governor Mitt Romney explicitly promised
not to expand the Massachusetts model to the rest of the nation, arguing against a "one-size fits all" approach, even as
several Democratic candidates were touting Massachusetts as an inspiration for their own health care proposals.
Source Citation:
"Health Care Issues." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Access to Health Care Is a Human Right
"For decades ... we have accepted the barbaric consequences of a profit-driven health care system that bullies and denies us
basic freedoms."
In this viewpoint Helen Redmond, a Licensed Clinical Social Worker and a member of Chicago Single-Payer Action Network
(CSPAN), argues that the profit-driven health care system in the United States places an oppressive burden on individuals and
families. Mental illnesses among the uninsured often go untreated, or benefits are limited, with the result that some uninsured
end up in prison. Substance abuse problems often are not covered, or coverage has been severely curtailed, while other kinds of
illnesses drive the uninsured into credit card debt, or force them to continue working in jobs they do not like in order to hold on
to health care benefits. For-profit health care is a form of bondage.
As you read, consider the following questions:
1. The author argues that many mentally ill people who are eligible for health care coverage do not receive it. What are
some of the reasons for this?
2. How serious is the shortage of methadone treatment programs? Does the author provide an explanation for the
shortage?
3. What is the average amount of credit card debt among the uninsured, according to the author?
At the core of the idea that health care is a human right is freedom. The for-profit health care system in the United
States severely restricts our freedom in a number of subtle and not so subtle ways. Instead of freedom there is fear.
The health care crisis impacts every aspect of our lives down to the most seemingly insignificant personal decisions we
make. This national bully terrorizes and forces us to live in fear. It determines what is possible and not possible, it
crushes hopes and dreams and imprisons people into lives they did not choose. For decades in this country we have
accepted the barbaric consequences of a profit-driven health care system that bullies and denies us basic freedoms.
Therefore, we are not free.
How does the bully do this? Let me count the ways.
Health Care Terror and the Mentally Ill
Arguably one of the most inhuman consequences of the health care crisis is the predicament of the mentally ill. People
with serious mental illness encounter stigma, discrimination and difficulty accessing treatment. Millions of adults and
children suffer from a variety of treatable mental health problems: depression, anxiety, schizophrenia, and pervasive
developmental disorder. But insurers avoid covering those with a diagnosed mental disability because of the chronic
nature of the problem, which means treatment is often needed for years, and medications are expensive. This cuts into
profit margins. Moreover, mental illness is not covered on a par with physical illness by most health insurance policies.
The number of visits to mental health providers is limited, typically 20 sessions with a therapist per calendar year, and
admission to inpatient psychiatric hospitalization is often restricted to fourteen days and not reimbursed at a hundred
percent. This discrimination is perfectly legal, and even in states where parity laws have been passed, coverage is still
uneven. A study titled "Design of Mental Health Benefits: Still Unequal After All These Years" found that forty-eight
percent of workers in employer-sponsored health plans were subjected to the limiting of inpatient days, caps on
outpatient visits, and higher co-payments. Leaders in the field of mental health have made the case over and over again
that treatment must be both affordable and open-ended because mental illnesses don't respond to rigid timetables.
The barriers for those with insurance coverage are numerous, but for the mentally ill who are uninsured they are almost
insurmountable. In major cities, streets and shelters are full of mentally ill people who are not receiving any type of
treatment. Most qualify for Medicaid. The problem is actually getting Medicaid coverage. For people with a serious
and persistent mental illness—especially the homeless—to negotiate the system and gather all the information needed
to apply is almost impossible. They need proof of homelessness and income, a birth certificate, photo identification,
copies of bills and a mandatory interview with a case worker. Good luck. The consequence is hundreds of thousands of
mentally ill are eligible for coverage but don't get it. Instead, they wander the streets talking to themselves, hearing
voices, dirty, hungry, and begging for money.
And they end up in jail. It's shocking: jails and prisons have become de facto psychiatric treatment facilities for the
mentally ill. The US Department of Justice reports about sixteen percent of inmates—more than 300,000 people—
[have mental illnesses]. One study found that Los Angeles County Jail and Rikers Island in New York City each held
more people with mental illness than the largest psychiatric inpatient facility in the United States. In fact, Los Angeles
County Jail, to its shame, has become the largest mental health care institution (if you can call a penal institution such a
thing) in the country. The jail treats 3,200 seriously mentally ill prisoners every day! For many, it's the first time they've
ever received treatment, and some inmates improve quickly. But once they are dumped back on the streets without
structure, access to counselors, and medication, they deteriorate. Homeless, delusional, and out of control, they are
inevitably rearrested for behaviors related to their untreated mental illness.
The mentally ill are not free.
Health Care Terror and the Addicted
Those with addictions are similarly discriminated against. Addictions to alcohol, opiates, crack/cocaine, and
prescription drugs are mental health problems that need ongoing treatment. Here again, insurers restrict benefits to save
money. Inpatient treatment used to be twenty-one days; now it has been cut in half to ten, and some plans provide even
fewer days. Outpatient treatment is typically twenty visits with a therapist per calendar year. For people struggling with
a long-standing addiction, twenty sessions is a cruel joke.
The shortage of treatment slots results in millions being denied care. According to the Illinois Alcohol and Drug
Dependence Association, in 2004, 1.5 million Illinois residents didn't receive treatment because they couldn't afford it.
A report by Join Together, a national resource center, reported that in San Francisco, 1,500 drug and alcohol users were
shut out of treatment daily.
Methadone maintenance, despite being the most successful and cost-effective treatment for heroin addiction, is in seriously short supply. There are roughly 810,000 heroin addicts and only 170,000 funded methadone treatment slots. The wait lists are legendary: at one point in the state of Washington the wait was up to 18 months, and in New York
there were 8,000 people on a waiting list! In Columbus, Ohio, it took Heather Bara eighteen months to get into a
methadone program. While waiting, she overdosed twice.
The drug addicted are not free.
The Danger of Bankruptcy
The medical-industrial complex is an enormous part of the economy, and health care spending now accounts
for 16 percent of Gross Domestic Product. Half of all personal bankruptcies are caused by illness or medical bills. The
number of medical bankruptcies has increased by 2,200 percent since 1981. Have you ever tried to pay back half a
million dollars for an unplanned and uninsured "stay" in an intensive care unit? Shit-out-[of-]luck stroke that I had! But
even those with insurance have good reason to fear bankruptcy. Just ask the parents of three-year-old Elly Bachman.
She was bitten by a snake. The treatment for the bite—including antivenins and several surgeries to save the leg—cost
the family nearly $91,000 after insurance paid out. The hospital, in a moment of charity, waived $49,000. Now the
Bachmans owe $42,000. They have set up a website. Go to www.ellysnakebitefund.org to make a donation.
The Bachmans are not free.
Credit/debit cards are increasingly used to pay for co-pays, deductibles, medication, medical supplies, routine exams,
and diagnostic testing. An MRI costs over one thousand dollars. If you had a suspicious mass in your brain, would you put the MRI on your Visa? MRI: one thousand dollars. Hospital charges: two thousand dollars. Medication: three hundred dollars. Peace of mind that you don't have a malignant brain tumor: PRICELESS! Except there is a price. Many
hospitals and clinics prominently display their price list like a menu, as if purchasing health care was akin to going to a
restaurant: I'll have the mammogram, well done, please.
A study titled Borrowing to Stay Healthy: How Credit Card Debt Is Related to Medical Expenses by Cindy Zeldin and
Mark Rukavina (www.accessproject.org) illustrates how deeply indebted millions of people are due to the high cost of
health care. The cost of health insurance continues to outpace inflation and wage growth. In other words, health care is
more expensive and there is less money to pay for it. Now, about 29 million adults have medical debt and—no surprise
here—debt acts as a disincentive to filling prescriptions and following through with recommended treatments or diagnostic tests. If there were still debtors' prisons, 29 million people would be in them.
According to the study, the uninsured carry an average of $14,512 in medically related credit card debt, while those with insurance carry $10,973. The average credit card debt for those in households with children was $12,840, and [for]
those without children $10,669. The numbers can't convey the reality of what debt costs families and individuals in
terms of quality of life. It means parents can't buy their children other things: a computer, trumpet lessons, Hannah
Montana tickets (if you could even get them), or a week in Disneyland. For adults, it means a working life dedicated to
paying off medical debt instead of buying a home or taking vacations.
Using Plastic to Pay for Care
The credit card industry has recognized the growing market for patient out-of-pocket costs and has designed "medical
credit cards" specifically for medical expenses. Business is good. In 2001, patients charged $19.5 billion in health care
services to Visa cards. Highmark Inc., a health insurer in Pennsylvania, offers a "Health Care Gift Card." The card
costs $4.95 (plus shipping and handling) and can be loaded with as little as $25 or as much as $5,000. Now you can
give your partner that colonoscopy the proctologist recommended. Or buy yourself that brain shunt for your birthday.
Oops, I don't think $5,000 will cover it. Put the outstanding balance on another credit card.
We are not free.
Medical debt is related to another crisis in this country: the mortgage crisis. Another finding in the study by Zeldin and
Rukavina is this: among those households that refinanced their homes or took out a second mortgage, [sixty percent]
paid down credit cards with the money. A recent story in the Chicago Reporter illustrated the connection between the
two. Edward and Thaida Booker bought a home in 2001 with a loan carrying a 6.2 percent interest rate. She was
diagnosed with cervical cancer a couple years later and they had to refinance their mortgage at a higher interest rate to
access some of the equity to pay off unexpected medical bills. Thaida died, and without her income Mr. Booker was on
his own to pay a mortgage that had gone from $800 a month to $1,425. The problem is Booker is retired and his
pension and disability payments can't cover the new amount. With help from a housing counselor he was able to
negotiate new terms with his lender, but still has to rent out a room in the house and work side jobs to make the
mortgage payment.
Edward Booker is not free.
Health Care and Employment
Have you ever stayed in a job that you hated because of the health insurance and [because] you or a family member had
a health condition that required frequent doctor visits, labs, and expensive medication? It's called job lock. An article in
BusinessWeek titled "Held Hostage by Health Care—Fear of Losing Coverage Keeps People at Jobs Where They're
Not Their Most Productive" exposes an aspect of the health care crisis that has been little discussed. Workers are
chained to jobs for one reason: the employer's health insurance. The article alleges there is "[a] health care refugee in every office." I would wager there are millions of Americans who are desperate to leave their jobs, but the prospect of medical bankruptcy or a health emergency without coverage makes quitting impossible. So we put up with the
boredom and abuse (and think we are "lucky" to have medical benefits), but if insurance wasn't tied to employment we
could tell our boss to "Take this job and shove it!"
Kathryn Holmes Johnson is a health care refugee profiled in the BusinessWeek article. For a decade Johnson wanted to
leave her job to find one that she really loved, but her husband and two children all have asthma and other health
problems. The entire family is covered through her medical plan. The $2,000 a year in co-payments for the family's
prescription drugs would have turned into $85,000 without insurance. When she considered changing jobs, the critical
factor was the prescription drug coverage that a new employer would offer. She wondered, "In what other country
would that be the deciding factor?" Only in America—a nation of health care hostages.
We are not free.
Source Citation:
Redmond, Helen. "Access to Health Care Is a Human Right." Universal Health Care. Ed. Susan C. Hunnicutt. Detroit: Greenhaven
Press, 2010. Opposing Viewpoints. Rpt. from "We Are Not Free: Health Care as a Human Right." Counterpunch (21 Feb. 2008).
Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Health Care Is Not a Right
"All legitimate rights have one thing in common: they are rights to action, not to rewards from other people."
Leonard Peikoff is the founder of the Ayn Rand Institute and the author of Objectivism: The Philosophy of Ayn Rand. In this
viewpoint he argues that treating health care as a human right requires that services belonging to some people—doctors—be given free of charge to others. Health care can be treated as a right only by violating the personal rights of doctors.
As you read, consider the following questions:
1. What does the author mean when he says that all legitimate rights are rights to action?
2. Does the author believe that most people can afford health care? Why or why not?
3. What moral principle does the author argue doctors must assert?
Most people who oppose socialized medicine do so on the grounds that it is moral and well-intentioned, but
impractical; i.e., it is a noble idea—which just somehow does not work. I do not agree that socialized medicine is moral
and well-intentioned, but impractical. Of course, it is impractical—it does not work—but I hold that it is impractical
because it is immoral. This is not a case of noble in theory but a failure in practice; it is a case of vicious in theory and
therefore a disaster in practice. I want to focus on the moral issue at stake. So long as people believe that socialized
medicine is a noble plan, there is no way to fight it. You cannot stop a noble plan—not if it really is noble. The only
way you can defeat it is to unmask it—to show that it is the very opposite of noble. Then at least you have a fighting
chance.
What is morality in this context? The American concept of it is officially stated in the Declaration of Independence. It
upholds man's unalienable, individual rights. The term "rights," note, is a moral (not just a political) term; it tells us that
a certain course of behavior is right, sanctioned, proper, a prerogative to be respected by others, not interfered with—
and that anyone who violates a man's rights is: wrong, morally wrong, unsanctioned, evil.
Now our only rights, the American viewpoint continues, are the rights to life, liberty, property, and the pursuit of
happiness. That's all. According to the Founding Fathers, we are not born with a right to a trip to Disneyland, or a meal
at McDonald's, or a kidney dialysis (nor with the 18th-century equivalent of these things). We have certain specific
rights—and only these.
Why only these? Observe that all legitimate rights have one thing in common: they are rights to action, not to rewards
from other people. The American rights impose no obligations on other people, merely the negative obligation to leave
you alone. The system guarantees you the chance to work for what you want—not to be given it without effort by
somebody else.
The right to life, e.g., does not mean that your neighbors have to feed and clothe you; it means you have the right to
earn your food and clothes yourself, if necessary by a hard struggle, and that no one can forcibly stop your struggle for
these things or steal them from you if and when you have achieved them. In other words: you have the right to act, and
to keep the results of your actions, the products you make, to keep them or to trade them with others, if you wish. But
you have no right to the actions or products of others, except on terms to which they voluntarily agree.
To take one more example: the right to the pursuit of happiness is precisely that: the right to the pursuit—to a certain
type of action on your part and its result—not to any guarantee that other people will make you happy or even try to do
so. Otherwise, there would be no liberty in the country: if your mere desire for something, anything, imposes a duty on
other people to satisfy you, then they have no choice in their lives, no say in what they do, they have no liberty, they
cannot pursue their happiness. Your "right" to happiness at their expense means that they become rightless serfs, i.e.,
your slaves. Your right to anything at others' expense means that they become rightless.
That is why the U.S. system defines rights as it does, strictly as the rights to action. This was the approach that made
the U.S. the first truly free country in all world history—and, soon afterwards, as a result, the greatest country in
history, the richest and the most powerful. It became the most powerful because its view of rights made it the most
moral. It was the country of individualism and personal independence.
Today, however, we are seeing the rise of principled immorality in this country. We are seeing a total abandonment by
the intellectuals and the politicians of the moral principles on which the U.S. was founded. We are seeing the complete
destruction of the concept of rights. The original American idea has been virtually wiped out, ignored as if it had never
existed. The rule now is for politicians to ignore and violate men's actual rights, while arguing about a whole list of
rights never dreamed of in this country's founding documents—rights which require no earning, no effort, no action at
all on the part of the recipient.
You are entitled to something, the politicians say, simply because it exists and you want or need it—period. You are
entitled to be given it by the government. Where does the government get it from? What does the government have to
do to private citizens—to their individual rights—to their real rights—in order to carry out the promise of showering
free services on the people?
The answers are obvious. The newfangled rights wipe out real rights—and turn the people who actually create the
goods and services involved into servants of the state. The Russians tried this exact system for many decades.
Unfortunately, we have not learned from their experience. Yet the meaning of socialism is clearly evident in any field
at all—you don't need to think of health care as a special case; it is just as apparent if the government were to proclaim
a universal right to food, or to a vacation, or to a haircut. I mean: a right in the new sense: not that you are free to earn
these things by your own effort and trade, but that you have a moral claim to be given these things free of charge, with
no action on your part, simply as handouts from a benevolent government.
How would these alleged new rights be fulfilled? Take the simplest case: you are born with a moral right to hair care,
let us say, provided by a loving government free of charge to all who want or need it. What would happen under such a
moral theory?
Haircuts are free, like the air we breathe, so some people show up every day for an expensive new styling, the
government pays out more and more, barbers revel in their huge new incomes, and the profession starts to grow
ravenously, bald men start to come in droves for free hair implantations, a school of fancy, specialized eyebrow
pluckers develops—it's all free, the government pays. The dishonest barbers are having a field day, of course—but so
are the honest ones; they are working and spending like mad, trying to give every customer his heart's desire, which is a
millionaire's worth of special hair care and services—the government starts to scream, the budget is out of control.
Suddenly directives erupt: we must limit the number of barbers, we must limit the time spent on haircuts, we must limit
the permissible type of hair styles; bureaucrats begin to split hairs about how many hairs a barber should be allowed to
split. A new computerized office of records filled with inspectors and red tape shoots up; some barbers, it seems, are
still getting too rich, they must be getting more than their fair share of the national hair, so barbers have to start
applying for Certificates of Need in order to buy razors, while peer review boards are established to assess every
stylist's work, both the dishonest and the overly honest alike, to make sure that no one is too bad or too good or too
busy or too unbusy. Etc. In the end, there are lines of wretched customers waiting for their chance to be routinely
scalped by bored, hog-tied haircutters, some of whom remember dreamily the old days when somehow everything was
so much better.
Do you think the situation would be improved by having hair-care cooperatives organized by the government?—having
them engage in managed competition, managed by the government, in order to buy haircut insurance from companies
controlled by the government?
If this is what would happen under government-managed hair care, what else can possibly happen—it is already
starting to happen—under the idea of health care as a right? Health care in the modern world is a complex, scientific,
technological service. How can anybody be born with a right to such a thing?
Under the American system you have a right to health care if you can pay for it, i.e., if you can earn it by your own
action and effort. But nobody has the right to the services of any professional individual or group simply because he
wants them and desperately needs them. The very fact that he needs these services so desperately is the proof that he
had better respect the freedom, the integrity, and the rights of the people who provide them.
You have a right to work, not to rob others of the fruits of their work, not to turn others into sacrificial, rightless
animals laboring to fulfill your needs.
Some of you may ask here: But can people afford health care on their own? Even leaving aside the present government-inflated medical prices, the answer is: Certainly people can afford it. Where do you think the money is coming from
right now to pay for it all—where does the government get its fabled unlimited money? Government is not a productive
organization; it has no source of wealth other than confiscation of the citizens' wealth, through taxation, deficit
financing or the like.
But, you may say, isn't it the "rich" who are really paying the costs of medical care now—the rich, not the broad bulk of
the people? As has been proved time and again, there are not enough rich anywhere to make a dent in the government's
costs; it is the vast middle class in the U.S. that is the only source of the kind of money that national programs like
government health care require. A simple example of this is the fact that all of these new programs rest squarely on the
backs not of Big Business, but of small businessmen who are struggling in today's economy merely to stay alive and in
existence. Under any socialized regime, it is the "little people" who do most of the paying for it—under the senseless
pretext that "the people" can't afford such and such, so the government must take over. If the people of a country truly
couldn't afford a certain service—as e.g. in Somalia—neither, for that very reason, could any government in that
country afford it, either.
Some people can't afford medical care in the U.S. But they are necessarily a small minority in a free or even semi-free
country. If they were the majority, the country would be an utter bankrupt and could not even think of a national
medical program. As to this small minority, in a free country they have to rely solely on private, voluntary charity. Yes,
charity, the kindness of the doctors or of the better off—charity, not right, i.e. not their right to the lives or work of
others. And such charity, I may say, was always forthcoming in the past in America. The advocates of Medicaid and
Medicare under LBJ did not claim that the poor or old in the '60's got bad care; they claimed that it was an affront for
anyone to have to depend on charity.
But the fact is: You don't abolish charity by calling it something else. If a person is getting health care for nothing,
simply because he is breathing, he is still getting charity, whether or not any politician, lobbyist or activist calls it a
"right." To call it a Right when the recipient did not earn it is merely to compound the evil. It is charity still—though
now extorted by criminal tactics of force, while hiding under a dishonest name.
As with any good or service that is provided by some specific group ... if you try to make its possession by all a right,
you thereby enslave the providers of the service, wreck the service, and end up depriving the very consumers you are
supposed to be helping. To call "medical care" a right will merely enslave the doctors and thus destroy the quality of
medical care in this country, as socialized medicine has done around the world, wherever it has been tried, including
Canada (I was born in Canada and I know a bit about that system firsthand).
I would like to clarify the point about socialized medicine enslaving the doctors. Let me quote here from an article I
wrote a few years ago: "Medicine: The Death of a Profession."
In medicine, above all, the mind must be left free. Medical treatment involves countless variables and options that must be
taken into account, weighed, and summed up by the doctor's mind and subconscious. Your life depends on the private, inner
essence of the doctor's function: it depends on the input that enters his brain, and on the processing such input receives from
him. What is being thrust now into the equation? It is not only objective medical facts any longer. Today, in one form or another,
the following also has to enter that brain: 'The DRG administrator [in effect, the hospital or HMO man trying to control costs] will
raise hell if I operate, but the malpractice attorney will have a field day if I don't—and my rival down the street, who heads the
local PRO [peer review organization], favors a CAT scan in these cases, I can't afford to antagonize him, but the CON boys
disagree and they won't authorize a CAT scanner for our hospital—and besides the FDA prohibits the drug I should be
prescribing, even though it is widely used in Europe, and the IRS might not allow the patient a tax deduction for it, anyhow, and I
can't get a specialist's advice because the latest Medicare rules prohibit a consultation with this diagnosis, and maybe I shouldn't
even take this patient, he's so sick—after all, some doctors are manipulating their slate of patients, accept only the healthiest
ones, so their average costs are coming in lower than mine, and it looks bad for my staff privileges.' Would you like your case to
be treated this way—by a doctor who takes into account your objective medical needs and the contradictory, unintelligible
demands of some ninety different state and Federal government agencies? If you were a doctor could you comply with all of it?
Could you plan or work around or deal with the unknowable? But how could you not? Those agencies are real and they are
rapidly gaining total power over you and your mind and your patients.
In this kind of nightmare world, if and when it takes hold fully, thought is helpless; no one can decide by rational means what to
do. A doctor either obeys the loudest authority—or he tries to sneak by unnoticed, bootlegging some good health care
occasionally or, as so many are doing now, he simply gives up and quits the field. (The Voice of Reason: Essays in Objectivist
Thought, NAL Books, 1988, pp. 306-307)
Any mandatory and comprehensive plan will finish off quality medicine in this country—because it will finish off the
medical profession. It will deliver doctors bound hands and feet to the mercies of the bureaucracy.
The only hope—for the doctors, for their patients, for all of us—is for the doctors to assert a moral principle. I mean: to
assert their own personal individual rights—their real rights in this issue—their right to their lives, their liberty, their
property, their pursuit of happiness. The Declaration of Independence applies to the medical profession too. We must
reject the idea that doctors are slaves destined to serve others at the behest of the state.
Doctors, Ayn Rand wrote, are not servants of their patients. They are "traders, like everyone else in a free society, and
they should bear that title proudly, considering the crucial importance of the services they offer."
The battle against socialized medicine depends on the doctors speaking out against it—not only on practical grounds,
but, first of all, on moral grounds. The doctors must defend themselves and their own interests as a matter of solemn
justice, upholding a moral principle, the first moral principle: self-preservation.
Source Citation:
Peikoff, Leonard. "Health Care Is Not a Right." Rpt. in Universal Health Care. Ed. Susan C. Hunnicutt. Detroit: Greenhaven Press,
2010. Opposing Viewpoints. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Health Insurance
The personal and social costs of going without health insurance can be staggering. Serious illness and catastrophic
accidents bankrupt thousands of families each year. The uncompensated medical costs incurred by uninsured people in
the United States total more than $56 billion, according to the Kaiser Family Foundation. The cost of this
uncompensated care is borne by insured individuals, in the form of higher health care costs and insurance premiums, and
by the government. Most people agree, therefore, that the health care and insurance industry needs reform. What
nobody can agree upon, however, is who should provide the insurance, who should pay for it, and whether such
insurance should be compulsory. The government, private insurers, and employers who often bear the health care costs
for their employees all have something at stake in the debate over the uninsured, and they have all played a role in the
changing healthcare landscape of the twentieth and twenty-first centuries.
The Idea of Universal Health Insurance
Until the development of the health insurance industry in the twentieth century, all Americans—with the exception of
some veterans—could be counted among the uninsured. When people got sick or were injured, they were expected to
pay for medical care themselves. In the early twentieth century, however, economists began to notice that health care
was not just a personal matter. They saw that it was intricately connected to poverty and other indicators of social and
economic inequality, and came up with a plan. Originally called “sickness insurance,” this early version of health
insurance for the needy formed the core of a bill that aimed to provide universal medical coverage and went before
Congress in 1915.
World War I, however, was raging, and so was anti-German sentiment in the United States. Because Germany boasted
a successful universal health plan, the opponents of the U.S. plan were able to link the reform to the country’s enemy
(the United States declared war on Germany in 1917), and the plan was defeated. The notion of universal health care
was not seriously advanced again until the 1930s, when President Franklin D. Roosevelt (1882–1945) considered
making it part of a package of socially progressive legislation called the New Deal that aimed to provide social and
economic relief during the Great Depression. While the New Deal established Social Security, a publicly funded
retirement plan, in 1935, the health-care component faced steep opposition from the American Medical Association,
which did not want the government involved in its business.
Two important developments did occur in the 1930s, however. The first private health insurer, Blue Cross, was
founded in 1929, and in 1935 the government established a health system, the Agricultural Workers Health
Associations, to provide preventive medicine and acute care to migrant workers who had left the “dust bowl” of the
Midwest to seek farm work in the West. Nevertheless, government-funded health care for those who cannot afford it
remained a fantasy of political progressives until the next wave of reform, in the 1960s.
While a viable plan for truly universal health care—coverage for everyone—was still decades in the future, two major
segments of the population became the beneficiaries of publicly funded health care in 1965. Both were populations
that, because of the increasing cost of medical services in general, were most likely to suffer without insurance: the
elderly and the very poor. Medicare, which provides coverage to people over 65 and to those with disabilities, and
Medicaid, which covers people with very limited means, were both established as part of President Lyndon Johnson’s
(1908–1973) Social Security Act of 1965.
While no public health insurance system existed in the United States for most people, employers increasingly
sponsored private insurance plans for their employees. Typically, an employer purchased an insurance policy from a
private insurance company and paid all or most of the cost of insurance coverage for employees and their families. By
2000, according to census figures analyzed by the Economic Policy Institute, more than 68 percent of Americans had
health coverage under an employer-sponsored plan. However, because of rising health care costs and a severe global
economic recession that led to high unemployment in the United States and elsewhere, that percentage dwindled
quickly in the first decade of the twenty-first century. By 2009, just under 59 percent of Americans had health care
coverage through an employer.
Spiraling Costs of Health Care
By U.S. Census Bureau estimates, nearly fifty million people had no health insurance at all in 2008. There are various
reasons that so many people are uninsured. For those not covered by employer-sponsored plans, the cost of purchasing
private individual health insurance can be prohibitive. Insurance companies were free to deny policies to people with
pre-existing health conditions or a history of poor health. As healthcare costs increased dramatically in the late
twentieth and early twenty-first century, insurance premiums spiked; this led many employers to reduce their
contributions to employee insurance, raising costs for employees and driving many to opt out of employer-based plans.
In the decades after the establishment of Medicare and Medicaid, several attempts were made to institute some form of
universal health care. In 1976, President Jimmy Carter (1924–) campaigned unsuccessfully for national health
insurance, and in 1993 President Bill Clinton (1946–) introduced an eight-hundred-page health plan that was decisively
defeated in Congress. Historians cite partisan politics, the secrecy involved in drafting the plan, and fierce lobbying by
the pharmaceutical and insurance industries as reasons the Clinton plan failed. Many of these forces were still at work
nearly two decades later, when a plan to cover the uninsured finally made it through Congress.
“Obamacare” and the Debate over the Uninsured
Support for President Barack Obama’s (1961–) health care plan, first introduced in Congress in 2009 and promising
coverage of 36 million previously uninsured Americans, was split cleanly along party lines. The fierceness of the
opposition from the right stemmed primarily from the “public option”: a government-funded insurance provider that
would compete with private companies; this option was ultimately dropped. The loudest objections accused the plan of
being “socialist,” because it called for using tax revenue to fund health insurance for those who cannot afford it. Lastly,
though such worries were proved unfounded by the final bill, opponents of the plan worried that illegal immigrants,
who make up millions of the nation’s uninsured, would now be covered at taxpayer expense.
The biggest challenge to the Obama plan, however, began after it was made law in March 2010. Over the next year,
more than twenty legal challenges by individual states were filed in federal courts; while the earliest cases were either
thrown out or unsuccessful, a federal judge in Richmond, Virginia ruled in December 2010 that it was unconstitutional
to force individuals to buy anything at all, including health insurance, and the following month a judge in Florida did
the same.
Despite the ongoing legal challenges, Obama’s plan moved forward in addressing the healthcare needs of the
uninsured. The bill included provisions that made it unlawful for insurers to drop people who develop serious illnesses,
and to refuse to insure them in the first place. It also allowed people under the age of twenty-six to remain covered by
their parents’ insurance, reaching a crucial demographic of people who are likely to go without insurance because they
cannot afford it or have not yet found a job.
The Future of the Uninsured
The United States spends more money per capita on health care than any other industrialized nation; total spending reached an estimated $2.3 trillion in 2008. Yet its healthcare outcomes lag behind those of comparable countries on health indicators such as life expectancy and infant mortality. Much of the world's
cutting-edge research in genetics, pharmaceuticals, and technology occurs in the United States, yet it is the only
wealthy industrialized nation that does not offer some form of health care to all of its citizens. Ensuring that illness and
accident do not signal financial ruin, using preventive care to create a healthier population, and avoiding a politically
unpopular burden on the taxpayers will be a puzzle for yet another generation of policymakers.
Source Citation:
"Health Insurance." Opposing Viewpoints Online Collection. Gale, Cengage Learning, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
A Government Mandate to Buy Health Insurance Is Constitutional
Timothy Noah is a senior writer for Slate and a contributing writer to the Washington Monthly.
Barack Obama spent much of the 2008 presidential-primary season arguing with [fellow Democratic candidate] Hillary
Clinton about whether health reform should include a so-called "individual mandate" requiring all Americans to
purchase health insurance. Clinton argued that it should. Obama argued that it shouldn't (even though his own plan
called for a more limited individual mandate requiring parents to purchase health insurance for their children). One of
Obama's central arguments was that enforcing such a mandate would be impractical. "You can mandate it," Obama
said, "but there still will be people who can't afford it. And if they can't afford it, what are you going to fine them?"
The Insurance Mandate
At the time I thought Obama had the better argument, not just on practicality but also with respect to the Constitution.
"If you want to drive a car," I wrote,
it's accepted that you have to buy private auto insurance. But that's conditional on enjoying the societal privilege of driving a
car; you can avoid the requirement by choosing not to drive one. A mandate to buy private health insurance, however, would be
conditional on ... being alive. I can't think of another instance in which the government says outright, "You must buy this or
that," independent of any special privilege or subsidy it may bestow on you.
Nearly two years later, Obama has made peace with the individual mandate, which is included in the bills that cleared
three House committees and one Senate committee. The House bill imposes on anyone who neglects to purchase health
insurance for himself or his family a 2.5 percent tax on modified adjusted gross income. The Senate health committee
bill imposes a minimum penalty of $750. Yet I've continued to wonder whether the individual mandate is
constitutional.
Should health reform pass, it seems a dead certainty that conservatives will go to court to challenge the individual
mandate. A preview of their arguments can be found on the Web site of the conservative Federalist Society in the
paper, "Constitutional Implications of an 'Individual Mandate' in Health Care Reform" by Peter Urbanowicz, a former
deputy general counsel at the Health and Human Services [HHS] department, and Dennis Smith, a former director of
HHS's Center for Medicaid and State Operations. One of these turns out to resemble my earlier argument:
Nearly every state now has a law mandating auto insurance for all drivers. But the primary purpose of the auto insurance
mandate was to provide financial protection for people that a driver may harm, and not necessarily for the driver himself. And
the auto insurance mandate is a quid pro quo for having the state issuing a privilege: in this case a driver's license.
The Commerce Clause
Since I first wrote about this, it's been pointed out to me that the comparison with auto insurance is not a legal argument
at all. (I am not a lawyer.) The legal question isn't whether it would be unusual for the government to compel people to
buy health insurance. It's whether it would square with the Constitution. Mark Hall, a professor of law at Wake Forest
University, argues that it would, in part based on the Commerce Clause, which since the New Deal has permitted the
federal government to expand its power in various ways by defining various activities as "interstate commerce."
Although health delivery is often local, Hall writes, "most medical supplies, drugs and equipment are shipped in
interstate commerce." More to the point, "most health insurance is sold through interstate companies."
Yes, counter Urbanowicz and Smith, but "it is a different matter to find a basis for imposing Commerce Clause-related
regulation on an individual who chooses not to undertake a commercial transaction." Does the Commerce Clause cover
your refusal to engage in interstate commerce?
Well, yes, Hall in effect answers, because when a person declines to purchase health insurance, that affects interstate
commerce, too, by driving up health insurance premiums for everyone else:
Covering more people is expected to reduce the price of insurance by addressing free-rider and adverse selection problems.
Free riding includes relying on emergency care and other services without paying for all the costs, and forcing providers to shift
those costs onto people with insurance. Adverse selection is the tendency to wait to purchase until a person expects to need
health care, thereby keeping out of the insurance pool a full cross section of both low and higher cost subscribers. Covering
more people also could reduce premiums by enhancing economies of scale in pooling of risk and managing medical costs.
In essence, the Commerce Clause enables the economic arguments for the individual mandate to become legal
arguments as well.
A Taking or a Tax
Urbanowicz and Smith next reach for that perennial conservative favorite, the Fifth Amendment's Takings Clause,
which says the government may not take property from a citizen without just compensation. "Requiring a citizen to
devote a percent of his or her income for a purpose for which he or she otherwise might not choose based on individual
circumstances," Urbanowicz and Smith write, "could be considered an arbitrary and capricious 'taking.' ..."
But according to Akhil Reed Amar, who teaches constitutional law at Yale, the case law does not support Urbanowicz
and Smith. "A taking is paradigmatically singling out an individual," Amar explains. The individual mandate (despite
its name) applies to everybody. Also, "takings are paradigmatically about real property. They're about things." The
individual mandate requires citizens to fork over not their houses or their automobiles but their money. Finally, Amar
points out, the individual mandate does not result in the state taking something without providing compensation. The
health insurance that citizens must purchase is compensation. In exchange for paying a premium, the insurer pledges (at
least in theory) to pay some or all doctor and hospital bills should the need arise for medical treatment. The individual
mandate isn't a taking, Amar argues. It's a tax.
But how can it be a tax if the money is turned over not to the government but to a private insurance company? William
Treanor, dean of Fordham Law School and an expert on takings, repeated much of Amar's analysis to me (like Amar,
he thinks a takings-based argument would never get anywhere), but instead of a tax he compared the individual
mandate to the federal law mandating a minimum wage. Congress passes a law that says employers need to pay a
certain minimum amount not to the government but to any person they hire. "The beneficiaries of that are private
actors," Treanor explained. But it's allowed under the Commerce Clause. "Minimum wage law is constitutional." So,
too, then, is the individual mandate.
Source Citation:
Noah, Timothy. "A Government Mandate to Buy Health Insurance Is Constitutional." Health Care. Ed. Noël Merino. Detroit:
Greenhaven Press, 2011. Current Controversies. Rpt. from "Can Obama Make You Buy Health Insurance?" Slate.com. 2009. Gale
Opposing Viewpoints In Context. Web. 10 Apr. 2012.
A Government Mandate to Buy Health Insurance Is a Violation of Liberty
Robert Moffit is director of the Heritage Foundation's Center for Health Policy Studies and a former senior official at
the U.S. Department of Health and Human Services.
In his address to Congress, President [Barack] Obama made clear that he and his allies know how to spend your health
care money better than you do. It's a matter, you see, of "shared responsibility": You share your dollars with the feds,
and the feds are responsible for making your decisions. In the health care bill currently [October 2009] before the
House (H.R. 3200), there is even a "Health Choices Commissioner," to be appointed by the president, who will
rigorously define your choices.
The Individual Mandate
On "shared responsibility," the president brooks no dissent. "Unless everybody does their part, many of the insurance
reforms we seek—especially requiring insurance companies to cover preexisting conditions—just can't be achieved,"
he said. "That's why under my plan, individuals will be required to carry basic health insurance." This requirement is
known as the "individual mandate."
The president's proposal is historic—though not in a good way. Never before has Congress forced Americans to buy a
private good or service. In fact, for those with a traditional understanding of the Constitution as a charter of liberty (as
opposed to the "living" version), the list of Congress's powers in Article I, Section 8, grants it no authority to require
any such thing.
The Obama administration, along with its allies in Congress and throughout health policy wonkdom, would have you
believe that, on the question of a mandate, everyone of sound reputation is in agreement. That's not true; there is no
consensus on this issue, any more than there is a consensus on the "public option."
Penalties for Noncompliance
For one thing, mandates are meaningless without penalties for noncompliance, and polling data suggests that
Americans might accept an individual mandate, but not the penalties. This became a problem for Hillary Clinton in the
2008 presidential primaries, when Obama strongly disagreed with her proposal to impose an individual mandate—
saying, among other things, that it was unenforceable (he cited noncompliance with auto insurance laws as evidence).
Clinton responded by suggesting such measures as tax penalties and wage garnishments for health insurance scofflaws,
which Obama knew would be unpopular with voters.
Now that Obama is president, he no longer objects to such penalties. In the House bill, everyone would be required to
have an "acceptable" health plan (as defined by law) or pay a penalty of 2.5 percent of his adjusted gross income. This
penalty is expected to bring in $29 billion over a ten-year period. In the Senate Health, Education, Labor, & Pensions
Committee bill, the penalty is set at 50 percent of the price of the lowest-cost health plan participating in the bill's state-run health insurance exchanges. That's expected to generate $36 billion over ten years.
Meanwhile, Sen. Max Baucus (D., Mont.) has unveiled a Senate Finance Committee draft that also has an individual
mandate. It would levy a penalty of up to $3,800 on families for what the president calls "irresponsible behavior," by
which he means health care choices of which he disapproves. In Obama's usage, "personal responsibility" is selective;
it doesn't extend to the question of taking responsibility for one's health care. That's the government's job. Of course,
federal officials will have outside help in deciding for the rest of us. Powerful special interest groups and health
industry lobbyists will do all they can to make sure that their favored medical treatments, procedures, drugs, and
devices are part of the "bare minimum" that every plan must include.
A Hidden Tax
Despite all this, the president is right on one key point: The current system makes those with health coverage pay for
those without. And those who are without health coverage often get their care in the most expensive place possible: the
hospital emergency room. The president correctly calls this a hidden tax. Under existing federal law, hospitals are
required to provide treatment to everyone who comes to their emergency room, regardless of his ability to pay. There is
no serious legislation under consideration that would change that.
About three-quarters of this uncompensated care, adding up to tens of billions of dollars annually, is financed, in some
way, by the taxpayers. (Health care providers absorb some of these costs by delivering charity care.) The extent and
degree of this cost shifting varies from state to state. The challenge for conservatives is to address the situation in a
practical way that does not reward personal irresponsibility—the free-rider problem—or curtail freedom. That means
taking the principle of "personal responsibility" seriously by making sure that personal choices are clearly defined and
consequential.
The Mandate in Massachusetts
The experience of Massachusetts shows how hard it can be to pull off this balancing act. In 2005, as the state faced
$1.3 billion per year in taxpayer-financed uncompensated health care costs, Republican governor Mitt Romney came
up with a plan. In sum, his position was that people should exercise their responsibility by choosing their own health
insurance and paying their own health care bills. The state would provide direct assistance to help low-income folks
buy insurance, drawing heavily from existing government funding of health care.
Under the Romney proposal, those who did not wish to buy health insurance would be allowed to self-insure, but they
would have to post a $10,000 bond to pay their health care bills, such as hospital emergency care, instead of shifting
them onto the taxpayer. Anyone who refused to do so would lose an exemption on his state income tax.
Romney's proposal, strictly speaking, was not a requirement to purchase health insurance; it was a requirement to pay
one's health bills, through insurance or predetermined direct payment, thus reducing the burden on taxpayers.
Nonetheless, it satisfied nobody. Critics on the Right, especially libertarians, said it amounted to a health insurance
mandate, while those on the Left said it was a weak and unnecessary substitute for the "real thing," which the
Massachusetts legislature enacted in 2006: a straight mandate for individuals to buy health insurance or pay a fine.
That mandate fell short of universal coverage. Some 60,000 people, roughly 1 percent of the state's population, were
initially exempted, as state officials—fearing a political backlash from labor officials, among others—refrained from
imposing the mandate on some low-income people they believed would have trouble paying for insurance. So, while
the state's liberal legislature allowed the government to set generous required benefit levels, politicians continued to
steer money to favored hospitals, aggravating the state's health care cost crisis. In other words, they deliberately
weakened a key element of Romney's proposed reform, which was to redirect existing government funding from
institutions to individuals and families. The Massachusetts experiment reminds us that in health care policy, precision
in drafting and careful implementation count as much as the broad outlines of legislation.
Freedom and Responsibility
In Massachusetts or Washington, no individual mandate is going to achieve the goal of universal coverage. In the cases
of similar mandates—auto insurance, income tax filing, military draft registration—compliance has invariably fallen
short of universal. The better course of action is to be serious about both personal freedom and personal responsibility.
They go together; you cannot have one without the other. And under the House and Senate bills, we would have
neither.
Requiring everyone to buy government-specified health insurance, whether they need it or not, is an unacceptable
violation of personal liberty. It is a way of taxing healthy people without calling it a tax. Since that is an irresistible
temptation to politicians, the list of required benefits would be certain to keep expanding.
The choice between freedom and responsibility, as the president and his congressional allies portray it, is a false choice.
We can and should have both.
Source Citation:
Moffit, Robert. "A Government Mandate to Buy Health Insurance Is a Violation of Liberty." Health Care. Ed. Noël Merino. Detroit:
Greenhaven Press, 2011. Current Controversies. Rpt. from "At What Cost to Freedom?" National Review Online. 2009. Gale
Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Junk Food
Junk food is generally defined as food items that have little nutritional value beyond providing fat or sugar. Junk food
often contains highly processed ingredients such as high fructose corn syrup, or excessive amounts of potentially
harmful ingredients like salt. Junk food is generally eaten because of its flavor, rather than to meet a person’s daily
requirements of vitamins, protein, fiber, or other essentials. As junk food has replaced more nutritious forms of food in
people’s diets—especially the diets of schoolchildren—the consequences, according to many health-care professionals,
have been devastating. This has led to a grass-roots movement to remove junk food options from schools and other
venues aimed at young people.
A Junk Food Invasion
According to an online poll at About.com, more than 40 percent of American parents state that their children do not eat
breakfast on a regular basis. School lunches, then, are an even more essential component in the diets of most students.
In fact, American students consume an estimated 20 to 50 percent of their total daily calories while at school.
Throughout the 1990s and 2000s, however, school districts began to turn away from the traditional “hot lunch” menu
items cooked on-site by cafeteria staff. At least part of the reason was clear: these meals were expensive to prepare.
This problem was compounded by the fact that almost twenty million American students were eligible, based on their
family income, for free or reduced-price lunches in 2009; the amount reimbursed by the government for each meal was
less than the actual cost to prepare each meal, meaning schools were losing more money each year.
At the same time, snack food producers such as Coca-Cola and Frito-Lay paid school districts to allow them to market
their food products inside the schools, generally through vending machines. These became known as “competitive
foods,” because they competed with traditional cafeteria items when students chose what to eat each day. Given the
choice between a prepared cafeteria meal and a bag of chips with a soda, many students opted for quick and easy junk
food. By 2009, about 98 percent of all American high schools contained vending machines.
Contracts with snack manufacturers allowed schools to earn extra money by offering junk food to students, and to save
money by making fewer meals. The profits were not small: a typical school district in Florida, for example, was given
nearly half a million dollars to sign an exclusive beverage contract with Pepsi. While such deals earned money for the
schools, they also worsened the growing obesity problem among America’s young people. According to the Centers for
Disease Control (CDC), childhood obesity tripled between 1980 and 2008. In the U.S., about one in five school-aged
children are classified as obese. Childhood obesity increases the risk of immediate health problems, such as sleep apnea
and joint ailments, and also increases the chances that the child will grow up to be obese and suffer from heart disease,
diabetes, or other health problems.
Removing Unhealthy Choices for Kids
Beginning with the 2007 school year, New Jersey became the first state to officially ban junk food from its public
schools. Sodas and candies listing sugar as the first ingredient were banned completely from school grounds
during normal class hours. The law did not eliminate the presence of vending machines in schools; however, available
snacks could not contain more than eight grams of fat per serving. In addition, New Jersey school cafeterias were
required to limit the amount of fat in prepared meals. Many other states have since followed suit, and the administration
of President Barack Obama has suggested that a nationwide ban on junk food in schools may be on the horizon.
In 2009, the Institute of Medicine updated its recommended nutrition requirements for healthy children. These
guidelines help shape the menus of school lunch programs nationwide. Among the changes were an emphasis on whole
grains and a limit on starch-heavy vegetables like potatoes, to be replaced by leafy vegetables and legumes. The new
guidelines also called for an overall increase in consumption of fruits and vegetables to double the previous amount.
In November 2010, lawmakers in San Francisco voted to ban the inclusion of toy giveaways with unhealthy meals. The
law targets fast-food chains such as McDonald’s and Burger King, which commonly include toys or prizes with their
children’s meals. The toy ban is designed to curb childhood obesity in the area by removing added incentives to
purchase high-fat, high-calorie foods. Restaurants would still be allowed to offer toys with healthier meals that met
certain guidelines regarding fat, calorie, and sodium content.
An Opposing View of Junk Food Restrictions
The San Francisco law banning toys in unhealthy meals for children has divided the public. Some feel that banning the
toys represents a restriction on freedom of choice; in other words, individuals should be allowed to choose unhealthy
food options if they wish. However, parents are still able to purchase unhealthy alternatives for their children—without
the toys—if they desire.
The issue of freedom is also a factor in the criticism of junk-food restrictions in schools. Many students believe that
they should be allowed to choose their own foods. By eliminating the opportunity to make good or bad choices, critics
argue, students are not learning to eat healthy—they are simply being temporarily kept away from unhealthy foods. In
addition, some local school boards have rejected the notion that state or federal government can control food options
within the local districts; according to these critics, the freedom and responsibility to decide school menus rests with
the community.
Another potential problem faced by health advocates involves the definition of “junk food.” Increasingly, foods once
considered to be hearty and appropriate for students—such as pizza—are being reclassified as unhealthy. A ban on junk
food can affect not just the “competitive foods” of contracted vendors, but also the food produced in a school’s
cafeteria. Some schools have already responded by creating healthier alternatives to longtime favorites, such as pizza
made with whole-grain crust and reduced-fat cheese. These healthier alternatives often cost more to produce, which
could mean a necessary increase in federal funding to maintain higher quality food and keep the “junk” out of schools.
Source Citation:
"Junk Food." Opposing Viewpoints Online Collection. Gale, Cengage Learning, 2010. Gale Opposing Viewpoints In
Context. Web. 10 Apr. 2012.
Junk Food Should Not Be Banned in Schools
Margaret Johnson teaches English and French at West Las Vegas High School in Las Vegas, New Mexico. She also
has taught college English and worked in the Army Signal Corps in Germany.
Freedom of choice should be extended to students' school dining options. Providing plentiful choices—including the option to
select foods that are less healthy—can keep schools from feeling institutional and can help enhance students' feelings of
personal responsibility and satisfaction. Students who do not care for standard school menu options, or who cannot eat these
meals because of dietary restrictions, often go without eating altogether. Providing attractive alternatives can help ensure that
students maintain adequate caloric intake, which will in turn help enhance their attention and performance in class. In addition,
sales of junk food can provide substantial economic benefits for schools.
Yes, junk food should be sold in schools—along with other food. Students will buy anything that costs under a dollar,
is portable, flavorful, visually appealing, and gives them a quick pick-me-up. They like traditional candy bars, nutrition
bars, pickles, lemons, sodas, fruit drinks, and water. I'm sure milk would be a top seller. Above all, students like
freedom of choice.
Students often do not recognize cafeteria fare as food. The nutritionally correct meal in the garbage can has no
nutritional value. Our politically correct cafeteria offers a wide variety of meals, all as well prepared as regulations and
mass production allow. The bread is made on site.
But some students will not eat tomatoes, meats, spices, or other ingredients. Others cannot deal with the noise, long
lines, and short lunch periods. Some don't have time to get seconds.
Our schools are restrictive enough as it is. They do not need to resemble prisons any more than they do already.
Junk food provides quick energy, substitutes for missed meals, and supplements inadequate meals.
Medical and Economic Arguments
My learned colleague in biology says teens need munchies to keep them alert. I found this to be accurate when I
grounded classes for not disposing of their trash properly. Without snacks, those who weren't antsy were asleep. Most
were not focused.
I do not claim any medical expertise, but I am a teacher and a mother, and I have observed that caffeine is, for some, a
better drug for hyperactivity than what is sold at the pharmacy, and a cola can help the student whose pharmaceutical
wears off at noon. A school that bans colas and candy can cause a medical hardship. Also, denial, even for as short a
time as a school day, can cause bingeing.
On the economic side, junk food can provide a steady income for school organizations at a better price and profit
margin than the fund-raiser companies offer. Recycling cans is a profitable byproduct.
I do worry about where students get so much disposable income, but that is a matter for their parents to monitor.
Yes, we should teach and model good eating habits. Yes, we have health problems brought on by poor nutrition. But
no, we should not ban junk food. Our schools are restrictive enough as it is. They do not need to resemble prisons any
more than they do already.
Source Citation:
Johnson, Margaret. "Junk Food Should Not Be Banned in Schools." Should Junk Food Be Sold in Schools? Ed. Norah Piehl. Detroit:
Greenhaven Press, 2011. At Issue. Rpt. from "Should Schools Allow the Sale of Junk Food?" NEA Today (Mar. 2002). Gale
Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Junk Food Should Be Banned in Schools
Will Dunham is a staff writer for Reuters.
Current guidelines for foods sold alongside official school meals have been in place since the 1970s, but they need to be updated.
An Institute of Medicine expert panel recommends that lawmakers update guidelines to limit foods to specific whole grains, low-fat dairy, and fruit and vegetable options. In addition, schools should no longer sell sodas or other drinks with added sugar, and
they also should prohibit the sale of caffeinated beverages. Abiding by these recommendations may help curb the rising rates of
obesity and related health problems among schoolchildren. Because schoolchildren consume a high percentage of their daily
caloric intake during the school day, they should have the highest-quality nutritional choices available to them at these times.
Sugary drinks, fatty chips and gooey snack cakes should be banned from U.S. schools in the face of rising childhood
obesity fueled by those junk foods, an expert panel said on Wednesday in a report requested by Congress.
The Institute of Medicine panel proposed nutritional standards more restrictive than current government rules for foods
and drinks sold outside regular meal programs in cafeterias, vending machines and school stores in elementary, middle
and high schools.
They promote fruits, vegetables, whole grains and nonfat or low-fat dairy products and seek limits on calories,
saturated fat, salt and sugar. The panel opposed caffeinated products due to possible harmful effects like headaches and
moodiness.
The proposals would banish most potato and corn chips, candies, cheese curls, snack cakes such as Twinkies, "sports
drinks" such as Gatorade, sugary sodas and iced teas and punches made with minimal fruit juice.
A 15-member panel headed by Dr. Virginia Stallings of Children's Hospital of Philadelphia crafted standards applying
to items not part of federally sponsored meal programs, which already meet some nutrition guidelines. They do not
restrict bagged lunches or snacks children bring to school.
"Because foods and beverages available on the school campus also make up a significant proportion of the daily calorie
intake, they should contribute to a healthful diet. And school campuses should be an overall healthy eating
environment," Stallings told reporters.
The Institute of Medicine provides advice on health issues to U.S. policymakers. These recommendations came at the
request of Congress.
The American Beverage Association trade group said the industry already was changing the type of products available
in schools to reduce calories and portion size, and had agreed to voluntary guidelines on items sold in schools.
Rising Obesity
Consumer advocates called the proposals vastly superior to existing Agriculture Department standards dating to the
1970s for foods sold alongside official school meals, and asked Congress to embrace them.
"They're recommending very strongly that schools no longer sell junk food and sugary drinks, and that none of the
foods sold undermine children's diet and health. And that's really important these days because of the rising obesity
rates," said Margo Wootan of the Center for Science in the Public Interest advocacy group.
Sen. Tom Harkin, an Iowa Democrat sponsoring a bill to toughen the existing government rules, said unenforceable
voluntary guidelines by industry are not enough.
The panel proposed two categories of foods and beverages that can be sold in schools based on grade level.
One category should be allowed at all grade levels during school and after-school activities and should provide at least
one serving of fruits, vegetables, whole grains, or nonfat or low-fat dairy.
Examples include whole fruits, raisins, carrot sticks, whole-grain cereals, some multi-grain tortilla chips, some granola
bars, some nonfat yogurt, plain water, skim and 1 percent fat milk, soy drinks and 100 percent fruit and vegetable
juices.
A second category should be available only to high school students after regular school hours, including baked potato
chips, whole-wheat crackers, graham crackers, pretzels, caffeine-free diet soda and seltzer water.
Source Citation:
Dunham, Will, and Reuters. "Junk Food Should Be Banned in Schools." Should Junk Food Be Sold in Schools? Ed. Norah Piehl.
Detroit: Greenhaven Press, 2011. At Issue. Rpt. from "Expert Panel Urges Junk Food Ban in Schools." 2007. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Genetically Modified Food
Since the 1930s, several decades after the Austrian monk Gregor Mendel discovered genes—small hereditary units in
DNA by which parents pass along traits to their offspring—people have altered crops or animals by crossbreeding.
Breeders select a certain trait, such as size or color, and, by carefully choosing the parent plants or animals, come up
with a new variety of that organism. So tomatoes may be made plumper, grapes seedless, or oranges juicier. All of
these changes are considered natural, but the process can take a long time. Often many seasons pass before growers
achieve the results they want.
For animals these modifications can take even longer. If breeders want a heftier steer, they select parents who are larger
and meatier. Next they must wait for the calf to be born and grow up. Then they match it with another huge bull or cow
and again wait until the calf grows old enough to have babies. If breeders are working toward a specific trait, it may
take generations of crossbreeding to attain the desired results.
A Faster Method
Rather than waiting for several growing seasons or generations to produce the desired offspring, scientists can reach the
final outcome more quickly by removing or adding certain genes. Altering genes of plants or animals in this way
produces genetically modified (GM) foods or genetically modified organisms (GMOs). These new organisms are also
called genetically engineered (GE) or living modified organisms (LMOs).
Genetic modification can be done in two basic ways. In cisgenesis, scientists transfer genes from the same plant or
animal, whereas in transgenesis they use genes from a different species. To get the genes to attach themselves to an
existing organism, scientists may use a virus or may shoot the genes into the plant or animal with a special gun or insert
them with a syringe.
Initial Experiments
One of the reasons scientists worked on developing new strains of genetically modified (GM) foods was to combat
world hunger. Yet they first modified tomatoes so they would not turn soft and rotten while they sat on grocery store
shelves. Rather than helping the poor and starving, this genetic modification mainly had a commercial benefit. The first
GM tomato had competition, though, from a naturally grown tomato that had also been bred for long shelf life, so it did
not bring in the profit that was expected.
Next, scientists concentrated on making GM crops that would grow well. The most common modification adds a gene
from the bacterium Bacillus thuringiensis (Bt). As the GM plant grows, it develops a poison that kills pests such as
bollworms and stem borers. Unfortunately, the toxin the plant produces can also harm insects other than the targeted pests. Initial
reports indicated that Bt crops also killed butterflies.
Uses for GM Technology
As of 2010, field trials had been completed for 130 crops. The main crops developed using GM technology included
rice, soybeans, wheat, corn, cotton, potatoes, and rapeseed. Few GM varieties had been developed for vegetables, nuts,
or fruits.
The Organisation for Economic Co-operation and Development (OECD) listed the traits that GM plant breeding
focuses on: "herbicide tolerance, pest resistance (including insect, virus, bacteria, fungi, and nematode resistance),
agronomic traits for improved yield or stress tolerance, and product quality characteristics." OECD also indicated that
another major use of biotechnology will be the "cloning of GM animals to produce pharmaceuticals, followed by
cloned breeding stock. The first commercial use of the latter technology for meat production could occur in non-OECD
countries, where public opposition to meat from cloned animals could be less important than in OECD countries." Later
GM technologies will be extended to fish, honeybees, and trees.
By the early 2000s, GM crops were being added to many processed, canned, and preserved foods as well as to some
vaccines, vitamins, and drugs. GMOs were also used for animal feed, ethanol for gasoline, glue, and in yeast-based
products such as bread and beer.
Advantages of GM Foods
With the world population increasing and the amount of land available for farming decreasing, many have touted
genetically modified (GM) food as the answer to combating world hunger. Plant genes can be altered to make crops
resistant to diseases or pests, or to improve their yield.
According to a study done by Nuffield Council on Bioethics, "The Use of Genetically Modified Crops in Developing
Countries," farmers who plant these crops gain many advantages. "Although GM crops primarily benefit large-scale
farmers, many small-scale farmers in China and South Africa have already successfully grown GM cotton. In China,
yields were estimated to have increased by 10% compared to non-GM crops, and the amount of pesticide used fell by
as much as 80%, leading to an increase in profits. The efficiency of agriculture has a major impact on the standard of
living in most developing countries." GM foods have many other benefits as well.
Ecological Benefits
GM crops are not only pest and disease resistant, but can be modified to grow under many different conditions
including cold, drought, or salinity. Food can grow in areas where it was not possible before, leading to a greater yield,
which in turn can help alleviate world hunger.
Another benefit is that plants can be altered to use less herbicide and fertilizer. This addresses one of the major
environmental problems connected with farming. As Arcadia Biosciences explains, "Agriculture is the second-leading
source of global greenhouse gas, and nitrogen fertilizer represents a significant cause of these emissions." Statistics
from the International Service for the Acquisition of Agri-biotech Application (ISAAA) show that from 1996 to 2006
pesticide use decreased by 224,000 tons. Thus, using GM plants can significantly reduce the ecological impact of
pesticides and fertilizers.
Additional benefits of GM plants include soil, water, and energy conservation. Although initial plantings produced less
food than traditional farming, future crops are being modified for greater yield.
Crop and Livestock Improvements
GM can also make crops tastier and add nutrients. For example, golden rice, a strain enriched with beta-carotene, can help prevent blindness. Rice with iron and other vitamins and minerals can enhance the diets of the poor.
Animal species developed through GM will also be healthier and hardier. Livestock can be genetically modified to
increase milk or egg production, or to be meatier. Ongoing developments include adding vaccines or other valuable
additives to milk.
Disadvantages of GM Foods
Though GM foods seem to be the ideal solution for increasing the world food supply and helping the poor, they have
stimulated a great deal of controversy. One of the foremost concerns is their long-term effects on health, on both people
and the environment.
Health Problems
Although many sources point to the fifteen or so years GM foods have been used to indicate that they are safe for
humans, many groups disagree. They say that sufficient testing has not been conducted. A study published in 2009 in
the International Journal of Biological Sciences revealed adverse effects of GM corn on rats’ livers and kidneys,
particularly male rats. The company that markets the corn, however, disputed the results and challenged the validity of
the experiment.
Scientific researcher Arpad Pusztai examined the studies that had been conducted on GM foods, many of which
indicated that mice developed health problems or even died during the experiments, but concluded that none of the
studies were done under rigorous enough conditions to be valid. Pusztai expressed concern about the lack of properly
conducted research in his article, "Genetically Modified Foods: Are They a Risk to Human/Animal Health?": "We need
more and better testing methods before making GM foods available for human consumption … Our present data base is
woefully inadequate. Moreover, the scientific quality of what has been published is, in most instances, not up to
expected standards." Conversely, Wu Yongning, a food safety specialist with the Chinese Center for Disease Control
and Prevention, believes studies have not proven that genetically modified food is harmful to human health. "I am not
ruling out all possible risks, but those risks of genetically-modified food are no greater than that of traditional ones,
given the heavy use of pesticide in growing traditional food," he said.
Many people, however, feel that GM food should not be put on the market until studies have shown without a doubt
that it is safe for human consumption. In addition to the unknown health risks, critics list other negative effects of using
GM foods. New crop species could contain allergens or toxins, or they might cause people’s bodies to resist antibiotics.
Environmental Concerns
The long-term impact of GM products on the environment is also unknown. A few difficulties have already developed.
The transfer of modified genes to other nearby plants can occur through cross-pollination. GM plants bred for
herbicide tolerance and hardiness have passed those genes on to weeds growing in the fields, making the weeds more
tenacious and resistant to herbicides, so additional weed killer must be applied. This same concern could
also apply to pests. Insects may soon evolve into stronger, more virulent species that are tolerant to the pesticide
embedded in GM plants.
Some people also decry the loss of biodiversity. As plants become more specialized through GM technologies, many
regular crops will no longer be planted. Those species could eventually become extinct.
Access Issues
Testing each new GM crop costs millions of dollars, so developers patent their inventions to protect their
intellectual property, which, although understandable, generally means that the technology remains in the hands of a
few companies. Statistics from OECD reveal that GM food development is dominated by a small number of firms:
"Between 1995 and 1999, 146 firms applied for at least one GM field trial. Ten years later the number declined by
almost half to 76 firms that applied for a field trial between 2005 and 2009." Thus, world food production is
concentrated in the hands of a few producers, leading to developing countries’ increasing economic dependence on
developed countries.
The high cost of development and the monopoly on intellectual property also result in high prices, making seed costs
prohibitive for most independent farmers, particularly those in developing countries. Thus, technology that originally
was invented to prevent world hunger is not accessible to those who need it most.
Ethical Questions
Another criticism leveled at GM is that it tampers with nature by mixing genes from different species. Some critics
question whether this violates the integrity of organisms and object to combining animal and plant genes. They also raise the issue of what
GM does to the animals themselves. Others also point to the effects on the experimental animals that are fed GM foods
during testing.
Along with these concerns is that of labeling GM products. The United States has no mandatory labeling for GM
ingredients. Activists believe that purchasers should be informed of what they are eating, but the costs to develop
standardized labeling and inspections would increase the price of food. So, although GM technology looks promising in
many ways, it is also fraught with difficulties that need to be addressed, not only by America, but by the global
community.
Biotechnology Around the World
In 1999 many countries of the world met in Cartagena, Colombia, to adopt the Cartagena Protocol on Biosafety. The
protocol was not finalized and accepted, however, until the following year in Montreal, Canada. By 2009, 157 countries
had ratified the document and had begun reaching the objectives the group had identified as paramount: "ensuring the
safe transfer, handling, and use of living modified organisms (LMOs) that result from modern biotechnology and that
may have adverse effects on biological diversity, taking also into account risks to human health." Signers included
forty-five African, forty-one Asian and Pacific, twenty-two Central and Eastern European, twenty-eight Latin
American/Caribbean, and twenty-one Western European countries. Noticeably absent from the list are the United States
and Canada.
One of the protocol’s other objectives is "to promote and facilitate public awareness, education, and participation."
Meeting this requirement has proved difficult for many of the participants. The Secretariat of the Convention on
Biological Diversity discussed this issue in the 2009 edition of Biosafety Protocol News: "Most developing countries
and countries with economies in transition lack the financial resources and technical capabilities to promote public
awareness and education concerning LMOs." In addition, although those countries were responsible for almost half of
all GM plantings in 2008, they did not have trained scientists to provide information about possible health risks.
Member countries try to rectify this inequity by offering Internet sessions and translating broadcasts into a variety of
dialects and languages.
Many countries passed legislation to implement the protocol. The European Union, for example, adopted laws that
require labels to indicate the presence of GM foods. Difficulties arise with this, though, especially when only trace
ingredients contain GMOs. Nevertheless, the international community has forged ahead of the United States, which as of
2010 had still set no such regulations.
Part of the problem in America is that GM foods fall under different jurisdictions. The EPA, which deals with
environmental safety, supervises pesticides, so it would be responsible for the pest-resistant crops. The USDA
determines whether plants are safe to grow, whereas the FDA evaluates whether products made from the plant are safe
to eat. All of these divisions would need to become involved in setting standards. For now, the FDA has classified GM
food as a "substantial equivalent" to natural foods, so it is not subject to regulation.
In spite of worldwide initiatives, many questions still remain unanswered, including the long-term health effects of
GMOs on humans. Although some people insist that the health risks are comparable to non-genetically modified foods,
the safety of GMOs has not been assessed over an extended period of time. This testing should be of paramount
concern because these products represent an important trend of the future.
Future of Biotechnology
As OECD stated in "The Bioeconomy to 2030," its International Futures Programme report, "GM or MAS [marker-assisted selection] varieties of soybeans and maize could be responsible for the vast majority of total plantings by
2012." OECD also predicts that by 2015 about half of all global food and livestock feed will come from plants that
have been genetically modified.
The OECD report continues, "Demand for food, feed, and fibre is expected to increase substantially in the future due to
population and income increases across the globe. To meet increased demand, a diverse range of solutions are going to
need to work in concert. Biotechnological solutions will play a major role, but will not provide a silver bullet. They will
need to be combined with other strategies to modernize agricultural methods and increase agricultural productivity (e.g.
through farmer education, improved water management and conservation, and precision farming)."
A Balanced View
Arguments are strong on both sides of the issue of GM foods, as Deborah B. Whitman, senior editor of Community
Supported Agriculture, succinctly sums up in her article "Genetically Modified Foods: Harmful or Helpful?":
"Genetically-modified foods have the potential to solve many of the world’s hunger and malnutrition problems, and to
help protect and preserve the environment by increasing yield and reducing reliance upon chemical pesticides and
herbicides. Yet there are many challenges ahead for governments, especially in the areas of safety testing, regulation,
international policy, and food labeling. Many people feel that genetic engineering is the inevitable wave of the future
and that we cannot afford to ignore a technology that has such enormous potential benefits. However, we must proceed
with caution to avoid causing unintended harm to human health and the environment as a result of our enthusiasm for
this powerful technology."
Source Citation:
"Genetically Modified Food." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Genetically Modified Food Should Not Be Banned, but Carefully
Monitored
Conor Meade lectures on ecology at the National University of Ireland, Maynooth.
Banning both the cultivation of genetically modified (GM) crops and the assessment of the ecological risk posed by GM crops is a
bad idea. Although there are environmental and health concerns over GM food, no scientific evidence has yet determined that
they are not safe. In addition, GM crops might be very beneficial and contribute to a sustainable future for all people. However,
without careful study, proper decisions about the cultivation and use of GM crops cannot be made. Carefully monitored
research on GM crops by public institutions is necessary. Without such research, a ban on GM crops is ill-considered.
As a dedicated home chef, as well as professional ecologist, I know that organic vegetables, poultry and dairy just taste
better. What matters about this great food is not the label, or the perceived chic of paying more for your food, but rather
the mentality of the organic farmer: using nature to get the best from nature. And it shows in the taste.
So it may come as some surprise that I differ from the Green Party [an Irish political party], particularly now that it is in
government, when it comes to the issue of genetically modified (GM) crops.
Banning GM Research Is Short-Sighted
The proposed strategy of banning not only the cultivation, but also the ecological risk-assessment of GM crops in
Ireland, is worrying and short-sighted. We should of course trade on Ireland's clean green image, as [Irish politician]
Trevor Sargent said recently when launching his GM-free Ireland idea.
While European consumer sentiment is against the idea of GM, we are indeed wise to market our food produce as GM
free, but the notion that we should also ban research on new crop technologies, as Sargent has suggested, is perhaps not
so enlightened.
There is, of course, some concern that GM crops might be harmful for us to eat. This is a legitimate concern, just as is
the concern for the quality of any food we eat. However, from a scientific perspective, there is no reason to believe that
GM crops should be any more harmful for us than conventional crops.
GM Foods May Be Better
Indeed from what we know about the genetic composition of edible plants, GM crops have much the same ingredients
as the others—and testing their safety continues all the time. It can even be argued that certain GM crops that are
resistant to, for example, herbicide or pests, are exposed to far less chemical contamination than most of the food we
eat. So clearly it is possible that GM food is not, in fact, bad for us at all.
Another concern is the environment. GM crops may cross with wild relatives or grow outside cultivation, but recent
evidence suggests that such is the ferocity of natural selection in wild habitats that only the leanest genomes, honed for
survival over thousands of generations, can actually succeed and reproduce.
Pitted against these highly fit wild plants, cultivated crops that have been bred to rely on ample nutrient and water
supplies are typically very weak. So for both conventional and GM crops, survival in the wild is just a bare possibility
and successful reproduction even less likely.
GM Crops May Survive Climate Change
Certainly the variety of traits that can be bred into crops using GM technology introduces new environmental
challenges. We know that climate change is drying the heart of Africa, changing countless lives along the way. There is
much hope, therefore, that traits giving increased hardiness, drought resistance and salt tolerance may be introduced to
staple crops, traits with a potentially huge benefit for subsistence arid-zone farmers throughout the developing world.
However, these are also the traits with the greatest potential for spread among related wild plants in the desert zone.
Here we might face a potentially difficult choice between starvation and conservation. It does not mean, however, that
we should ignore the potential breakthroughs that genetic modification may offer us.
On the other hand, herbicide-tolerant GM crops that are licensed for use in Europe really only pose a management
problem for our farmers—these plants will only thrive in fields where a particular brand of herbicide is used. For plants
growing anywhere else in woodlands, meadows, roadsides and sand dunes, such GM herbicide tolerance is a distinct
disadvantage.
GM Crops Can Contribute to a Sustainable Future
Additionally, as we approach the progressive escalation in the cost of petrochemicals, crops that need less pesticide
will begin to underpin the commercial viability of agriculture everywhere. Although GM crops are not the panacea for
the ills of the developing world, if the technology is put in the hands of publicly funded institutions, it will contribute to
building a more sustainable future.
The key to success lies in careful stewardship and this is where research has a critical role to play. There is a growing
body of independent, publicly funded scientific research that suggests GM crops are not in themselves harmful to the
environment. Put simply, making a new crop via GM methods rather than conventional crossing experiments does not
make these GM crops more "risky".
What matters is the new trait that has been put into the crop, be it disease resistance, changed starch content or
improved salt tolerance.
European legislation will not allow crops that are potentially hazardous to the environment to be grown here and those
that are judged to pose no potential harm must be managed very closely to avoid contamination of other crops. This
management process, allowing conventional, organic and GM crops to be grown together, without cross-contamination,
is known as co-existence.
We are at the point now of testing co-existence strategies to see if they can work, but the only way we can really do it
properly is to do controlled assessments in the field. If we spurn the opportunity to validate in the field the claims made
for GM crops, how can our Government stand up and be critical of these claims in Brussels?
Science may conclude at the end of the day that GM is a bad idea for us and for the environment. So be it. But what if
science doesn't say this, what if it says GM has indeed great potential to benefit us and the environment, what then?
Coming to Grips with GM
Probably the only way to put in place a durable strategy for stewardship of GM technology is not to turn our backs on
it, but to come to grips with it. Corporate ownership of seed patents is an issue that needs to be addressed, not least to
put consumer confidence on a firmer footing. But the problem has only arisen because public science has lagged so far
behind the private sector in following new opportunities.
Our public science infrastructure needs to take ownership of the issue, as [Professor] Liam Downey has pointed out. If
we spurn the opportunity, then the GM issue will remain divisive for the foreseeable future.
Source Citation:
Meade, Conor. "Genetically Modified Food Should Not Be Banned, but Carefully Monitored." Genetically Engineered Foods. Ed.
Nancy Harris. San Diego: Greenhaven Press, 2003. At Issue. Rpt. from "Careful Stewardship of GM Crops Is Needed, Not a Ban."
Irish Times 23 June 2007: 13. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Genetically Modified Food Should Be Banned
Andy Rees is the author of the book Genetically Modified Food: A Short Guide for the Confused.
Many faulty arguments have been raised in favor of genetically modified (GM) crops. First, proponents say that GM crops will
reduce pesticide and fungicide use. In reality, usage is scarcely reduced. Second, although proponents argue there will be little
contamination of traditional crops, cross-pollination between GM crops and traditional crops is widespread. Next, the biotech
industry has assured consumers that GM crops are safe. However, there are virtually no studies to support this claim. Finally,
proponents argue that GM crops have gained public acceptance and offer choice. The reality is that GM crops wipe out organic
farmers and have little support from consumers. The conclusion is that GM crops should be banned in the United Kingdom.
In August 2006, German chemicals company BASF applied to start GM [genetically modified] potato field trials in
Cambridge [England] and Derbyshire [England] as early as next spring [2007]. The GM industry is making many
claims about this product, but are these based on the truth?
Argument No. 1: We Need This Product
Late blight [a crop-damaging disease] costs UK [United Kingdom] farmers around £50m each year, even with regular
application of fungicides. BASF claims that its GM potato would reduce fungicide spraying from around 15 times a
year to just two.
This sounds impressive, until you realise that just 1,300 of the 12,000 tonnes of agrochemicals used on UK potatoes are
fungicides—meaning that, at most, pesticide usage would be reduced by only around 10 per cent.
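To make the arithmetic behind that ceiling explicit, using the tonnages the author cites: even if the GM potato eliminated fungicide spraying on UK potatoes entirely, the largest possible cut in total agrochemical use would be

\[
\frac{1{,}300\ \text{tonnes of fungicides}}{12{,}000\ \text{tonnes of agrochemicals}} \approx 0.11,
\]

or roughly one-tenth of current usage.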
As far as actually reducing pesticide usage is concerned, Robert Vint of Genetix Food Alert observes that "such claims
... usually [soon] prove to be extreme exaggerations". The biotech industry has a long track record of first exaggerating
a problem, then offering an unproven and oversold GM solution. A classic example of this was [chemical company]
Monsanto's showcase project in Africa, the GM sweet potato. It was claimed that the GM potato would be virus
resistant, that it would increase yields from four to 10 tonnes per hectare, and that it would lift the poor of Africa out of
poverty. However, this crop not only wasn't virus-resistant, but yielded much less than its non-GM counterpart.
Moreover, the virus it targeted was not a major factor affecting yield in Africa. The claims were made without any
peer-reviewed data to back them up. And the assertion that yields would increase from four to 10 tonnes per hectare
relied upon a lie—according to FAO [Food and Agriculture Organization] statistics, non-GM potatoes typically yield
not four but 10 tonnes. Furthermore, a poorly resourced Ugandan virus-resistant sweet potato, which really was roughly
doubling yields, was studiously ignored by the biotech lobby.
Also conveniently overlooked are any non-GM solutions to blight. Many conventional potato varieties are naturally
blight resistant, some of which the organic sector are currently trialling. Another non-GM control, used by organic
farmers against late blight in potatoes, is the use of copper sprays in low doses. This is applied to the foliage of the
plant and does not contaminate the tuber.
Argument No. 2: Minimal Contamination
An article in The Guardian, which reads more like a BASF press release ..., reports that "Andy Beadle, an expert in
fungal resistance at BASF, said the risks of contamination from GM crops are minimal because potatoes reproduce
through the production of tubers, unlike other crops such as oil seed rape [canola], which produces pollen that can be
carried for miles on the wind."
Not only is this remark economical with the facts, it seems a little brazen given the biotech industry's rather prolific
history on contamination issues, which has resulted in at least 105 contamination incidents (some of them major), over
10 years, and in as many as 39 countries.
Amongst many other things, Mr Beadle forgot to mention that there is less direct risk of contamination by cross-pollination, not no risk. Furthermore, cross-pollination is much higher when the GM and non-GM potato varieties are
different; one study showed that, even at plot-scale, 31 per cent of plants had become hybrids as far as 1 km [kilometer]
from a GM variety. Cross-pollination also increases greatly when the chief pollinator is the 'very common' pollen
beetle, which travels considerably further than another potato pollinator, the bumble bee. Years later, cross-pollination
is still possible through potato volunteers (plants from a previous year's dropped tubers or seed); Defra [United
Kingdom Department for Environment Food and Rural Affairs] itself has acknowledged this problem. And similarly,
'relic' plants can persist in fields or waste ground. What is more, blight-resistant varieties create a far greater risk of GM
contamination because the flowering tops are more likely to be left on than with non-blight-resistant varieties. This is
because tops are usually removed from non-blight-resistant varieties to reduce disease incidence. Also, a number of
modern strains can produce considerable numbers of berries, each producing 400 seeds; these can lay dormant for
seven years, before becoming mature tuber-producing plants.
And if all that isn't enough to suggest that 'minimal' contamination is the figment of the corporate imagination, then it is
well worth checking out the March 2006 GM Contamination Register, set up by Greenpeace and GeneWatch UK....
This includes some of the worst contamination incidents to date, including the following three.
In October 2000, in the US, GM StarLink corn, approved only as animal feed, ended up in taco shells and other food
products. It led to a massive recall of more than 300 food brands and cost Aventis [an international pharmaceutical
company] an immense $1 billion to clear up. StarLink corn was just one per cent of the total crop, but it tainted 50 per
cent of the harvest. In March 2005, Syngenta [an international agribusiness] admitted that it had accidentally produced
and disseminated—between 2001 and 2004—'several hundred tonnes' of an unapproved corn called Bt10 and sold the
seed as approved corn, Bt11. In the US, 150,000 tonnes of Bt10 were harvested and went into the food chain. And in
April 2005, unauthorised GM Bt rice was discovered to have been sold and grown unlawfully for the past two years in
the Chinese province of Hubei. An estimated 950 to 1,200 tons of the rice entered the food chain after the 2004 harvest,
with the risk of up to 13,500 tons entering the food chain in 2005. The rice may also have contaminated China's rice
exports. And now, in 2006, BASF's application comes amidst the latest biotech scandal, that of US rice contamination
by an unauthorised, experimental GM strain, Bayer's LLRice 601.
Argument No. 3: Separation Distances
The GM lobby have proposed a buffer zone of 2-5m [meters] of fallow land around the GM potato crop, together with
a 20m separation with non-GM potato crops.
The National Pollen Research Unit (NPRU), on the other hand, has recommended separation distances of 500m.
Interestingly, pro-industry sources have always claimed that only very small separation distances are necessary, with
buffer zones for rape set at a derisory 200m in the UK crop trials. Judith Jordan (later Rylott) of AgrEvo (now Bayer)
gave evidence under oath that the chances of cross pollination beyond 50m were as likely as getting pregnant from a
lavatory seat. Well, you have been warned. But oilseed rape pollen has been found to travel 26km, maize [corn] pollen
5km, and GM grass pollen 21km.
Meanwhile, good ol' Defra is once again paving the way for the biotech industry, with its so called 'co-existence' paper
of August 2006. This will determine the rules for commercial GM crop growing in England—yet astonishingly, it
proposes no separation distances. GM contamination prevention measures will be left in the slippery hands of the GM
industry in the form of a voluntary code of practice.
Argument No. 4: This Product Is Safe
The biotech industry has from the very beginning assured us that their products are entirely safe. This is because, they
claim, they are so similar to conventional crops as to be 'Substantially Equivalent', a discredited concept that led to GM
crop approval in the US (and thence the EU [European Union]).
The truth is that, as far as human health goes, the biotech industry cannot know that their products are safe, because
there has only been one published human health study—the Newcastle Study, which was published in 2004. And
although this research project was very limited in scope, studying the effects of just one GM meal taken by seven
individuals, it nonetheless found GM DNA transferring to gut bacteria in the human subjects.
As for tests of the effects of GM crops on animals, there are only around 20 published studies that look at the health
effects of GM food (not hundreds, as claimed by the biotech lobby), as well as some unpublished ones. The findings of
many of these are quite alarming. The unpublished study of the FlavrSavr tomato [a GM tomato] fed to rats resulted in
lesions and gastritis in these animals. Monsanto's unpublished 90-day study of rats fed MON863 maize resulted in
smaller kidney sizes and a raised white blood cell count. And when it comes to GM potatoes, Dr Ewen and Dr Pusztai's
1999 10-day study on male rats fed GM potatoes, published in the highly respected medical journal The Lancet,
showed that feeding GM potatoes to rats led to many abnormalities, including: gut lesions; damaged immune systems;
less developed brains, livers, and testicles; enlarged tissues, including the pancreas and intestines; a proliferation of
cells in the stomach and intestines, which may have signalled an increased potential for cancer; and the partial atrophy
of the liver in some animals. And this is in an animal that is virtually indestructible.
Argument No. 5: Increasing Choice
The proposed UK trials would follow those being carried out in Germany, Sweden and the Netherlands. Barry
Stickings of BASF explains: "We need to conduct these [in the UK] to see how the crop grows in different conditions. I
hope that society, including the NGOs [non-governmental organizations] realise that all we are doing is increasing
choice."
So, how much choice have GM crops given farmers? Well, in Canada, within a few years, the organic canola industry
was pretty much wiped out by GM contamination. And in the US, a 2004 study showed that, after just eight years of
commercial growing, at least 50 per cent of conventional maize and soy and 83 per cent of conventional canola were
GM-contaminated—again dooming non-GM agriculture.
Argument No. 6: Public Opinion
Regarding BASF's application to trial GM potatoes, the Financial Times reported that "Barry Stickings of BASF said
he did not expect too much opposition to the application". What had clearly slipped Stickings' mind was that BASF had
already faced protests with this product in Sweden, where it is in its second year of production.
In Ireland, where one may have expected more enthusiasm for the project, given the history of blight during the 1840s
famine, BASF was given the go ahead earlier this year for trials of its GM blight-resistant potato, only to face stiff
public resistance and rigorous conditions enforced by the Irish Environmental Protection Agency. BASF later
discontinued the trials.
In the UK and Europe, as Friends of the Earth points out: "Consumers ... have made it clear that they do not want ...
GM food". In fact, the British Retail Consortium, which represents British supermarkets, has already stated that they
'won't be stocking GM potatoes for the conceivable future' because 'people remain suspicious of GM'. My forthcoming
book goes into the rejection of GM crops in more depth.
And even more surprisingly, in the US, where 55 per cent of the world's GM crops are grown, GM potatoes were taken
off the market back in 2000 when McDonald's, Burger King, McCain's and Pringles all refused to use them, for fear of
losing customers.
Conclusion: Ban GM Crops
So, having reviewed the claims made about BASF's GM potatoes, and having found them, well, somewhat lacking, there is only
one course of action open to the government, and that is, as Friends of the Earth's GM Campaigner Liz Wright recently said, to
"... reject this application and prevent any GM crops from being grown in the UK until it can guarantee that they won't
contaminate our food, farming and environment."
Source Citation:
Rees, Andy. "Genetically Modified Food Should Be Banned." Genetically Engineered Foods. Ed. Nancy Harris. San Diego:
Greenhaven Press, 2003. At Issue. Rpt. from "GM Potatoes—Facts and Fictions." The Ecologist 36.9 (22 Sept. 2006): 14-15. Gale
Opposing Viewpoints In Context. Web. 10 Apr. 2012.
Juvenile Offenders
Juvenile offenders are classified by most legal systems in Western nations as people who have not reached the age of
majority, which is the legal definition for the threshold of adulthood. In most of the United States, this threshold is
eighteen years old for those who commit a criminal offense. Some states have laws, however, that allow a person as
young as fourteen to be tried as an adult for specified violent crimes.
Treatment of Juvenile Offenders in U.S. History
Juvenile crime has been a feature of almost every society, but how authorities choose to deal with it has varied
according to time and place. In the nineteenth century, in the midst of widespread social reform movements in Great
Britain and the United States, the idea that minors may be excused from criminal liability for their actions because of
their age became a tenet of the legal system, and separate juvenile courts and institutions were established to deal with
this largely urban problem. These detention facilities, called reform schools or reformatories, were designed to
rehabilitate the offender and keep him or her separate from the adult prison population, which was known to be both a
danger and a corrupting influence to minors.
The first reform school to open in the United States was the Lyman School for Boys in Westborough, Massachusetts, in
1846. Over the next few decades, similar institutions opened across the United States. Sometimes they were known as
industrial schools because they gave the incarcerated youth a chance to learn a trade or skill that would allow them to
enter the workforce once released. In the latter half of the twentieth century, however, these and similar institutions
were condemned as mere warehouses for problem children, most of whom exited at age twenty-one and went on to
become career criminals, and a renewed effort at curtailing juvenile crime became a topic of public debate.
The most significant court decision on the rights of juvenile offenders came in the 1967 U.S. Supreme Court ruling on
In re Gault. The case involved fifteen-year-old Gerald Francis Gault of Gila County, Arizona, who was detained by the
local sheriff in June 1964 after a neighbor reported that he made a prank phone call to her house. Gault was known to
authorities for two previous incidents, both involving theft, and was already on probation. When he was taken into
custody for the prank call, neither of his parents was at home, and there was no formal notice that he had been
detained, but his parents eventually learned of his whereabouts. A few days later, Gault was questioned by a judge of
the Gila County juvenile court, released into his parents’ custody, then summoned for another hearing. At that hearing,
the judge declared him to be a juvenile delinquent and sentenced him to the Arizona State Industrial School in Fort
Grant until he turned twenty-one. Arizona juvenile law did not allow for an appeal to be filed. Had he been charged
with making a lewd phone call as an adult, Gault would have faced a maximum penalty of fifty dollars and two months
in jail.
Gault’s parents fought the ruling with the help of a sympathetic local attorney and the American Civil Liberties Union,
and In re Gault went all the way to the Supreme Court. In the landmark 1967 ruling, the Court found that several
violations of due process had taken place in sentencing Gault to more than five years in the state school. Under most
state juvenile codes of law at the time, underage offenders did not have to be informed of the charges against them, did
not have the right to have a lawyer present during questioning, and did not have any legal protection against making
self-incriminating statements. All of these protections were constitutionally guaranteed to adults, however. In Gault’s
case, the neighbor woman was not required to appear in court or send legal representation, and no transcripts were
recorded of Gault’s meetings with the sheriff or judge. "Neither the fourteenth amendment nor the Bill of
Rights is for adults alone," the Supreme Court asserted in its ruling, which forced juvenile court systems across the
United States to comply with a new set of fair standards for minors.
Another important milestone in the treatment of juvenile offenders came in 1974, when Congress passed the Juvenile
Justice and Delinquency Prevention Act of 1974. This established the Office of Juvenile Justice and Delinquency
Prevention (OJJDP), a Department of Justice division that maintains statistics on juvenile crime and doles out federal
funds for juvenile-crime-prevention programs. The act also aimed to reduce the number of juvenile offenders by
ordering states to distinguish between two types of juvenile crime. The lesser is the status offense, an act that is not a
crime if an adult does it. This includes school truancy, running away from home, and possession of alcohol or tobacco.
The act forbids the detention of juveniles for status offenses, mandating probation instead, although it does permit
exceptions for weapons-possession charges. The second type of offense is the delinquency offense, defined as a
criminal act no matter what the age of the perpetrator, such as theft or assault.
Fears of Superpredator Kids
Sociologists have struggled to find answers to why juvenile crime in the United States seems to rise and fall in roughly
fifteen-year cycles. Peak years occurred between 1968 and 1975, but the rate of arrests for juveniles slowed after 1976.
They began to rise again in 1985, and over the next decade the rate jumped 67 percent, with an alarmingly higher
number of those arrests for the commission of violent crimes. A book titled Body Count: Moral Poverty … and How to
Win America’s War Against Crime and Drugs, published in 1996, warns of a new type of violent juvenile offender
dubbed the "superpredator." Such youths, it argues, were born to drug-addicted mothers during the crack-cocaine
epidemic of the 1980s, and had been raised in either an atmosphere of lawlessness at home or general negligence inside
the foster-care system. Such offenders, the book asserts, are "radically impulsive, brutally remorseless youngsters,
including ever more preteenage boys, who murder, assault, rape, rob, burglarize, deal deadly drugs, join gun-toting
gangs and create serious communal disorders."
Fears of the superpredator led several states to pass tougher juvenile-crime laws in the late 1990s. These laws allowed
states to try some offenders as adults, who were then sentenced to terms in adult prisons. Later studies that tracked
these cases, however, found that once such offenders were released from prison, they committed a serious crime more
often than their counterparts who had gone through the traditional juvenile justice system. This is known as recidivism,
a tendency to relapse into previous behavior, such as criminal activity.
Another approach to dealing with juvenile offenders in the 1990s was the juvenile boot camp. These programs were
offered to offenders in lieu of a much longer sentence at a traditional juvenile detention facility. Boot camps used
military-style training tactics, including hours of arduous physical exercise, to reform juvenile offenders.
Unfortunately, the camps were also rife with abuse from guards, and after deaths of teens occurred in several states, the
facilities either came under much stricter state supervision or were closed altogether. Some parents were enthusiastic
advocates of the camps, asserting that the tough-love approach had worked miracles on a previously stubborn and
misbehaving son or daughter. The recidivism rates for those who went through a boot camp program were about the
same as for those who had completed standard juvenile-correction programs in detention facilities, however.
The fears about a superpredator never proved correct, and juvenile crime rates actually began to drop after 1995. The
statistics a decade later showed a continued decline. In 2008, there were 2.1 million arrests made of juveniles,
according to the OJJDP.
Alternatives to Detention
After the failure of boot camps, new treatment and supervision programs developed. These programs allow offenders to
remain at home, but they undergo comprehensive counseling and other treatment, including family therapy. New York
City’s Juvenile Justice Initiative (JJI) was launched in early 2007. It allows some medium-risk offenders to return
home, but they remain under close probationary supervision and undergo family counseling and individual therapy. A
year later, program executives noted that of the 275 youths who had entered the JJI program, fewer than 35 percent of
them had been arrested again or found in violation of their probation.
Source Citation:
"Juvenile Offenders." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Teen Curfews Create a Sense of Safety
Patrick Boyle is editor of Youth Today, a national newspaper for youth service professionals.
The few scientific studies on age-based curfews demonstrate that they have little to no effect on juvenile crime or victimization.
In addition, many towns and cities do not have the data to support claims that juvenile crime is on the rise. However, a growing
number of communities across the United States are adopting curfews, and a recent national survey shows that local officials
widely support them. The drive of policy makers to impose curfews is not about studies or data—it is about a community's
impression of juvenile delinquency and crime, in which anecdotal evidence is gathered from neighbors, the local news media,
and personal observations or experiences. To these people, curfews are reasonable and successful when they contribute to a
feeling of safety in the community.
As head of the Center on Juvenile and Criminal Justice, Dan Macallair has repeatedly given city officials scientific data
that juvenile curfews don't reduce youth crime. That evidence includes a study he co-authored, which appears to be the
most comprehensive research ever done on the subject.
But Macallair can't compete with Lerrel Marshmon. Last year [2005], Marshmon appeared before the town council of
Knightdale, N.C., urging it to impose a curfew. The minutes of the meeting summarize his evidence:
"Lerrel Marshmon, 308 Laurens Way, Knightdale, stated that he gets off work at 10:00 p.m. and frequently the gang
members come onto his property. He stated that the kids verbalized swear words to his wife. Mr. Marshmon stated that
he came outside to ask the kids to leave his property and they threatened him and his family. He explained that he
thinks the problem is very serious."
Macallair says that when he gives officials his evidence about curfews, he usually gets "no response." After Marshmon
and others spoke in Knightdale, the town instituted a curfew, and this summer decided not to change it. Knightdale is
among numerous communities that have recently turned to curfews to crack down on an alleged rise in juvenile crime.
San Francisco, Houston, Washington, Oklahoma City, New Haven, Conn., Kinston, N.C., and the New York
communities of Rochester, Oswego, Fulton, East Syracuse and Wyoming County—those are just some of the places
that have created, expanded, restored or considered curfews over the past several months [of 2006].
This despite the fact that research shows little or no evidence that curfews work. And the rise in juvenile crime? In
many if not most of the towns, there's no data to support the claim.
While it's routine to bemoan the gap between research and practice in youth work, perhaps nowhere is that gap wider
than between the popularity of youth curfews and the research about their effectiveness.
"What's most astounding," Macallair says about the research, "is that it's one of those areas where there doesn't seem to
be any relationship whatsoever to policy analysis."
Indeed, a survey released this year [2006] by the National League of Cities shows that among more than 200 cities with
curfews, officials in 96 percent consider them "very" or "somewhat" effective. The headline on the league's news
release: "Youth Curfews Continue to Show Promise."
The league called curfews "a growing trend."
If curfews are demonstrably ineffective, are all those mayors, county supervisors and police chiefs ignorant, deceitful
or out of their minds?
One culprit is the youth field's perennial problem of poor research dissemination. Officials considering curfews
typically don't know about the work of Macallair and others.
But it probably wouldn't matter. While advocates who oppose curfews think their data make for a slam-dunk case,
policymakers aren't impressed; they have other factors on their minds.
Data Do Not Matter
"Whereas, the town council has determined that there has been an increase in juvenile violence, juvenile gang activity
and crime by persons under the age of 18 ..."
So begins the ordinance that created the curfew in Knightdale (pop.: 6,000). But ask Police Chief Ricky Pope for data
to back up the statement, and he says he probably has none. "It really wasn't because juvenile crime was up," he says of
the curfew.
The answers are similar around the country. While officials justify curfews with claims about increases in youth crime,
few can provide statistics to show it.
The city of Oswego, N.Y., is considering a youth curfew, but the main curfew proponent hasn't asked the police
department for juvenile crime numbers. "If they have it, we don't get it," says Councilwoman Barbara Donahue.
Oklahoma City expanded the hours of its youth curfew in August [2006] for its popular nightlife section, Bricktown,
after business owners said youth crime and gang activity were rising there. City police say they have no juvenile crime
statistics for Bricktown.
In Kinston, N.C.—which instituted a trial curfew in June and made it permanent in September [2006]—Councilman
Van Broxton voted for the measure, but says, "I don't know that we had a lot of criminal data."
Even when data are provided, the conclusions are debatable:
Police statistics from Rochester, N.Y., show juvenile arrests virtually unchanged from 2004 to 2005 (1,526 vs. 1,523).
Arrests increased during the first five months of this year [2006], then dropped sharply for three months—the three
months immediately preceding the city's new curfew. When the curfew began on Sept. 5 [2006], the number of juvenile
arrests for the first eight months of the year was down by 6 percent from the same period last year [2005]. (After the
curfew, however, the arrests fell even faster.)
In San Francisco, the mayor announced in September [2006] that the city would begin enforcing its long-ignored curfew
for anyone under 14, in response to an increase in overall violent crime. City statistics show that juvenile arrest rates
declined significantly over the past decade but rose slightly from 2004 to 2005. They also show that youth under 14
make up a small percentage of those detained in the city's juvenile hall—8.4 percent last year [2005], and 6.4 percent
through August of this year [2006].
Perhaps the strongest statistics came from Washington, D.C., which extended the hours of its curfew this summer in
response to what it called a crime emergency. Among other things, police cited an 82 percent increase in juvenile
robberies.
Jason Ziedenberg, executive director of the Washington-based Justice Policy Institute, sat before the city council and
demonstrated the futility of fighting curfews with data. He argued that the city's crime increase was driven by adult
crime. He argued that the juvenile robbery totals were so small (rising from 70 to 134) that large jumps in percentages
were misleading. While police said the curfew reduced juvenile arrests by 46 percent, Ziedenberg countered with an
analysis which said that over 23 days, the new curfew rules reduced arrests from 15 to 13. When he was done, "there
wasn't a single word from any councilor about my testimony," Ziedenberg recalls. He even asked one of them if he had
any questions, "and there was no comment."
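Ziedenberg's objection is essentially arithmetic: when the base numbers are small, modest absolute changes translate into dramatic-looking percentages. The short sketch below is an illustration added here, not part of his testimony; it simply recomputes percentage changes from the absolute figures quoted above, using a hypothetical helper function.

    # Illustration only: percentage changes recomputed from the absolute figures
    # quoted in the article (juvenile robberies rising from 70 to 134; arrests
    # over 23 days of the new curfew rules falling from 15 to 13).
    def pct_change(before: float, after: float) -> float:
        """Return the percentage change from `before` to `after`."""
        return (after - before) / before * 100.0

    print(f"Robberies, 70 -> 134: {pct_change(70, 134):+.0f}%")  # roughly +91%
    print(f"Arrests, 15 -> 13:    {pct_change(15, 13):+.0f}%")   # roughly -13%

A 64-robbery increase and a 2-arrest decrease are both small in absolute terms, which is why Ziedenberg argued that the percentage framing was misleading.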
Impressions Drive Policy
The evidence that has driven policymakers to impose curfews this year [2006] is primarily not about data; it's
impressionistic.
"It really wasn't because juvenile crime was up," the Knightdale police chief says of the curfew there. "Our problem
was we were having groups of people, juveniles, hanging out in different locations and pretty much harassing the
public as they walk down the sidewalk." There also seemed to be more graffiti.
The story is similar in Oswego, where some of the concerns clearly involve youth, while others involve youth by
implication.
Residents have been complaining about "young people ... demanding money, using obscenities, throwing eggs at cars,"
Councilwoman Donahue says. "They're out here at 11 o'clock right to three or four o'clock in the morning. I've seen
them myself."
She says there's been more vandalism, including to a Little League concession stand and a city pool. She adds that town
officials have been hearing more from "homeowners with their cars being broken into. Cars keyed, rifled through." She
says it happened to her daughter-in-law.
How does she know the culprits are kids? "You can tell just by the loose change," Donahue says, noting that the
perpetrators seem more intent on being a nuisance than on finding valuables. "I don't think an adult would take and
throw stuff all over the place.... An adult would probably just take what they need and leave."
In some towns a few serious, high-profile crimes are behind the curfews. Rochester imposed its curfew in September
[2006] because of what the Rochester Democrat and Chronicle called "a spate of violence involving youths last year
[2005] that has continued this year." The city reported that seven youths (ages 12 to 17) were killed in 2005; the curfew
idea gained momentum after a 15-year-old was shot to death outside a recreation center one night last fall [in 2005]. In
New Haven, Conn., a flurry of violence this summer [2006]—including the shooting deaths of three teens—[had]
officials considering a curfew. A city alderwoman also cited kids on bicycles stirring up trouble in her neighborhood.
To be sure, arrest data are not a perfect reflection of criminal activity in a community; vandals routinely get away. And
some of the quality-of-life issues that residents complain about, such as being shouted at by teens, often don't lend
themselves to arrests.
The curfews show how a community's belief about crime—based on what residents see and talk about among
themselves, and what the news media and government officials report—speaks louder than spreadsheets.
The Spreadsheets
There are few studies about the impact of curfews, and their findings are uniform. In 2003, the Urban Institute released
two studies of curfews in Prince George's County, Md., which borders Washington. They found "little support for the
hypothesis that the curfew reduced arrests and calls for service during the curfew hours," and "little support for the
hypothesis that the curfew reduced violent victimization of youth within the curfew age." The studies were funded by
the National Institute of Justice.
Prince George's County still has a youth curfew.
The largest curfew study looked at California, including jurisdictions with and without curfews. Conducted by the
Justice Policy Institute with funding from The California Wellness Foundation, the study looked at youth arrest and
crime rates from 1978 through 1996, and was published in 1998.
The main finding: "No evidence that curfews reduce the rate of juvenile crime." Counties with strict curfews saw no
decrease in crime compared with counties without strict curfews. Macallair compiled the study with researcher Mike
Males (who is also a Youth Today columnist).
So what?
In Oswego, Councilwoman Donahue says no one has talked about finding studies about the impacts of curfews,
although a curfew committee might do that.
Asked if Knightdale looked at studies about the effects of curfews elsewhere, Police Chief Pope says, "No." And they
don't much care. People in towns with curfews are comfortable judging their effectiveness not by data from other states,
but by observations made by themselves and people they trust.
In Kinston (pop.: 24,000), the public safety chief talked with officials from other towns with curfews, who said the
curfews were working, Councilman Broxton says. One such town is Knightdale, where Councilman Jeff Eddins says,
"What I would have someone look at is the number of complaints that we no longer get. The number of streets you can
now drive down and not have people harassing or cursing you."
That approach explains why Ziedenberg of the Justice Policy Institute says that when he gives government officials
evidence that curfews don't work, "the reaction varies from stony silence to dismissal."
An Attractive Option
It's easy to see why local government officials like curfews.
"I've been on the council for six years, and there's never been an issue that brought as many people out as this issue,"
says Eddins in Knightdale. "From a political standpoint, it was an easy decision to make. You've got a majority of your
citizens saying, 'Take action now. We want this fixed.'"
In Rochester, a local TV station (WROC) asked residents in September [2006], "Are you in favor of Rochester's youth
curfew?" Eighty-eight percent said "yes."
If it were up to you, would you go against such public wishes, and stake your case on what Macallair says happened in
California a decade ago?
Macallair understands why curfews seem reasonable to most people. "From a gut level, you want to have police be able
to arrest kids who are out on the street after hours," he says. "If you've got kids, that makes lots of sense." It helps that
police usually support the proposals. In some communities, however—such as Oswego and New Haven—police have
objected to diverting their resources to chase kids home. "Police officers have enough to do right now besides baby-sit
for other people's children," the president of the New Haven police union said in the Providence Journal. He added that
a curfew would "create more hostility between our cops and the kids."
Feeling an Impact
In Knightdale, there are no data to say juvenile crime has gone down since the curfew. But Chief Pope says the
measure has been a success, based on "nothing other [than] the citizens saying it's made a big difference: 'I don't feel
intimidated anymore walking down the sidewalk.'"
"The result has been great," says Eddins, the councilman. "You had groups of youth gathering in the middle of the
street, on sidewalks. They were cursing, making threatening gestures to families....
"With the curfew, that's been eradicated. People can actually walk up and down the streets. The kids can play in the
front yards."
So while studies might see no change in crime statistics, residents weigh quality-of-life issues that don't show up in the
statistics. "Every time I talk to [community leaders], they tell me the same stuff: Thanks for the curfew. Their
neighborhoods feel safer," Eddins says.
When it comes to curfew decisions, data are no match for feeling safe.
Source Citation:
Boyle, Patrick. "Teen Curfews Create a Sense of Safety." Are Teen Curfews Effective? Ed. Roman Espejo. Detroit: Greenhaven
Press, 2009. At Issue. Rpt. from "Curfews and Crime." Youth Today Nov. 2006: 36-38. Gale Opposing Viewpoints In Context. Web.
10 Apr. 2012.
Teen Curfews Should Not Be Supported
Based in Washington, D.C., the National Youth Rights Association is the largest youth-led organization in the United
States.
Teen curfews should be opposed on several grounds. First of all, curfews do not reduce juvenile crime. In fact, the current
available data show that these laws may actually increase crime. Second, teen curfews interfere with parenting. Parents do not
need the government to manage their households or set curfews for their children. Finally, teen curfews violate civil rights.
Though not inherently racist, curfews are more heavily enforced in black communities. In addition, lower courts have struck
down these laws because they restrict the exercise of free speech. Government intervention should not be required to monitor
young people's personal activities at any given hour.
We have curfews? What are they?
Curfews usually exist only in times of national emergency or military occupation. On June 14, 1940, when the
Germans occupied Paris, they imposed an 8 o'clock curfew. The United States puts a new twist on this familiar concept
by setting curfews during times of peace for all young people under a certain age. Curfew laws are often set by a city or
a state and make it illegal for a person underage to be outside during certain times. For example, in the state of
Michigan, it is illegal for a person under 16 to be out in public between midnight and 6 a.m. Cities within the
state often impose curfew laws with stricter requirements than the state.
What are penalties for breaking curfew?
That depends on the law; each one is different. In some cases, the police will simply give a warning, others will make
the youth return home; in other cases there may be a fine or jail time involved. For example, in St. Louis, Missouri,
curfew violators face up to $500 in fines and 90 days in jail. In some cases, parents face penalties when their children
are out past curfew as well. In St. Louis, if a young person has been picked up for curfew and taken to the police
station, the parents must pick him or her up from the station within 45 minutes or face penalties of up to $500 in fines
and 90 days in jail.
What are daytime curfews?
In addition to laws that make it a crime to be outside at night, there are also laws that make it a crime to be out during
the day, usually during school hours. The city of Los Angeles has a curfew making it illegal for anyone under 18 in
school to be in public between the hours of 8:30 a.m. and 1:30 p.m.
Does my city have a curfew?
Possibly—youth curfews are spreading in cities and states all across the country. Please visit this list [at
www.youthrights.org] to see if you are included. Due to the rapid expansion of curfew laws in the last few years, our
list may not be complete, but it's the best we've seen. You can help add to our list by providing us information on your
cities' curfew laws.
Do curfews cut down on youth crime?
No. Supporters of youth curfews cite only anecdotal and incidental data; the only rigorous study on the effectiveness of
youth curfews at reducing crime found they had no effect. Researchers Mike A. Males and Dan Macallair said,
"Statistical analysis does not support the claim that curfew and other status enforcement reduces any type of juvenile
crime, either on an absolute (raw) basis or relative to adult crime rates. The consistency of results of these three
different kinds of analysis of curfew laws point to the ineffectiveness of these measures in reducing youth crime." In
fact, curfew laws may even lead to increased crime: "The current available data provides no basis to the belief that
curfew laws are an effective way for communities to prevent youth crime and keep young people safe. On virtually
every measure, no discernable effect on juvenile crime was observed. In fact, in many jurisdictions, serious juvenile
crime increased at the very time officials were [touting] the crime reduction effects of strict curfew enforcement."
Let's think about this rationally. Curfew laws are intended to stop young people from committing crimes by making
them stay inside. If a person intends to commit a crime by stealing a car, vandalizing a home, or dealing drugs, why
would they have any respect for another law that made it illegal to be outside? Aren't laws against auto theft, property
damage, and drug dealing enough?
Curfews don't affect crime and only hurt innocent youth—repeal them.
A Family Decision
Should 5-year-olds be free to roam the street at 4 in the morning?
That's a family decision. Parents should be able to set curfews, not government. Parents know their children far better
than an impersonal law and should be given the discretion to parent.
If curfew laws are repealed, kids will be more likely to defy their parents' curfews, seeing that the government no
longer is concerned about this issue, right?
There are no laws against yelling in the house, running with scissors, or pulling hair, but parents manage to handle
these issues just fine. Why do parents need police to back them up when setting curfews? As the experience of Prince
George's County, Maryland, shows, often parents don't even know about the curfew law. "Despite a number of public
service announcements and the distribution of 40,000 brochures to middle- and high-school students to educate them
about the curfew, awareness of the curfew is not universal among parents—only three in four parents of teenagers
knew of it."
Curfews and Crime
Don't curfew laws help the police fight crime?
Police are split on this issue. Some officers see curfews as a useful tool: creating such a broad offense lets police stop
and question any individual under a certain age, a way to get around individual rights. Many other officers, however,
feel curfew laws create a drain on police time and resources, forcing them not only to serve and protect, but also to
parent. With murderers and rapists loose on the street, making sure Billy isn't out too late should not be a police
priority.
Are curfews racist?
Not inherently, but usually they turn out to be. Curfew laws give a great amount of discretion to police officers, which
... often leads to racist enforcement of curfew laws. Curfew laws are heavily enforced in black neighborhoods, but not
as much in white neighborhoods. Likewise, white youth are less likely to be stopped by police than black youth.
Because of this, the rate of arrest for blacks in 2000 was 71% higher than that for whites.
Curfew hours target the period of highest youth crime, right?
No, nighttime and daytime curfews don't cover the stretch of time most juvenile crime occurs—the afternoon.
According to the FBI, "Youth between the ages of 12 and 17 are most at risk of committing violent acts and being
victims between 2 p.m. and 8 p.m." These are times that no curfew laws cover.
Curfews only exist in places with high rates of juvenile crime; curfew laws aren't introduced baselessly, right?
Wrong. In response to a grisly string of murders in Manning, South Carolina, the city council proposed a youth curfew.
The problem, however, was that the suspect was 37 years old, and the proposed youth curfew would have had no effect
whatsoever on the murders that shocked this small town. The experience of Manning is not unusual: communities with
no youth crime problems whatsoever choose to enact curfew laws. In fact, except for the elderly, juveniles account for
the lowest proportion of crime of any age group. So if adults commit 75-90% of all crime, where is the urgent need for
curfew laws to protect society from violent youth?
Don't curfew laws protect young people from being victimized by criminals; shouldn't youth be glad such laws protect
them?
If young people were concerned about violent criminals, they would stay inside voluntarily; no law would be needed.
This line of reasoning is only correct if applied to all people at risk of being attacked by criminals. Of course, all people
are at risk of crime; if protecting innocent people from crime were a legitimate concern, then all people regardless of
age would clamor for, and accept, curfews governing their lives. Would a requirement that all U.S. residents be inside
by 11 p.m. free the country of all crime?
In a Free Country
Are curfew laws unconstitutional?
There have been many court challenges to curfew laws around the nation, and so far courts are split on this issue. With
no U.S. Supreme Court ruling on the issue, there is no easy answer to offer. In general, lower courts recognize that
curfews impose restrictions to the 1st Amendment right of free speech and have struck down many laws that impose
too heavy a burden on the exercise of youth's free speech rights. These same courts will often uphold curfew laws once
exceptions have been written to allow for political protests. The narrow interpretation of 1st Amendment rights is a
tragedy and ultimately ignores the more pressing liberty rights at issue.
Curfew laws are also deemed to be constitutional if they serve a compelling state interest, in this case, the reduction of
juvenile crime. However, as no study has shown that curfews in fact reduce crime, this assertion is false. With no
compelling state interest, NYRA [National Youth Rights Association] strongly asserts curfew laws are unconstitutional
and must be struck down.
Curfew laws often have exceptions if the person is coming home from work, or in an emergency; what else would a
youth want to be out at night for?
In a free country, it is not our place to decide what is appropriate for our neighbor to do or not do. Freedom doesn't
require proof to justify one's decisions. If a teen wants to take a stroll and gaze at the moon, that's her decision. If a teen
feels it's too hot during the day and prefers jogging at night or early in the morning, that's his decision. If a teen wants
to go to the park and count blades of grass at 3 in the morning, from what harm do we suffer? Freedom is not the result
of exceptions to the law; the laws are the exceptions to freedom.
What can I do to help fight curfews?
We're glad you asked. NYRA has provided for you a resource with lots of information on what you can do to fight
curfews in your area. Print out stickers, start a NYRA chapter, hold a protest, and of course, let the media know. Check
out NYRA's anti-curfew action site to start a campaign against your curfew. Since NYRA is one of the top
organizations fighting curfews, joining the organization is a good step against curfews.
Source Citation:
"Teen Curfews Should Not Be Supported." Are Teen Curfews Effective? Ed. Roman Espejo. Detroit: Greenhaven Press, 2009. At
Issue. Rpt. from "Curfew FAQ." www.youthrights.org/curfewfaq.php. 2008. Gale Opposing Viewpoints In Context. Web. 10 Apr.
2012.
Medical Marijuana
Marijuana is the most commonly used illicit drug in the United States. Many advocates consider marijuana a harmless,
or even beneficial, substance that should be made legal. Among the arguments for legalizing marijuana is its purported
value in treating symptoms of serious diseases, including AIDS, glaucoma, cancer, multiple sclerosis, epilepsy, and
chronic pain. Several states have enacted laws permitting use of marijuana for medical purposes, and legalization has
strong public support. Nevertheless, some medical practitioners and lawmakers continue to argue against legalization.
Medical Benefits
Marijuana is the term for the dried leaves, flowers, or stems of the Cannabis plant. Marijuana contains more than 400
chemicals, of which eighty have been identified as unique to Cannabis. These are known as cannabinoids. Of these, the
most pharmacologically active is delta-9-tetrahydrocannabinol (THC). The U.S. Food and Drug Administration (FDA)
has found THC to be safe and effective in treating nausea, vomiting, and wasting diseases such as anorexia.
Cannabinoids appear to play a key role in the body's pain mechanisms. According to the U.S. Society for Neuroscience,
evidence shows that "cannabinoids directly interfere with pain signaling in the nervous system." Medical researchers
have concluded that cannabinoids can be helpful in treating pain associated with chemotherapy, postoperative recovery,
and spinal cord injury, as well as neuropathic pain, which is often experienced by patients with metastatic cancer,
multiple sclerosis (MS), diabetes, and HIV/AIDS.
Cannabinoids are also effective in treating nausea and vomiting, which are common symptoms of many kinds of
cancers and are side effects of chemotherapy. By stimulating appetite, cannabinoids can be helpful in treating anorexia
and other wasting diseases, which can include chronic diarrhea, tuberculosis, and unexplained weight loss. Other
conditions for which marijuana may prove beneficial include epilepsy, muscle spasticity, Parkinson's disease,
glaucoma, and depression.
Organizations that have endorsed the use of marijuana for medical purposes include the American Public Health
Association, the American Academy of Family Physicians, and the American Medical Student Association. In 2009 the
American Medical Association reversed its previous policy against such use and urged the government to reconsider
marijuana's illicit status, with the goal of facilitating further clinical research into marijuana's medical benefits. Not all
physicians, however, agree that marijuana should be prescribed for medical conditions. Quoted in The Baltimore Sun,
oncologist Dr. Kevin Cullen, director of the University of Maryland Marlene and Stewart Greenebaum Cancer Center,
questioned the need to legalize marijuana and stated that he would probably never recommend the drug to cancer
patients because other medications are more effective.
Health Risks
Among arguments against the legalization of marijuana is that it contains carcinogens and may increase the risk of lung
cancer and other respiratory tract cancers. According to the Mayo Clinic, marijuana smoke contains between 50 percent
and 70 percent more carcinogens than does tobacco smoke. What is more, users inhale marijuana smoke more deeply
than tobacco smoke and hold it longer, increasing the lungs' exposure to these harmful substances. A British Lung
Foundation report in 2002 found that smoking three to four marijuana cigarettes per day is associated with "the same
degree of damage to the bronchial mucosa as 20 or more tobacco cigarettes a day." The study also stated that marijuana
use is likely to weaken the immune system.
Yet a study presented in 2006 found no evidence that marijuana smokers had higher rates of lung cancer. Study author
Dr. Donald P. Tashkin, quoted in The Washington Post, stated that there was "no association at all, and even a
suggestion of some protective effect" between marijuana use and lung cancer. In a 2006 report, Paul Armentano,
analyst at the National Organization for the Reform of Marijuana Laws, observed that though cannabis smoke contains
many carcinogens, it also contains cannabinoids that are non-carcinogenic and that "demonstrate anti-cancer
properties." And Harvard Medical School professor Lester Grinspoon, M.D., wrote in the Los Angeles Times that
"there is very little evidence that smoking marijuana as a means of taking it represents a significant health risk."
According to the National Institute on Drug Abuse (NIDA), long-term use of marijuana can lead to addiction. It is also
associated with increased rates of depression, anxiety, and schizophrenia. NIDA also reports that, though the link
between marijuana and lung cancer remains unproven, chronic cannabis smokers have other respiratory problems
found in tobacco smokers, including daily cough, phlegm, and increased risk of lung infections. Acknowledging
marijuana's therapeutic value in treating pain, nausea, and loss of appetite, NIDA does not endorse it as a medication,
because it contains many chemicals with unknown health effects and because it is usually consumed by smoking,
which is associated with health risks. NIDA states that a better course is to develop purified or synthetic drugs from the
cannabinoids in marijuana, which would deliver "more tailored medications with improved risk/benefit profiles."
Legal and Judicial History
The federal government classifies marijuana as a Schedule I substance which, by definition, has a high potential for
abuse and has no medical value. The first shift in the government's approach came in 1978, after glaucoma patient
Robert Randall sued the FDA and other government agencies following his criminal conviction for cultivating
cannabis. In Randall v. U.S. he argued that marijuana was a medical necessity for him, permissible under the common
law doctrine of necessity. The court agreed, and, following this decision, the FDA created the Investigational New
Drug (IND) compassionate access program, through which individuals could petition for legal access to marijuana for
medical purposes. The IND was closed to new patients in 1992, though as of 2010 there were seven patients still
receiving medical marijuana through the program.
In 1996 activists in California, led by Dennis Peron, organized a ballot proposition to legalize medical marijuana.
Proposition 215, also called the Compassionate Use Act of 1996, passed by 55.58 percent of the vote. The proposition,
as later expanded, allows patients to grow, possess, and collectively distribute marijuana for personal medical use, on
the recommendation of a licensed physician. Yet the federal government threatened to sanction or prosecute physicians
who recommended medical marijuana. A class-action suit, Conant v. McCaffrey, was brought in 1997 by a group of
physicians who alleged that these threats violated their constitutional rights. In its decision in 2002, the U.S. District
Court upheld the right of doctors to recommend marijuana to patients for medical reasons. The Conant decision allows
physicians to discuss medical marijuana use with patients, to record this information, and to recommend marijuana use.
It does not permit physicians to write prescriptions for marijuana or to help patients obtain it. Nor does it permit them
to recommend marijuana use without a medical reason.
The cause for legalizing medical marijuana experienced a major setback in 2005, when the U.S. Supreme Court ruled
in Gonzales v. Raich that the federal government has the right to ban the use of cannabis even in states with
compassionate use laws. The decision does not overturn states' compassionate use laws, but does permit the federal
government under some circumstances to prosecute medical marijuana users. The Obama administration, however, has
indicated its intention to disregard the ban in some cases. Attorney General Eric Holder stated in 2009 that "It will not
be a priority to use federal resources to prosecute patients with serious illnesses or their caregivers who are complying
with state laws on medical marijuana."
In January 2010, New Jersey became the fourteenth state in the country to legalize the use of marijuana for medical
purposes. The bill, which allows patients with serious illnesses such as cancer, AIDS, MS, and Lou Gehrig's disease to
obtain medical marijuana through state-monitored channels, passed by a margin of forty-eight to fourteen in the
General Assembly and twenty-five to thirteen in the State Senate. It is the most restrictive medical marijuana law in the
country, permitting doctors to prescribe marijuana only for specific chronic illnesses and forbidding patients to grow or
buy marijuana on their own. The bill also limits patients to no more than two ounces of marijuana per month. Reed
Gusciora, the assemblyman who sponsored the legislation, stated his belief that the bill "will become a model for other
states because it balances the compassionate use of medical marijuana while limiting the number of ailments that a
physician can prescribe it for." Besides California and New Jersey, other states that have legalized medical marijuana
are Alaska, Colorado, Hawaii, Maine, Michigan, Montana, Nevada, New Mexico, Oregon, Rhode Island, Vermont, and
Washington.
Popular Opinion
A large percentage of Americans approve of legalizing marijuana for medical use. According to an ABC
News/Washington Post poll, approval rates rose from 69 percent in 1997 to 81 percent in 2009. Though liberals are
more likely than conservatives to favor legalization, there is majority support for it at both ends of the political
spectrum. Some 68 percent of Americans who describe themselves as conservatives and 72 percent of Republicans
support legalization compared to 90 percent of liberal/moderates and 85 percent of Democrats.
Opponents, however, worry that legalizing medical marijuana will lead to abuses. Even stringent laws such as New
Jersey's, they argue, will increase the availability of the drug, making it more likely that teens will be tempted to try it.
Source Citation:
"Medical Marijuana." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing
Viewpoints In Context. Web. 10 Apr. 2012.
Medical Marijuana Should Remain Illegal
Marijuana should not be legalized for medicinal purposes. Marijuana smoke contains known carcinogens and produces
dependency in heavy users. In addition, medical marijuana causes intoxication in patients and should not under any
circumstances be smoked before driving or operating dangerous machinery. While pot promoters claim that marijuana is the
only drug that can alleviate suffering from cancer, AIDS, glaucoma, and other conditions, there are many pharmaceutical drugs
that have been approved to help patients with those illnesses. The problems and dangers associated with marijuana are too
great for the government to classify this drug as a Schedule II substance, which deems it useful as medicine. Marijuana must
remain a Schedule I drug, meaning it is highly addictive and lacking any medicinal value.
Editor's note: The following article is excerpted from a "friend of the court" legal brief, written by lawyers working for several
anti-drug groups, such as the Drug Free America Foundation. Although these groups were interested in the outcome, they were
not directly involved in the 2001 medical marijuana case before the Ninth Circuit Court of Appeals, the United States v. Oakland
Cannabis Buyers' Cooperative (OCBC). The OCBC is a group that provided medical marijuana to patients with a variety of
diseases. The case is still under consideration.
Throughout this brief we use the term "crude marijuana" to describe the illicit Schedule I drug that people abuse. The
drug is derived from the leaves and flowering tops of the Cannabis plant and is consumed in a variety of ways....
There is a strong governmental interest in prohibiting the distribution of crude marijuana as medicine. The federal
government strives to protect our citizens from unsafe, ineffective substances sold as "medicines" and from drug abuse,
drug addiction, and the abusive and criminal behaviors that marijuana and other illicit drugs often generate. The OCBC
[Oakland Cannabis Buyers' Cooperative] is distributing an unproven drug in disregard of the government's objective to
ensure the safety and efficacy of medicines....
[In order for marijuana to be sold as "medicine"] the drug must first be approved by the Food and Drug Administration
(the "FDA"). The federal Food, Drug, and Cosmetics Act, gives the federal government sole responsibility for
determining that drugs are safe and effective, a requirement all medicines must meet before they may be distributed to
the public. The FDA has not approved marijuana as safe or effective, so the drug may not legally be prescribed and
sold as a medicine.
Not only has the FDA failed to approve marijuana, but marijuana is a Schedule I controlled substance under the
Controlled Substances Act. Schedule I drugs have "1) a high potential for abuse, 2) no currently accepted treatment in
the United States, and 3) a lack of accepted safety for use of the drug ... under medical supervision."
In [the court case] Alliance for Cannabis Therapeutics v. DEA, the United States District Court for the District of
Columbia accepted the Drug Enforcement Administration's new five-part test for determining whether a drug is in
"currently accepted medical use." The test requires that:
1. The drug's chemistry must be known and reproducible;
2. there must be adequate safety studies;
3. there must be adequate and well-controlled studies proving efficacy;
4. the drug must be accepted by qualified experts; and
5. the scientific evidence must be widely available.
Applying these criteria to a petition to reschedule crude marijuana [to a Schedule II drug that would allow it to be used
as medicine], the court found that the drug had no currently accepted medical use and, therefore, had to remain in
Schedule I. Thus, the OCBC disregarded the FDA's statutorily prescribed mandate created to ensure drug safety and is
distributing an untested, unsafe Schedule I drug in violation of the Controlled Substances Act....
No Future as Medicine
Crude marijuana is derived from the leaves and flowering tops of the Cannabis plant. It contains some 400 chemicals,
most of which have not been studied by scientists. Some 60 of these chemicals, called cannabinoids, are unique to the
Cannabis plant. One cannabinoid, Delta-9-tetrahydrocannabinol (THC), was synthesized, tested, and approved by [the]
FDA in 1985 for treating nausea in cancer patients and wasting in AIDS patients. The drug's generic name is
dronabinol and its trade name is Marinol®. It is produced by Unimed Pharmaceuticals.
According to John Benson, Jr., M.D., of the Institute of Medicine, research on other cannabinoids is underway and
some of these chemicals may one day prove to be useful medicines. However, he states:
While we see a future in the development of chemically defined cannabinoid drugs, we see little future in smoked marijuana as
a medicine.
The fact that crude marijuana contains a chemical that has been synthesized, tested, and approved for medical use does
not make marijuana itself a safe or effective medicine. Modern pharmaceutical science would require all the 400 or
more chemicals in marijuana to pass the safety and efficacy tests in research, and this has not happened. Any
consideration of this issue must take into account the substantial toxicity and morbidity associated with marijuana use.
Because of the impurity of crude marijuana and its known toxic effects, it does not represent a useful medical
alternative to currently available medications. Furthermore, efforts to gain legal status of marijuana through ballot
initiatives seriously threaten the Food and Drug Administration process of proving safety and efficacy, and they create
an atmosphere of medicine by popular vote, rather than the rigorous scientific and medical process that all medicines
must undergo.
Before the development of modern pharmaceutical science, the field of medicine was fraught with potions and herbal
remedies. Many of those were absolutely useless, or conversely were harmful to unsuspecting subjects. Thus evolved
our current Food and Drug Administration and drug scheduling processes, which should not be undermined.
Having extensively reviewed available therapies for chemotherapy-associated nausea, glaucoma, multiple sclerosis, and
appetite stimulation, Drs. [E.A.] Voth and [R.A.] Schwartz have determined that no compelling need exists to make
crude marijuana available as a medicine for physicians to prescribe. They concluded that the most appropriate direction
for THC research is to research specific cannabinoids or synthetic analogs rather than pursuing the smoking of
marijuana.
The conclusions of Drs. Voth and Schwartz were echoed a year later by the National Academy of Sciences' Institute of
Medicine (hereinafter "IOM Report") in an assessment of the scientific research on marijuana and cannabinoids.
Available research on the utility of THC has demonstrated some effectiveness of the purified form of the drug in
treating nausea associated with cancer chemotherapy....
Legalization advocates would have the public and policy makers incorrectly believe that crude marijuana is the only
treatment alternative for masses of cancer sufferers who are going untreated for the nausea associated with
chemotherapy, and for all those who suffer from glaucoma, multiple sclerosis, and other ailments. Numerous effective
medications are, however, currently available for conditions such as nausea.
In fact, the IOM report found that neither smoked marijuana nor cannabinoids are as effective as current medicines
that stop nausea and vomiting in cancer chemotherapy patients. However, the scientists speculated that cannabinoids
might be effective in those few patients who respond poorly to current antiemetic (anti-nausea) drugs or more effective
in combination with current antiemetics. It recommended that research should be pursued for patients who do not
respond completely to current antiemetics and that a safe (non-smoking) delivery system for cannabinoids should be
developed.
The negative side effect profile for marijuana, even oral dronabinol (Marinol®), far exceeds most of the other effective
agents available. If there exist treatment failures of available medications in these patients, the use of marijuana would,
at minimum, demonstrate unpleasant side effects. In the studies performed to examine THC for chemotherapy-associated nausea, elderly patients could not tolerate the drug. Chronic, daily doses of the drug would be necessary to
treat many of the proposed medical conditions. This would unnecessarily expose the patients to the toxic effects....
In 1997 the White House Office of National Drug Control Policy commissioned the National Academy of Sciences
Institute of Medicine (IOM) to undertake an evaluation of the utility of marijuana and other cannabinoids for medicinal
applications. The study concluded that the challenge for future research will be to find cannabinoids which enhance
therapeutic benefits while minimizing side effects such as intoxication and dysphoria [depression]. Delivery systems
such as nasal sprays, metered dose inhalers, transdermal patches, and suppositories could be useful delivery systems for
isolated or synthetic cannabinoids. The future for medicinal applications of cannabinoids and whether cannabinoids are
equal or superior to existing medicines remains to be determined....
High Potential for Abuse
Marijuana adversely impacts concentration, motor coordination, and memory, factors that must be considered in any
discussion of providing this drug to patients suffering chronic diseases. The ability to perform complex tasks, such as
flying, is impaired even 24 hours after the acute intoxication phase. The association of marijuana use with trauma and
intoxicated motor vehicle operation is also well established. This is of central importance in an ambulatory
environment where patients may smoke marijuana and then drive automobiles. Recent evaluations of the effect of
marijuana on driving determined that ... "Under marijuana's influence, drivers have reduced capacity to avoid collisions
if confronted with the sudden need for evasive action." A ... study found that a BAC of .05 combined with moderate
marijuana produced a significant drop in the visual search frequency.
Despite arguments of the legalization advocates to the contrary, marijuana is a dependence-producing drug. Strangely,
in the course of the rescheduling hearings, petitioners admitted that "marijuana has a high potential for abuse and that
abuse of the marijuana plant may lead to severe psychological or physical dependence." These are points which they
now deny. However, this dependence and associated "addictive" behaviors have been well described in the marijuana
literature. Marijuana dependence consists of both a physical dependence (tolerance and subsequent withdrawal) and a
psychological dependence. Withdrawal from marijuana has been demonstrated in both animals and humans.
While the dependence-producing properties of marijuana are probably a minimal issue for chemotherapy-associated
nausea when medication is required sporadically, it is a major issue for the chronic daily use necessary for glaucoma,
AIDS wasting syndrome, and other alleged chronic applications.
The respiratory difficulties associated with marijuana use preclude the inhaled route of administration as a medicine.
Smoking marijuana is associated with higher concentrations of tar, carbon monoxide, and carcinogens than are found in
cigarette smoking. Marijuana adversely impairs some aspects of lung function and causes abnormalities in the
respiratory cell linings from large airways to the alveoli. Marijuana smoke causes inflammatory changes that are
similar to the effects of tobacco in the airways of young people. In addition to these cellular abnormalities and
consequences, contaminants of marijuana smoke are known to include various pathogenic bacteria and fungi. Those at
particular risk for the development of disease and infection when these substances are inhaled are those users with
impaired immunity.
One of the earliest findings in marijuana research was the effect on various immune functions, which is now evidenced
by an inability to fight herpes infections and the discovery of a blunted response to therapy for genital warts during
cannabis consumption. Abnormal immune function is, of course, the cornerstone of problems associated with AIDS.
The use of chronic THC in smoked form for AIDS wasting not only exposes the patient to unnecessary pathogens, but
also risks further immunosuppression....
A hallmark of the treatment for AIDS is avoidance of drug use, not extension or perpetuation of it. It should be clear
that marijuana exposes the user to substantial health risks. In chronic use, or use in populations at high risk for infection
and immune suppression, the risks are unacceptable....
In the interest of protecting seriously and terminally ill patients from unsafe and ineffective drugs, the safety and
efficacy process of the FDA cannot be bypassed. Crude marijuana, an impure and toxic substance, has no place in the
[practice of medicine]. It is no more reasonable to consider marijuana a medicine than it is to consider tobacco a
medicine.
Coupled with the medical risk to patients, serious regulatory questions arise that have not been adequately dealt with by
ballot initiatives. Those who propose medical uses, or who conduct research on the use of marijuana, have an ethical
responsibility not to expose their subjects to unnecessary risks. Under current guidelines, crude marijuana is not a
medicine, and allowing it as such would be a step backward to the times of potions and herbal remedies.
Source Citation:
Evans, David G., and John E. Lamp. "Medical Marijuana Should Remain Illegal." Legalizing Drugs. Ed. Stuart A. Kallen. San Diego:
Greenhaven Press, 2006. At Issue. Rpt. from "Amicus Curiae Brief: United States v. Oakland Cannabis Buyers' Cooperative."
www.nationalfamilies.org. 2001. Gale Opposing Viewpoints In Context. Web. 10 Apr. 2012.