Gale Opposing Viewpoints In Context


Media Violence

Merriam-Webster's Online Dictionary defines violence as the "exertion of physical force so as to injure or abuse."

Violence, be it domestic violence, criminal violence, youth gang violence, or even violence against animals, is generally viewed as a serious social problem. Violent acts are often punished as crimes, and protecting individuals from being victims of violence is considered a fundamental aim of civilized society.

Media violence refers to acts or depictions of violence found in the media or mass communications. These could include violent scenes presented in television shows and motion pictures, violent acts recorded and shown on television news, lyrics in popular songs describing and perhaps glorifying acts of violence, and violent activities simulated in electronic computer games. Through these types of media, people can vicariously experience violence without actually being a perpetrator or victim of a violent act. This raises the question of whether "exposure" to media violence is a cause of violence in real life. Many observers are especially concerned about the effects of media violence on children and adolescents. However, efforts to control or restrict violence in the media have also raised questions about censorship and whether government may unduly limit freedom of artistic expression.

Television Violence

Much of the study and controversy over media violence focuses on what is seen on television. There are several reasons for this. Television is pervasive: 98 percent of American households own at least one television set. Television shows can also be seen on cell phones and computers via the Internet. Television is also highly popular, especially among young people, who watch between twenty-two and twenty-eight hours a week, according to some studies. Finally, television often features violence. The 1998 National Television Violence Study concluded that 60 percent of television programs included violent acts. It also found that children’s television programs had twice as many violent acts as did other programs.

Viewing violent acts on television affects young people in several negative ways, according to the American Psychological Association (APA). Research by psychologist Albert Bandura and others has indicated that children learn behavior by watching and imitating others, and that the more violence children observe, the more aggressively and violently they act. Watching violence can also make viewers less concerned about the suffering of victims and decrease their sensitivity to violent acts. "Young children are especially vulnerable to the effects of observed violence," according to the APA. The APA is one of several prominent organizations concerned with children that have taken positions calling for the curbing of media violence. Others include the National Institute of Mental Health, the American Academy of Pediatrics, the National Parent Teachers Association, and the Office of the Surgeon General of the United States.

However, the thesis that television violence creates real-life violence for children has been challenged by some researchers. Psychology professor Jonathan Freedman argues that television violence is at best a minor factor in causing aggression and crime compared with such things as poverty, racial conflict, family dysfunctions, availability of guns, and drug abuse. "If you got rid of all the violence on television tomorrow, and no one ever watched violent television again, you would probably see no change in violent crime." Researcher Jib Fowles argues that watching violence within the context of a television show may actually reduce violence by causing "the harmless discharge of hostile feelings" in viewers. Watching violence, in this view, serves as a channel or safety valve for pent-up emotions both children and adults develop over the course of daily living.

Other Forms of Media Violence

Other forms of media violence that have raised concerns include violent motion pictures, violent ideas expressed in song lyrics, and violence in electronic games.

Violence in motion pictures has been an ongoing controversy. In the 1920s and 1930s many people were alarmed at the violent (and sexually risqué) scenes in the new medium. Fearing government regulation, the American motion picture industry instituted a program of self-censorship of controversial content from 1934 to 1968. The issue of violence in movies took center stage again in 1995, when Senator Robert Dole of Kansas launched his campaign for president with a speech attacking the entertainment industry for producing "films that revel in mindless violence." (Dole ultimately lost the 1996 election to Bill Clinton.) Parents and politicians have continued to criticize the ways violent films are marketed to youth.

Lyrics in popular music have also been attacked as glorifying violence. Many teens listen to forty hours or more of music a week. A significant number of hip-hop and other popular records feature lyrics about raping and killing women, shooting police officers, and other violent thoughts and acts. Social psychologists Craig A. Anderson and Nicholas L. Carnagey argue that although there has been relatively little research into the effects of repeated listening to such lyrics, there are "good theoretical and empirical reasons to expect effects of music lyrics on aggressive behavior to be similar to the well-studied effects of exposure to TV and movie violence."

Violence in electronic computer games has given rise to concerns similar to those expressed about television and other forms of media. Advances in computer technology have made many electronic games intensive sensory experiences. Many games feature combat and violence depicted in vivid detail. In addition, unlike television shows and other media, games are interactive—the player takes part in, and to some extent controls, the action. Thus, video game players can be said to engage in (virtual) violent behavior. Games in which the primary action is the shooting of monsters or other opponents are popular enough to have their own category (they are known as "first-person shooters"). A young man in Alabama in 2003 was charged with murder after shooting three men, including two police officers, a crime he attributed to the violent video game "Grand Theft Auto." But defenders of video games note that most video gamers do not develop problems with violence, and that levels of youth violence have decreased in recent years even as the popularity of electronic games has increased.

Ratings Systems

The various businesses that form the media industry have responded to concerns about how children are affected by media violence (and other controversial material) by creating ratings systems that advise whether certain programs, songs, or games are appropriate for young people. The Motion Picture Association of America (MPAA) has a ratings system (begun in 1968 and since modified) that rates motion pictures for violence and other material deemed potentially harmful for minors. It classifies some movies as off-limits to children (NC-17) or as requiring people under 17 to attend with an adult (R).

In the 1980s the Recording Industry Association of America (RIAA) created a Parental Advisory label for records with explicitly violent and/or obscene language. The label is supposed to help parents if they want to prevent their children from purchasing objectionable records, and some retailers refuse to stock records with the sticker, but the use of the label is voluntary. Further, as more and more music is purchased online through services such as iTunes, parental advisory stickers are less helpful. However, record labels will often release two versions of their music: one "clean" version that is suitable for all ages, and a regular version, which might contain violent content or profanity.

In response to critics, the companies that make electronic games established the Entertainment Software Rating Board (ESRB). The board examines the violence and other material in video games and determines which ones are acceptable for children and which are not. It bestows ratings of "E" for everyone, "T" for teens (not suitable for younger children), "M" for mature (not suitable for children and younger teens), and "AO" for adults only.

The television industry, at the instigation of Congress, created its own rating system in 1997. Its ratings include TV-G (general audiences), TV-PG (parental guidance suggested), TV-14 (unsuitable for people under 14 because of sexual or violent content), and TV-MA (for mature audiences only due to graphic violent or sexual content). The V-chip—an electronic device mandated by law for all television sets sold after 1999—enables parents to program their sets to detect and block programs that carry certain ratings.

Censorship vs. Free Expression

Some people argue that these various ratings systems do not go far enough in protecting children from media violence. They contend that the government should make laws criminalizing certain forms of media violence, or at least banning the sale of certain media items to minors. However, such laws may run afoul of the First Amendment of the United States Constitution, which protects freedom of speech and of the press. One area where censorship laws have been legally tested is violent video games. Several states, including California, Illinois, Michigan, and Louisiana, have passed laws banning the rental or sale of such games to children under 18. But in each case, federal judges have struck down these laws as violations of the First Amendment.

One such case involved Louisiana's law banning the sale or rental of video games to minors if the games depicted violence that was "patently offensive" to an "average person," or could be seen as appealing to a "morbid interest in violence." A federal judge ruled in 2006 that "depictions of violence are entitled to full constitutional protection," and that Louisiana's law was an "invasion of First Amendment rights" of the makers, sellers, and buyers of video games.

More recently, the Supreme Court considered whether the government can ban depictions of actual violence and cruelty to animals, such as videos of dog fighting. In United States v. Stevens, argued in 2009 and decided in 2010, the Court held that the First Amendment does not permit such a ban.

Source Citation:

"Media Violence." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

Media Violence May Cause Youth Violence

"The body of data has grown and grown and it leads to an unambiguous and virtually unanimous conclusion: media violence contributes to anxiety, desensitization, and increased aggression among children."

In the following viewpoint, excerpted from a speech she delivered to the Henry J. Kaiser Family Foundation, Hillary Rodham Clinton claims that pervasive media violence has a deleterious impact on the behavior of children. Clinton insists today's young viewers have hit a ceiling in terms of how much time they can spend absorbed in different types of media, and that violent content on television and the Internet, as well as in movies and video games, has been scientifically proven to increase aggression, anxiety, and desensitization. Therefore, in her opinion, media violence is a "silent epidemic" that encourages children to participate in a culture that condones aggression. Clinton is the U.S. Secretary of State and a former U.S. senator.

As you read, consider the following questions:

1. According to Clinton, how much television does the average child watch?

2. How are children in the classroom different now compared with young students decades ago, in Clinton's view?

3. How does Clinton describe the impact of video game violence on children?

You know, I started caring about the environment in which children are raised, including the media environment, before my daughter was born, but then I began to take it very personally and in our own ways, Bill [former President Bill Clinton] and I tried to implement some strategies, some rules, some regulations, but it wasn't quite as difficult 25 years ago as it is today. And although, I confess, I still wonder what my daughter's watching as an adult, you know, those days of being involved in a direct and personal way are certainly over in my parenting experience.

But it is probably the single most commonly mentioned issue to me by young parents, almost no matter where I go, when we start talking about raising children. We start talking about the challenges of parenting today, and all of a sudden people are exchanging their deep concerns about losing control over the raising of their own children, ceding the responsibility of inculcating values and behaviors to a multi-dimensional media marketplace that they have no control over and most of us don't even really understand because it is moving so fast we can't keep up with it. And so I've spent more than 30 years advocating for children and worrying about the impact of media. I've read a lot of the research over the years about the significant effects that media has on children. And I've talked and advocated about the need to harness the positive impacts that media can have for the good of raising, you know, healthy, productive children who can make their own way in this rather complicated world. And I've particularly advocated for trying to find ways to re-empower parents, to put them back in the driver's seat so they feel they are first knowledgeable and secondly in some sense helping to shape the influences that affect their children....

And parents who work long hours outside the home and single parents, whose time with their children is squeezed by economic pressures, are worried because they don't even know what their children are watching and listening to and playing. So what's a parent to do when at 2 o'clock in the afternoon, the children may be at home from school but the parents aren't home from work and they can turn on the TV and both on broadcast and cable stations see a lot of things which the parents wish they wouldn't or wish they were sitting there to try to mediate the meaning of for their children.

And probably one of the biggest complaints I've heard is about some of the video games, particularly Grand Theft Auto, which has so many demeaning messages about women and so encourages violent imagination and activities, and it scares parents. I mean, if your child, and in the case of the video games, it's still predominantly boys, but you know, they're playing a game that encourages them to have sex with prostitutes and then murder them, you know, that's kind of hard to digest and to figure out what to say, and even to understand how you can shield your particular child from a media environment where all their peers are doing this.

And it is also now the case that more and more, parents are asking, not only do I wonder about the content and what that's doing to my child's emotional psychological development, but what's the process doing? What's all this stimulation doing that is so hard to understand and keep track of?

So I think if we are going to make the health of children a priority, then we have to pay attention to the activities that children engage in every single day. And of course that includes exposure to and involvement with the media.

And I really commend Kaiser [Family Foundation] for this report. It paints a picture that I think will surprise a lot of parents across the nation. It reveals the enormous diet of media that children are consuming, and the sheer force of the data in this report demands that we better pay attention and take more effective action on behalf of our children.

Media Is Omnipresent

Generation M: Media in the Lives of 8 to 18 Year Olds shows us that media is omnipresent. It is, if not the most, certainly one of the most insistent, pervasive influences in a child's life. The study tells us, as you've heard, that on average kids between 8 and 18 are spending 6.5 hours a day absorbed in media. That adds up to 45 hours a week, which is more than a full-time job. Television alone occupies 3 to 4 hours a day of young people's time. And we all know that in most homes, media consumption isn't limited to the living room, as it was when many of us were growing up. In two-thirds of kids' bedrooms you'll find a TV; in one-half you will find a VCR and/or video game console.

We also know from today's study that the incorporation of different types of media into children's lives is growing.... In one quarter of the time kids are using media, they are using more than one form at once. So, yes, they are becoming masters at multi-tasking. We know that the amount of time children are spending using media has not increased since the last Kaiser study.

So, today's study suggests that kids are in fact hitting a ceiling in terms of how much time they can spend with media. But they are using media more intensively, experiencing more than one type at the same time. And this creates new challenges not only for parents but also for teachers. I had a veteran teacher say to me one time, I said, "What's the difference between teaching today and teaching 35 years ago when you started?" And she said, "Well, today even the youngest children come into the classroom and they have a mental remote controller in their heads. And if I don't capture their attention within the first seconds they change the channel. And it's very difficult to get them to focus on a single task that is frustrating or difficult for them to master because there's always the out that they have learned to expect from their daily interaction with media."

You know, no longer is something like the v-chip the "one stop shop" to protect kids, who can expose themselves to all the rest of this media at one time. And so parental responsibility is crucial but we also need to be sure that parents have the tools that they need to keep up with this multi-dimensional problem.

Of course the biggest technological challenge facing parents and children today is the Internet. And today's Kaiser Report goes a long way toward establishing how much media our children are consuming. And one thing we have known for a long time, which is made absolutely clear in this report, is that the content is overwhelmingly, astoundingly violent.

The Impact of Media Violence

In the last four decades, the government and the public health community have amassed an impressive body of evidence identifying the impact of media violence on children. Since 1969, when President [Lyndon] Johnson formed the National Commission on the Causes and Prevention of Violence, the body of data has grown and grown, and it leads to an unambiguous and virtually unanimous conclusion: media violence contributes to anxiety, desensitization, and increased aggression among children. When children are exposed to aggressive films, they behave more aggressively. And when no consequences are associated with the media aggression, children are even more likely to imitate the aggressive behavior.

Violent video games have similar effects. According to testimony by Craig Anderson [director of the Center for the Study of Violence at Iowa State University] before the Senate Commerce Committee in 2000, playing violent video games accounts for a 13 to 22 percent increase in teenagers' violent behavior.

Now we know about 92% of children and teenagers play some form of video games. And we know that nine out of ten of the top selling video games contain violence.

And so we know that left to their own devices, you have to keep upping the ante on violence because people do get desensitized and children are going to want more and more stimulation. And unfortunately in a free market like ours, what sells will become even more violent, and the companies will ratchet up the violence in order to increase ratings and sales figures. It is a little frustrating when we have this data that demonstrates there is a clear public health connection between exposure to violence and increased aggression that we have been as a society unable to come up with any adequate public health response.

There are other questions of the impact of the media on our children that we do not know, for example, we have a lot of questions about the effect of the Internet in our children's daily lives.

We know from today's study that in a typical day, 47 percent of children 8 to 18 will go online. And the Internet is a revolutionary tool that offers an infinite world of opportunity for children to learn about the world around them. But when unmonitored kids access the Internet, it can also be an instrument of enormous danger. Online, children are at greatly increased risk of exposure to pornography and identity theft, and of being exploited, if not abused or abducted, by strangers.

According to the Kaiser study, 70% of teens between 15 and 17 say they have accidentally come across pornography on the web, and 23 percent report that this happens often. More disturbing is that close to one-third of teens admit to lying about their age to access a website....

Standards and Values

Well this is a silent epidemic. We don't necessarily see the results immediately. Sometimes there's a direct correlation but most of the times it's aggregate, it's that desensitization over years and years and years. It's getting in your mind that it's okay to diss people because they're women or they're a different color or from a different place, that it's okay somehow to be part of a youth culture that defines itself as being very aggressive in protecting its turf. And we know that for many children, especially growing up in difficult circumstances, it's hard enough anyway. You know, they're trying to make it against the odds to begin with....

So I think we have to begin to be more aware of what our children are experiencing and do what we can to encourage media habits that allow kids to be kids, and that help them to grow up into healthy adults who someday will be in the position to worry about what comes next in the media universe because we have no idea what parents in ten, twenty, thirty years will be coping with. All we can do is to try to set some standards and values now and then fulfill them to the best of our ability.

Source Citation:

Clinton, Hillary Rodham. "Media Violence May Cause Youth Violence." Mass Media. Ed. William Dudley. San Diego: Greenhaven Press, 2005. Opposing Viewpoints. Rpt. from "Senator Clinton's Speech to Kaiser Family Foundation Upon Release of Generation M: Media in the Lives of Kids 8 to 18." 2005. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

Media Violence Does Not Cause Youth Violence

"Concerns about media and violence rest on several flawed, yet taken-for-granted assumptions about both media and violence."

Karen Sternheimer is a lecturer in the Department of Sociology at the University of Southern California. She is also the author of Kids These Days: Facts and Fictions About Today's Youth and It's Not the Media: The Truth About Pop Culture's Influence on Children.

In the following viewpoint, excerpted from It's Not the Media, Sternheimer proposes that the claimed connections between violence in mass media and youth violence rest on four flawed assumptions: that the increase in media violence is creating more violent youths, that children imitate media violence in deadly ways, that young viewers cannot distinguish media violence from real violence, and that research has proven the link between media violence and youth violence. She emphasizes the effects of nonmedia factors, such as poverty and levels of violence in communities.

As you read, consider the following questions:

1. How does the author support her claim that youth violence is in decline?

2. What is the main flaw of the "Bobo doll" study, in Sternheimer's view?

3. How does Sternheimer compare a scene with Wile E. Coyote and Road Runner, a scenario from Law & Order, and an incidence of gun violence at a party?

Media violence has become a scapegoat onto which we lay blame for a host of social problems. Sociologist Todd Gitlin describes how "the indiscriminate fear of television in particular displaces justifiable fears of actual dangers—dangers of which television ... provides some disturbing glimpses." Concerns about media and violence rest on several flawed, yet taken-for-granted assumptions about both media and violence. These beliefs appear to be obvious in emotional arguments about "protecting" children. So while these are not the only problems with blaming media, this [viewpoint] will address four central assumptions:

1. As media culture has expanded, children have become more violent.

2. Children are prone to imitate media violence with deadly results.

3. Real violence and media violence have the same meaning.

4. Research proves media violence is a major contributor to social problems.

As someone who has been accused of only challenging the media-violence connection because I am secretly funded by the entertainment industry (which I can assure you I am not), I can attest we are entering hostile and emotional territory. This [viewpoint] demonstrates where these assumptions come from and why they are misplaced.

Assumption #1: As Media Culture Has Expanded, Children Have Become More Violent

Our involvement with media culture has grown to the degree that media use has become an integral part of everyday life. There is so much content out there that we cannot know about or control, so we can never be fully sure what children may come in contact with. This fear of the unknown underscores the anxiety about harmful effects. Is violent media imagery, a small portion of a vast media culture, poisoning the minds and affecting the behavior of countless children, as an August 2001 Kansas City Star article warns? The fear seems real and echoes in newsprint across the country.

Perhaps an article in the Pittsburgh Post-Gazette comes closest to mirroring popular sentiment and exposing three fears that are indicative of anxiety about change. Titled "Media, Single Parents Blamed for Spurt in Teen Violence," the article combines anxieties about shifts in family structure and the expansion of media culture with adults' fear of youth by falsely stating that kids are now more violent at earlier and earlier ages. This certainly reflects a common perception, but its premise is fundamentally flawed: as media culture has expanded, young people have become less violent.

During the 1990s arrest rates for violent offenses (like murder, rape, and aggravated assault) among fifteen- to seventeen-year-olds fell steadily, just as they did for people fourteen and under. Those with the highest arrest rates, now and in the past, are adults. Fifteen- to seventeen-year-olds outdo adults only in burglary and theft, but these rates have been falling for the past twenty-five years. In fact, theft arrest rates for fifteen- to seventeen-year-olds have declined by 27 percent since 1976, and the rates for those fourteen and under have declined 41 percent, while the arrest rate for adults has increased. Yet we seldom hear public outcry about the declining morals of adults—this complaint is reserved for youth....

So why do we seem to think that kids are now more violent than ever? A Berkeley Media Studies Group report found that half of news stories about youth were about violence and that more than two-thirds of violence stories focused on youth. We think kids are committing the lion's share of violence because they comprise a large proportion of crime news. The reality is that adults commit most crime, but a much smaller percentage of these stories make news. The voices of reason that remind the public that youth crime decreased in the 1990s are often met with emotional anecdotes that draw attention away from dry statistics. A 2000 Discovery Channel "town meeting" called "Why Are We Violent" demonstrates this well. The program, described as a "wake-up call" for parents, warned that violence is everywhere, and their kids could be the next victims. Host Forrest Sawyer presented statistics indicating crime had dropped but downplayed them as only "part of the story." The bulk of the program relied on emotional accounts of experiences participants had with violence. There was no mention of violence committed by adults, the most likely perpetrators of violence against children. Kids serve as our scapegoat, blamed for threatening the rest of us, when, if anything, kids are more likely to be the victims of adult violence.

But how do we explain the young people who do commit violence? Can violent media help us here? Broad patterns of violence do not match media use as much as they mirror poverty rates. Take the city of Los Angeles, where I live, as an example. We see that violent crime rates are higher in lower-income areas relative to the population. The most dramatic example is demonstrated by homicide patterns. For example, the Seventy-Seventh Street division (near the flashpoint of the 1992 civil unrest) reported 12 percent of the city's homicides in 1999, yet comprised less than 5 percent of the city's total population. Conversely, the West Los Angeles area (which includes affluent neighborhoods such as Brentwood and Bel Air) reported less than 1 percent of the city's homicides but accounted for nearly 6 percent of the total population. If media culture were a major indicator, wouldn't the children of the wealthy, who have greater access to the Internet, video games, and other visual media, be at greater risk of becoming violent? The numbers don't bear this out, because violence patterns do not match media use.

Violence can be linked with a variety of issues, the most important one being poverty. Criminologist E. Britt Patterson examined dozens of studies of crime and poverty and found that communities with extreme poverty, a sense of bleakness, and neighborhood disorganization and disintegration were most likely to support higher levels of violence.

Violence may be an act committed by an individual, but it is also a sociological, not just an individual, phenomenon. To fear media violence we would have to believe that violence has its origins mostly in individual psychological functioning, and thus that any kid could snap from playing too many video games. Ongoing sociological research has identified other risk factors that are based on environment: poverty, substance use, overly authoritarian or lax parenting, delinquent peers, neighborhood violence, and weak ties to one's family or community. If we are really interested in confronting youth violence, these are the issues that must be addressed first. Media violence is something worth looking at, but it is not the primary cause of actual violence....

Assumption #2: Children Are Prone to Imitate Media Violence with Deadly Results

Blaming a perceived crime wave on media seems reasonable when we read examples in the news about eerie parallels between a real-life crime and entertainment. Natural Born Killers, The Basketball Diaries, South Park, and Jerry Springer have all been blamed for inspiring violence. Reporting on similarities from these movies does make for a dramatic story and good ratings, but too often journalists do not dig deep enough to tell us the context of the incident. By leaving out the non-media details, news reports make it easy for us to believe that the movies made them do it.

Albert Bandura's classic 1963 "Bobo doll" experiment initiated the belief that children will copy what they see in media. Bandura and colleagues studied ninety-six children approximately three to six years old (details about community or economic backgrounds were not mentioned). The children were divided into groups and watched various acts of "aggression" against a five-foot inflated "Bobo" doll. Surprise: when they had their chance, the kids who watched adults hit the doll pummeled it too, especially those who watched the cartoon version of the doll-beating. Although taken as proof that children will imitate aggressive models from film and television, this study is riddled with leaps in logic.

Parents are often concerned when they see their kids play fighting in the same style as the characters in cartoons. But as author Gerard Jones points out in Killing Monsters: Why Children Need Fantasy, Super Heroes, and Make-Believe Violence, imitative behavior in play is a way young people may work out pent-up hostility and aggression and feel powerful. The main problem with the Bobo doll study is fairly obvious: hitting an inanimate object is not necessarily an act of violence, nor is real life something that can be adequately recreated in a laboratory. In fairness, contemporary experiments have been a bit more complex than this one, using physiological measures like blinking and heart rate to measure effects. The only way to assess a cause-effect relationship with certainty is to conduct an experiment, but violence is too complex an issue to isolate into independent and dependent variables in a lab. What happens in a laboratory is by nature out of context, and real world application is highly questionable. We do learn about children's play from this study, but by focusing only on how they might become violent we lose a valuable part of the data....

Assumption #3: Real Violence and Media Violence Have the Same Meaning

Nestor Herrara's [an eleven-year-old boy who was killed by another eleven-year-old boy during a dispute in a movie theater in February 2001] accused killer watched a violent film; on that much we can agree. But what the film actually meant to the boy we cannot presume. Yet somehow press accounts would have us believe that we could read his mind based on his actions. It is a mistake to presume media representations of violence and real violence have the same meaning for audiences. Consider the following three scenarios:

1. Wile E. Coyote drops an anvil on Road Runner's head, who keeps on running;
2. A body is found on Law and Order (or your favorite police show);
3. A shooting at a party leaves one person dead and another near death after waiting thirty minutes for an ambulance.

Are all three situations examples of violence? Unlike the first two incidents, the third was real. All three incidents have vastly different contexts, and thus different meanings. The first two are fantasies in which no real injuries occurred, yet are more likely to be the subject of public concerns about violence. Ironically, because the third incident received no media attention, its details, and those of incidents like it, are all but ignored in discussions of violence. Also ignored is the context in which the real shooting occurred; it was sparked by gang rivalries which stem from neighborhood tensions, poverty, lack of opportunity, and racial inequality. The fear of media violence is founded on the assumption that young people do not recognize a difference between media violence and real violence. Ironically, adults themselves seem to have problems distinguishing between the two.

Media violence is frequently conflated with actual violence in public discourse, as one is used to explain the other. It is adults who seem to confuse the two. For instance, the Milwaukee Journal Sentinel reported on a local school district that created a program to deal with bullying. Yet media violence was a prominent part of the article, which failed to take into account the factors that create bullying situations in schools. Adults seem to have difficulty separating media representations from actual physical harm. Media violence is described as analogous to tobacco, a "smoking gun" endangering children. This is probably because many middle-class white adults who fear media have had little exposure to violence other than through media representations....

Assumption #4: Research Conclusively Demonstrates the Link Between Media and Violent Behavior

We engage in collective denial when we continually focus on the media as main sources of American violence. The frequency of news reports of research that allegedly demonstrates this connection helps us ignore the real social problems in the United States. Headlines imply that researchers have indeed found a preponderance of evidence to legitimately focus on media violence. Consider these headlines:

"Survey Connects Graphic TV Fare, Child Behavior" (Boston Globe)

"Cutting Back on Kids' TV Use May Reduce Aggressive Acts" (Denver Post)

"Doctors Link Kids' Violence to Media" (Arizona Republic)

"Study Ties Aggression to Violence in Games" (USA Today)

The media-violence connection seems very real, with studies and experts to verify the alleged danger in story after story. Too often studies reported in the popular press provide no critical scrutiny and fail to challenge conceptual problems. In our sound-bite society, news tends to contain very little analysis or criticism of any kind.

The Los Angeles Times ran a story called "In a Wired World, TV Still Has Grip on Kids." The article gave the reader the impression that research provided overwhelming evidence of negative media effects: only three sentences out of a thousand-plus words offered any refuting information. Just two quoted experts argued against the conventional wisdom, while six offered favorable comments. Several studies' claims drew no challenge, in spite of serious shortcomings.

For example, researchers considered responses to a "hostility questionnaire" or children's "aggressive" play as evidence that media violence can lead to real-life violence. But aggression is not the same as violence, although in some cases it may be a precursor to violence. Nor is it clear that these "effects" are anything but immediate. We know that aggression in itself is not necessarily a pathological condition; in fact we all have aggression that we need to learn to deal with.

Second, several of the studies use correlation statistics as proof of causation. Correlation indicates the existence of relationships, but cannot measure cause and effect. Reporters may not recognize this, but they have the responsibility to present the ideas of those who question such claims.

This pattern repeats in story after story. A Denver Post article described a 1999 study that claimed that limiting TV and video use reduced children's aggression. The story prefaced the report by stating that "numerous studies have indicated a connection between exposure to violence and aggressive behavior in children," thus making this new report appear part of a large body of convincing evidence. The only "challenge" to this study came from psychologist James Garbarino, who noted that the real causes of violence are complex, although his list of factors began with "television, video games, and movies." He did cite guns, child abuse, and economic inequality as important factors, but the story failed to address any of these other problems.

The reporter doesn't mention the study's other shortcomings. First is the assumption that the television and videos kids watch contain violence at all. The statement we hear all the time in various forms—"the typical American child will be exposed to 200,000 acts of violence on television by age eighteen"—is based on the estimated time kids spend watching television, but tells us nothing about what they have actually watched. Second, in these studies, aggression in play serves as a proxy for violence. But there is a big difference between playing "aggressively" and committing acts of violence. Author Gerard Jones points out that play is a powerful way by which kids can deal with feelings of fear.

Thus, watching the Power Rangers and then play-fighting is not necessarily an indicator of violence; it is part of how children fantasize about being powerful without ever intending to harm anyone. Finally, the researchers presumed that reducing television and video use explained changes in behavior, when in fact aggression and violence are complex responses to specific circumstances created by a variety of environmental factors. Nonetheless, the study's author stated that "if you ... reduce their exposure to media you'll see a reduction in aggressive behavior."...

Source Citation:

Sternheimer, Karen. "Media Violence Does Not Cause Youth Violence." It's Not the Media: The Truth About Pop Culture's Influence on Children. Cambridge, MA: Westview Press, 2003. Rpt. in Mass Media. Ed. William Dudley. San Diego: Greenhaven Press, 2005. Opposing Viewpoints. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

National Service

National service is a period of government service that certain citizens are required to perform. For example, during World War II, millions of adult male Americans were drafted to serve in the US military. While national service has not been used in the United States since the Vietnam War, it remains an important part of the military makeup of some European nations and is considered a vital strategy for Israel's national defense. Still, the debate over national service versus volunteerism cuts to the heart of what it means to be free, as well as the true costs of maintaining a free society.

Global Examples of National Service

The notion of national service is an old one, and in ancient cultures the differences between national service, indentured servitude, and slavery become hazy. In modern times, national service usually refers to military service, also known as conscription. Conscription during wartime remains fairly common, though nations are increasingly turning to a voluntary or career enlistment model for military service. While the United States has not instituted a military draft for nearly forty years, the Selective Service Act still allows the government to launch a draft if necessary. American males are required by law to register for Selective Service upon turning eighteen, and the federal government keeps this information on file.

A few European countries retain national service to provide for their common defense. Switzerland, for example, requires all healthy male citizens to train and serve in the country's military beginning at age twenty. The service can last between five and ten months, with a small amount of reserve duty each year thereafter, up until age fifty. Adult male citizens are also each given a firearm to keep in their home, in case they must be called to active duty without notice. Although Switzerland famously remained neutral during World War II, its well-trained civilian military force may be one reason it was one of the few European nations that did not fall under Axis control despite being located directly between Germany and Italy.

Israel is often considered the prime modern example of national service. When the country was first recognized in 1948, its leaders were acutely aware of the need for a strong national defense. Israel, identified as a Jewish state, is surrounded by countries dominated by Muslim Arabs. Even before Israel officially existed, the Arab nations of the region threatened to destroy any Jewish state formed on former Palestinian lands. This led Israel to create the Israel Defense Forces (IDF), a highly trained army that maximizes its use of able-bodied Israeli citizens. The IDF requires adult male citizens to serve three years in active duty. The IDF is also the only national military service that requires women to serve, though their required term is only two years. After completing active service, Israeli citizens serve as reserve forces for an additional twenty to twenty-five years, and may be called to active duty at any time.

For individuals who do not wish to engage in combat duty, many countries offer other options for national service. Swiss citizens can instead serve in the Civilian Service, which offers opportunities in health care, agriculture, and environmental services. Those serving in the Civilian Service must remain on duty for nearly a full year instead of the five months served by military personnel. In Israel, citizens who object to military service can only be excused under certain circumstances. If the citizen is an Arab, he or she is not required to serve since the main antagonists of the IDF are Arabs. Citizens with ultra-conservative religious views may also be excused from service. Those who object for philosophical reasons, such as disagreement with government policy or a broad opposition to war, are often placed in jail instead of being excused.

National Service in the United States?

In the United States, some politicians have floated the idea of national service as a way to fund state-sponsored education. School districts in several states require high school students to serve a certain number of volunteer hours performing community service before they are allowed to graduate. President Barack Obama has publicly stated his desire to promote and expand public service programs, which some critics have interpreted as a plan to impose national service on US citizens.

The main objection to national service in the United States is that it violates a citizen’s basic freedom by forcing that person to do something. There are no provisions in the US Constitution for national service, and in fact the Constitution does not even allow for the creation of a permanent military force. The very idea of “drafting” soldiers to fight was objectionable to early American lawmakers; during the War of 1812, the US Congress defeated a bill that would have forced American men to fight in the war. Instead, the war was fought largely by voluntary state militias.

Rather than force national service on American citizens, the federal government instead sponsors many voluntary service organizations such as the Peace Corps. Since 1985, the Peace Corps has grown steadily in its number of active volunteers. Increased funding has put the program on track to support at least eleven thousand active volunteers by 2015. Other planned incentives for volunteer service include offering paid college tuition after the completion of service. The United States also relies upon a voluntary paid military force. With nearly 1.5 million active personnel, the American military is second only to China's in size. Unlike in some other nations, forced military service is not necessary to keep the United States protected.

The main argument in favor of national service is that each citizen has a duty to help look after the common good of their country. Many Americans would likely agree with this sentiment, and according to a 2002 poll quoted in TIME, 70 percent of Americans favor a greater push for all citizens to volunteer for the betterment of society. But it would seem that, to many Americans, the gap between volunteerism and forced service is the place in which the notion of freedom resolutely resides.

Source Citation:

"National Service." Opposing Viewpoints Online Collection. Gale, Cengage Learning, 2010. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

National Service Addresses America's Social Problems

"Faith-based and community volunteers are not only effective but they are an essential element of our nation's response to critical challenges we face at home and abroad."

In the following viewpoint, David L. Caprara claims that national service addresses pressing American social problems. National service organizations such as AmeriCorps and other grassroots and faith-based groups have proven success fighting poverty, gang violence, environmental degradation, and other social problems, he maintains. For example, Caprara asserts, although studies have shown that the children of prisoners often end up in prison themselves, the Amachi mentoring program has significantly reduced that number. Volunteer service groups are closest to the problems that their communities face and are therefore better able to identify effective solutions, he reasons. Caprara, whose remarks were made before the US House Committee on Education and Labor, directs the Initiative on International Volunteering and Service at the Brookings Institution.

As you read, consider the following questions:

1. Through what type of groups do most Americans volunteer, according to Caprara?
2. What does the author claim has been one of the most effective gang intervention programs in the nation?
3. What example does the author give to support his belief that faith-based organizations are more nimble and innovative than governmental bureaucratic bodies?

I am pleased to speak about the powerful work of volunteers serving through faith-based and community organizations and the positive impacts they are having on our nation's most challenging social issues. I commend you for recognizing the potential of these dedicated volunteers.

I also applaud President Barack Obama for his signal leadership in making the cause of service a centerpiece of his presidency. His call to a new generation to give national and even global leadership in service to others has the potential to become a defining legacy of this administration.

Addressing Social Difficulties

Expanding partnerships with neighborhood mediating institutions has proven to be an effective path in addressing many of the social difficulties we face as a country.

During my service at the Corporation for National and Community Service [CNCS], I was tasked with leveling the playing field and advancing innovative service programs—VISTA [Volunteers in Service to America], AmeriCorps, Senior Corps, and Learn and Serve America. I often considered the insightful words of one of my mentors, Robert Woodson, founder and president of the Center for Neighborhood Enterprise, and author of the landmark book, Triumphs of Joseph.

Woodson, who has been frequently called to testify about grassroots community remedies by Congress and our nation's governors, told me that faith-based initiatives are not about promoting a particular faith, but rather, advancing secular outcomes that faith-based and other grassroots groups are uniquely positioned to effect. He notes that not only are these groups generally the closest to the problems in a community, they are the ones most often trusted by residents, particularly in times of need like our present economic crisis.

Volunteer efforts brought to bear by faith-based groups, since Tocqueville [1] first noted our nation's founding charitable traditions and social capital in the nineteenth century, have been immensely important throughout American history. In fact, according to Bureau of Labor Statistics [BLS] data, more Americans volunteer through religious groups than any other kind of organization.

Successful Faith-Based Models

A CNCS Research and Policy Development report, entitled "Volunteer Management Capacity in America's Charities and Congregations," found that volunteers can boost both the quality of services and delivery capabilities in charities and congregations while reducing costs.

We could cite many examples of successful faith-based models, such as the Latino Pastoral Action Center of Rev. Ray Rivera in the Bronx, which has made great use of AmeriCorps volunteers in building community capacity. Southeast Idaho's Retired and Senior Volunteer Initiative and the Columbus, Ohio, based Economic and Community Development Institute serving Muslim refugees from Somalia and Ethiopia, as well as Jewish and Pentecostal Christian refugees from the former Soviet Union, provide other models.

At the Corporation, we teamed up with the HHS [Department of Health and Human Services] Administration for Children and Families in leveraging volunteer expertise with family strengthening, fatherhood and healthy marriage programs, and economic asset development with groups like People for People, founded by Rev. Herb Lusk, the former Philadelphia Eagles "praying running back." Bishop Joseph Henderson converted a former juvenile detention facility into the Bragg Hill Family Life Center in Fredericksburg, Virginia, supported by Doris Buffett's Sunshine Lady Foundation. The Potters House of Bishop TD Jakes in Dallas launched a nationwide initiative promoting responsible fatherhood and ex-offender reentry with faith-based volunteers and new media technology.

Mentoring Children of Prisoners

I would like to touch more deeply upon two innovative program models—one, the Amachi initiative, which utilizes CNCS volunteer resources, and another, the Violence Free Zone Initiative, engaging former gang members and other forms of indigenous community volunteer capacity.

Researchers at the Cambridge University Institute of Criminology have shown that children of prisoners are far more likely to become involved in crime in the future than children from other backgrounds. The Amachi program, founded by former Philadelphia Mayor Rev. Wilson Goode, provides this vulnerable cohort of young people with caring adult mentors who help guide them to success in life, avoiding a pathway to incarceration, which statistics show would too often be the case without such intervention.

Amachi, whose name is an African word meaning "who knows what God will bring forth from this child," began training faith-based organizations to play a key role in scaling up the program after its founding in Philadelphia in 2003, with the support of Big Brothers Big Sisters and area congregations. To date the initiative has enrolled 3,000 congregations as partners mentoring more than 100,000 children across America.

The Amachi mentoring model, supported by AmeriCorps members who assist recruitment of community volunteers and form congregational mentoring hubs, has proven so effective that it was adopted by the Department of Health and Human Services as the basis of the federal Mentoring Children of Prisoners program. At the Corporation for National and Community Service, it was our great honor to support Dr. Goode in helping to replicate the Amachi success with the help of Senior Corps, AmeriCorps, and VISTA volunteers nationwide. We then expanded that effective approach with a new initiative of VISTA and DOJ [Department of Justice] programs that built mentoring and support hubs with faith-based and community volunteers who share their love and practical transition support for ex-offenders coming home.

Promoting Violence-Free Zones

Robert Woodson's Center for Neighborhood Enterprise [CNE] has developed one of the most effective gang intervention programs in our country, by tapping indigenous community healing agents and volunteers from within crime-ridden neighborhoods. The Center reaches out to former gang members who have been transformed by faith, and connects them with other adjudicated and at-risk youths in high-crime schools and community centers.

In 1997, CNE stepped in after Darryl Hall, a twelve-year-old District boy, was shot and killed in a senseless gang war between the "Circle" and "Avenue" crews and others that had already left fifty young people dead in southeast Washington, DC. In partnership with the Alliance of Concerned Men, many of whom were ex-offenders themselves, CNE negotiated a truce and helped the young people involved gain skills and find jobs as an alternative to drug dealing and crime. Those young people were then engaged as ambassadors of peace in their neighborhoods, motivating other youths toward positive attitudes and behaviors. In the ten years since the intervention began, crew-related homicides in the area have been eliminated.

Today CNE is expanding the reach of Violence Free Zones [VFZ] to cities across the country including Chicago, where a major spike in gang violence threatens to cut short the lives of our young people and their fellow neighborhood residents.

Evidence of Success

Baylor University researchers, whom Woodson recently cited in testimony before the House Judiciary Committee, documented the impact of the VFZ intervention model in six Milwaukee public schools, where violent incidents were reduced by 32%. Suspension rates were also dramatically reduced, and student grade point averages rose compared to the control sites.

Dramatic decreases of violent incidents where CNE grassroots leaders intervened were also reported in Baltimore, Dallas, Atlanta, and Washington, DC.

Congress, the administration, and private foundations would be well served to advance dynamic linkages and partnerships with such effective grassroots, faith-based programs together with the volunteer power of the Corporation for National and Community Service and programs at the Departments of Education, Labor, and Justice. Attorney General Eric Holder could be a natural leader for such a cross-sector effort. He has been a strong proponent of Violence Free Zones since their inception during his prior tenure at Justice.

I believe these initiatives represent "low-hanging fruit" if the new White House Council on Faith-Based and Community Partnerships wants to scale up such results-oriented models with expanded private-sector and public support.

In addition to their unique quality of being deeply embedded in communities, faith-based organizations are usually much more nimble and innovative than governmental bureaucratic bodies. Take for instance the response to Hurricane Katrina. Groups like Lutheran Disaster Response, Islamic Relief USA, and the Points of Light and Interfaith Works Faith and Service Institute, directed by Rev. Mark Farr and Eric Schwarz, were able to mobilize quickly. They and countless other faith-based groups galvanized congregations, synagogues and mosques into action with donations and volunteer "boots on the ground" to help families recover, while FEMA [Federal Emergency Management Agency] and other agencies famously struggled to respond.

International Volunteering

Our nation's volunteers have also made great headway in promoting global solutions. Freedom from Terror polls have noted a marked drop in support for violent terrorism and a dramatic increase in positive views toward the United States in populous Muslim nations like Indonesia, Bangladesh, and Pakistan following our national and volunteer responses to the tsunami and earthquake disasters, responses that were sustained beyond the initial period of aid.

According to a BLS assessment report by researchers with Washington University's Center for Social Development, approximately 52% of global volunteers from America said their main volunteering organization was a religious one.

The [Initiative on] International Volunteering [and Service] at the Brookings Institution, launched at a forum with General Colin Powell nearly three years ago, has achieved solid gains in doubling a cohort from 50,000 to 100,000 international volunteers through the Building Bridges Coalition, comprised of more than 180 US-based international service NGOs [nongovernmental organizations], faith-based groups, universities and corporations.

Together with the national policy leadership, ... the Brookings volunteering team crafted a design for a new Global Service Fellowship initiative that would empower tens of thousands of new international service volunteers supported with modest stipends that could be redeemed by NGO and faith-based entities registered with the State Department.

Global Service Fellowship legislation patterned after our research has attracted broad bipartisan support.... Our team also helped to craft the Service Nation global volunteering platform, which calls for doubling the Peace Corps, enacting Global Service Fellowships, and authorizing Volunteers for Prosperity at USAID.

In the past year my travels have included visits to hot spots in Israel and Palestine, Kenya, the Philippines, Brazil, and other nations supporting ongoing Global Peace Festival initiatives on each continent. Through these efforts I have witnessed firsthand the tremendous power of interfaith partnerships and volunteering to heal conflicts across tribal and religious divides. Upcoming Global Peace Festival initiatives in Mindanao, Jakarta, and other cities, including an International Young Leaders Summit in Nairobi ..., hold particular promise. Over 120 global leaders, NGOs, and faith-based groups have supported the call for a new Global Service Alliance in these endeavors. Such a "global peace corps" will build a vital link between volunteering and global development to impact peace-building outcomes.

In conclusion, faith-based and community volunteers are not only effective but they are an essential element of our nation's response to critical challenges we face at home and abroad. Now is the time for our national leaders and the private sector to tap into their full potential in light of our massive challenges ahead.

We have only begun to scratch the surface of the incredible wisdom and resources of transformative hope, embodied in today's grassroots "Josephs."

I hope we can rally across party lines with this president to connect and support these groups in a force for good, as proven allies in the fight against poverty and disease, gang violence, environmental degradation and global conflict and disasters. Such an alliance would show the world the full potential of America's best diplomats, our volunteers.

I would like to close by quoting Dr. [Martin Luther] King's words that my former mentor and boss Jack Kemp, the distinguished former House member and President Bush 41's HUD [US Department of Housing and Urban Development] Secretary, often cited in his testimony:

"I don't know what the future holds, but I know who holds the future."

1. Alexis de Tocqueville, a French philosopher and historian, penned Democracy in America after he traveled throughout the United States in 1831.

Source Citation:

Caprara, David L. "National Service Addresses America's Social Problems." National Service. Ed. Louise Gerdes. Detroit: Greenhaven Press, 2011. Opposing Viewpoints. Rpt. from "Renewing America Through National Service and Volunteerism." 2009. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

National Service Does Not Address Social Problems

"[AmeriCorps] has never provided credible evidence of benefit to the United States."

In the following viewpoint, conservative author James Bovard argues that national service programs such as AmeriCorps are political, feel-good programs with little real impact on social problems. In fact, he claims, in 2003 the Office of Management and Budget found that AmeriCorps had not demonstrated any measurable results. Indeed, Bovard asserts, national service programs measure the number who serve and the amount of time served, not the actual impact of service on the community. Thus, he reasons, using taxpayer money to pay volunteers to meet needs that politicians did not believe needed direct government intervention is unsound. Bovard is author of Attention Deficit Democracy and Lost Rights.

As you read, consider the following questions:

1. How many American tax dollars has AmeriCorps consumed since its creation in 1993, according to Bovard?
2. What did Leslie Lenkowsky concede about AmeriCorps after he resigned in 2003?
3. In the author's view, why are the legions of needs identified by national service advocates currently unmet?

National service is one of the hottest causes of presidential candidates [referring to candidates in the 2008 US presidential race]. Both Barack Obama and John McCain are gung ho for expanding AmeriCorps to hire a quarter million people to perform federally orchestrated good deeds. Former presidential candidate Senator Chris Dodd wanted to make community service mandatory for high school students and boost AmeriCorps to a million members. John Edwards also favored making national service mandatory.

But does America have a shortage of government workers?

Putting a Smiley Face on Uncle Sam

AmeriCorps is the epitome of contemporary federal good intentions. AmeriCorps, which currently has roughly 75,000 paid recruits, has been very popular in Washington in part because it puts a smiley face on Uncle Sam at a time when many government policies are deeply unpopular.

AmeriCorps has consumed more than $4 billion in tax dollars since its creation in 1993. During the [Bill] Clinton administration, AmeriCorps members helped run a program in Buffalo that gave children $5 for each toy gun they brought in—as well as a certificate praising their decision not to play with toy guns. In San Diego, AmeriCorps members busied themselves collecting used bras and panties for a homeless shelter. In Los Angeles, AmeriCorps members busied themselves foisting unreliable ultra-low-flush toilets on poor people. In New Jersey, AmeriCorps members enticed middle-class families to accept subsidized federal health insurance for their children.

President George W. Bush was a vigorous supporter of AmeriCorps in his 2000 campaign, and many Republicans expected that his team would make the program a pride to the nation. But the program is still an administrative train wreck. In 2002, it illegally spent $64 million more than Congress appropriated—and yet was rewarded with a higher budget.

Bush's first AmeriCorps chief, Leslie Lenkowsky, started out as a visionary idealist who promised great things from the federal program. But, when he resigned in 2003, Lenkowsky conceded that AmeriCorps is just "another cumbersome, unpredictable government bureaucracy."

No Credible Evidence

Though AmeriCorps abounds in "feel-good" projects, it has never provided credible evidence of benefit to the United States. Instead, it relies on Soviet bloc-style accounting—merely counting labor inputs and pretending that the raw numbers prove grandiose achievements. The Office of Management and Budget concluded in 2003 that "AmeriCorps has not been able to demonstrate results. Its current focus is on the amount of time a person serves, as opposed to the impact on the community or participants." The General Accounting Office [GAO] noted that AmeriCorps "generally reports the results of its programs and activities by quantifying the amount of services AmeriCorps participants perform." GAO criticized AmeriCorps for failing to make any effort to measure the actual effect of its members' actions.

Most AmeriCorps success claims have no more credibility than a political campaign speech. The vast majority of AmeriCorps programs are "self evaluated": The only evidence AmeriCorps possesses of what a program achieved is what the grant recipients claim. One of the agency's consultants encouraged AmeriCorps programs to inflate the number of claimed beneficiaries: "If you feel your program affects a broad group of individuals who may not be receiving personal services from members ... then list the whole community."

The advocates of a vast national service program assume that there are legions of unmet needs that the new government workers could fill. But such needs are currently unmet either because politicians have not considered them part of government's obligation or because meeting them is not considered worth the cost to taxpayers. There are hundreds of thousands of government agencies across the land, counting federal, state, and local governments. There are already more than 20 million people working for government in this country. Yet national service advocates talk as if the public sector is starved of resources.

More Profitable for Politicians than for Citizens

National service programs are more profitable for politicians than for citizens. USA Today noted in 1998 that AmeriCorps's "T-shirted brigade is most well known nationally as the youthful backdrop for White House photo ops." President Bush politically exploited AmeriCorps members almost as often as did Clinton.

Some congressmen also profiteer off AmeriCorps's image. After some congressmen showed up one day in March 2004 to hammer some nails at a Habitat for Humanity house-building project in Washington, AmeriCorps issued a press release hyping their participation in the good deed. The press release named eight members of Congress and noted, "Working alongside the elected officials were two dozen AmeriCorps members from the D.C. chapter of Habitat for Humanity and AmeriCorps." The home they helped build was to be given to a single mother of three. Photos from the appearance could add flourishes to newsletters to constituents or for reelection campaigns. Congressmen also benefit when they announce AmeriCorps grants to organizations in their districts.

Some national service advocates insist that AmeriCorps's failings should not be held against proposals to expand the federal role in service because their preferred program would leave it up to communities to decide how to use the new "volunteers."

But if programs are not centrally controlled, local "initiatives" will soon transform them into a national laughingstock. This happened with CETA [the Comprehensive Employment and Training Act], a make-work program that was expanded to its doom under President [Jimmy] Carter. CETA bankrolled such job-creating activities as building an artificial rock in Oregon for rock climbers to practice on, conducting a nude sculpture class in Miami where aspiring artists practiced Braille reading on each other, and sending CETA workers door-to-door in Florida to recruit people for food stamps.

More than 60 million Americans work as unpaid volunteers each year. Even if AmeriCorps were expanded to a quarter million recruits, it would amount to less than one-half of one percent of the total number of people who donate their time for what they consider good causes. And there is no reason to assume that paying "volunteers" multiplies productivity.

Rather than expanding national service programs, Congress should pull the plug on AmeriCorps. At a time of soaring deficits, the federal government can no longer afford to spend half a billion dollars a year on a bogus volunteer program whose results have been AWOL since the last century.

Source Citation:

Bovard, James. "National Service Does Not Address Social Problems." National Service. Ed. Louise Gerdes. Detroit: Greenhaven Press, 2011. Opposing Viewpoints. Rpt. from "The National Service Illusion." Ripon Forum 42 (Apr.-May 2008): 42-44. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

Nutrition

Nutrition provides the essential sustenance for all life forms. In human beings, it refers to the process of absorbing nutrients from food and processing them in the body in order to grow or maintain health. The word also denotes the science that deals with foods and their effects on health.

The Human Diet

Humans are omnivores, which means they eat both plants and animals. Some people, however, choose not to eat meat for religious, health-related, or political reasons. The first human populations ate meat, fish, vegetables, fruit, roots, and nuts. The emergence of agricultural practices around 9500 B.C.E., centered in present-day Iraq, radically altered human history. The successful cultivation of early crops such as flax, barley, lentils, and chickpeas is deemed by anthropologists the most significant factor in the development of stable communities that grew into complex civilizations.

Various flours milled from those first grains, mixed with water and subjected to heating, became the earliest bread products in the world. The keeping and breeding of livestock animals is thought to have originated at roughly the same period of human history as agriculture, and also in the region known as the Fertile Crescent. This area stretched from the eastern Mediterranean Sea to the confluence of the Tigris and Euphrates rivers in Iraq. Cows, goats, and sheep proved to be particularly worthy additions to a household’s possessions, for they could be fed on a diet consisting entirely of dry grass, which was abundant, and they provided milk on a daily basis.

The first controlled scientific experiment that proved the link between diet and health was carried out in 1747 by James Lind, a physician with the British navy. Sailors on long sea voyages were often plagued by scurvy, a disease that caused bleeding from mucous membranes, fatigue, and, in some cases, tooth loss. It was known that citrus fruits had some connection to scurvy, but during Lind’s era, vitamin C and other essential nutrients found in food had not been discovered yet. Lind conducted the first known clinical trial, and the scurvy-afflicted sailors who were given lime juice showed marked improvement. It took another four decades, however, before sailors were given regular amounts of lemon juice to eradicate the disease.

A Revolution in Eating Habits

In the centuries that followed the emergence of agriculture, the human diet remained relatively unchanged. It varied widely by region, of course, according to the local resources, and food preparation eventually moved from being a ritualistic event—done to ensure continued provisions and guarantee food purity—to a common household chore. In some privileged, elite circles, cooking became an art form, but food and nutrition did not undergo any significant changes in the West until the Industrial Revolution. New methods of preserving and processing foods were invented during the nineteenth century that resulted in an entirely different type of revolution—one found to have immensely negative consequences for human health and well-being.

The manufacturing process for most packaged foods usually involves the application of heat, which often destroys vital nutrients. As a result, manufacturers add these nutrients back into their products and label them as "enriched" or "fortified." Flour is one example. Flour milled from wheat is the most common type of flour, but consumers found its yellowish color unattractive. The color is the result of xanthophyll, a pigment that is also found in some tree leaves that turn color in the fall. A natural bleaching process occurs in flour over several days, but leaving large amounts of flour sitting out is dangerous because flour is highly flammable, and doing so was not a cost-effective strategy for flour manufacturers. A method of speeding up the process was developed that turns flour a white hue, but this also removes most nutrients. So manufacturers add thiamin, riboflavin, and niacin to the flour and label it "enriched."

Trans Fats and High-Fructose Corn Syrup

Trans fats are another hotly debated issue in nutritional science. Trans fats are certain kinds of unsaturated fats, either monounsaturated or polyunsaturated, terms that refer to the number of double bonds in the fatty acid chain. A few trans fats occur naturally, but the majority are created industrially by partial hydrogenation, the addition of hydrogen atoms to plant oils. Crisco, a brand of shortening introduced in 1911, was the first consumer product made by this process.

Hydrogenation gives fats a higher melting point and also extends their shelf life by significantly delaying the point at which they become rancid. Most processed foods contain trans fats because of the presence of partially hydrogenated fats. Trans fats have no nutritional value and have been linked to coronary heart disease. They raise levels of the unwanted type of cholesterol in the blood known as low-density lipoprotein, or LDL. This type of cholesterol aids in the formation of plaque on artery walls, which causes heart attacks, strokes, and certain types of vascular disease.

High-fructose corn syrup is another controversial food ingredient. This synthetic sweetener is found in nearly all processed foods sold in the United States. Coca-Cola, fruit-infused yogurt products, and even bottled salad dressings and canned soups contain high-fructose corn syrup. It came into use in the 1970s and has been blamed by some nutrition experts for the rising rate of obesity among Americans since 1980. It was cheaper to use than other forms of refined sugar, in part because of import taxes and tariffs on cane sugar. High-fructose corn syrup is made from corn, which grows abundantly in the American Midwest. Agribusiness companies have consistently lobbied Congress to continue the taxes and tariffs that make imported sugar more costly for U.S. manufacturers than the cheaper, more easily transported high-fructose corn syrup.

Fructose is the sweetest of naturally occurring sugars. Some research findings suggest that fructose is metabolized—or converted into a usable form of energy by the body’s chemistry—differently than other naturally occurring sugars. Most sugars trigger the production of hormones that regulate appetite and manage fat storage, but studies show that fructose does not. Consuming products made with high-fructose corn syrup also appears to be linked to insulin resistance, which can trigger type 2 diabetes. According to the U.S. Centers for Disease Control and Prevention, more than 23 million people had diabetes in 2007. In adults, type 2 diabetes accounts for 90 to 95 percent of all diagnosed cases of diabetes. In 2006 diabetes was the seventh-leading cause of death in the United States, and the costs of treating it represent a major portion of health-care spending. Others note that there is no proven link between high-fructose corn syrup and obesity and assert that other dietary factors, combined with a sedentary lifestyle, are to blame for the rise in obesity.

The Link Between Profit and Diet

Americans who live in poverty are more likely to have poor diets high in trans fats and high-fructose corn syrup products and are more likely to suffer the medical risks associated with an unhealthy diet. A large number of Americans who live below the poverty line do not have health insurance and rely on government-entitlement programs like Medicaid. Some economic analysts assert that both Medicaid—designed to provide health coverage to low-income Americans—and Medicare, health coverage for senior citizens, are already strained because of the rising costs of health care and will be unable to meet the needs of what may be a deeply unhealthy population later in the twenty-first century.

A diet rich in fresh fruits, vegetables, and whole grains is markedly more expensive than a diet of processed convenience foods. The U.S. government has taken some steps to educate consumers about how to have a healthy diet. In 1956, the U.S. Department of Agriculture released the publication Essentials of an Adequate Diet, which listed four basic food groups and their daily recommendations. These were milk, fruits and vegetables, breads and cereals, and meat. These guidelines were rescinded in 1992 and replaced by the Food Pyramid, which recommends six food groups: grains, vegetables, fruits, oils, milk and other dairy products, and meat and beans.

Rising Health-Care Costs

In 2007, one study found that 31 percent of American men and 33 percent of American women have a body mass index of at least 30, which means they are considered obese. There is debate over whether the government should classify obesity as an actual illness, which would permit Medicare or Medicaid funds to be used to treat it. Opponents counter that many chronic diseases are the result of genetics or luck and that it is unfair to force these conditions to compete with obesity for dwindling funds. Obesity, they contend, is a result of lifestyle choices.
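For context, body mass index is weight in kilograms divided by the square of height in meters; with pounds and inches, a conversion factor of 703 is applied. A minimal sketch of the obesity threshold described above (the function names and sample figures are illustrative, not taken from the study):

```python
def bmi(weight_lb: float, height_in: float) -> float:
    """Body mass index: weight / height^2, using the 703
    conversion factor for pounds and inches."""
    return 703 * weight_lb / height_in ** 2

def is_obese(b: float) -> bool:
    # The study cited above uses BMI >= 30 as the obesity cutoff.
    return b >= 30

# Illustrative figures: a 5'10" (70 in) adult weighing 272 lb.
print(round(bmi(272, 70), 1))   # 39.0
print(is_obese(bmi(272, 70)))   # True
```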

In the 1970s, some nutritionists urged a return to a Paleolithic diet, the types of food the first human populations ate. They asserted that human genetics became ideally adapted to this diet and that humans’ genetic structure had not undergone any significant changes in the roughly 120,000 years since the diet was initially adopted. The modern version, like its ancestral predecessor, consists largely of meat, fresh fruits and vegetables, and nuts, and shuns dairy products, salt, and refined sugar. Its advocates offer evidence from studies conducted among contemporary communities whose diets are similar and who have a much lower rate of the so-called diseases of affluence, such as coronary heart disease, type 2 diabetes, certain types of vascular diseases and cancers, and obesity. Critics counter that it may not be possible to replicate the exact Paleolithic diet of our ancestors. Furthermore, its detractors assert, the earth’s resources could not sustain a diet rich in meat and fish products if it were adopted by a majority of inhabitants.

Source Citation:

"Nutrition." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

Fast-Food Franchises Are Unfairly Targeted for Serving Unhealthy Food

Greg Beato is a writer and a contributing editor at Reason magazine. He lives in San Francisco, California.

Fast-food giants are frequently criticized for peddling fattening, artery-clogging burgers, fries, and other menu items, but long before McDonald's was franchised, lunch wagons, diners, and drive-ins offered fatty, sugary food. In fact, these independently owned restaurants dare customers to eat unhealthy meals, such as steak dinners with the caloric content of ten Big Macs and sky-high platters of burgers and fries. However, instead of being lambasted by nutritionists and anti-fast-food crusaders, these establishments are lauded for bringing communities together and serving up authentic, greasy American fare. The food at McDonald's, Wendy's, and the like pales in comparison—for which we should be thankful.

Imagine McDonald's picked up your bill any time you managed to eat 10 Big Macs in an hour or less. What if Wendy's replaced its wimpy Baconator with an unstoppable meat-based assassin that could truly make your aorta explode—say, 20 strips of bacon instead of six, enough cheese slices to roof a house, and instead of two measly half-pound patties that look as emaciated as the Olsen twins, five pounds of the finest ground beef, with five pounds of fries on the side?

[Super Size Me director and star] Morgan Spurlock's liver would seek immediate long-term asylum at the nearest vegan co-op.

Alas, this spectacle will never come to pass. McDonald's, Wendy's, and the rest of their fast-food brethren are far too cowed by their critics to commit such crimes against gastronomy. But you can get a free dinner with as many calories as 10 Big Macs at the Big Texan Steak Ranch in Amarillo, Texas, if you can eat a 72-ounce sirloin steak, a baked potato, a salad, a dinner roll, and a shrimp cocktail in 60 minutes or less. And if you're craving 10 pounds of junk food on a single plate, just go to Eagle's Deli in Boston, Massachusetts, where the 10-story Challenge Burger rises so high you practically need a ladder to eat it.

A Savory Scapegoat

Fast food makes such a savory scapegoat for our perpetual girth control failures that it's easy to forget we eat less than 20 percent of our meals at the Golden Arches and its ilk. It's also easy to forget that before America fell in love with cheap, convenient, standardized junk food, it loved cheap, convenient, independently deep-fried junk food.

During the first decades of the 20th century, lunch wagons, the predecessors to diners, were so popular that cities often passed regulations limiting their hours of operation. In 1952, three years before Ray Kroc franchised his first McDonald's, one out of four American adults was considered overweight; a New York Times editorial declared that obesity was "our nation's primary health problem." The idea that rootless corporate invaders derailed our healthy native diet may be chicken soup for the tubby trial lawyer's soul, but in reality overeating fatty, salty, sugar-laden food is as American as apple pie.

Nowhere is this truth dramatized more deliciously than in basic-cable fare like the Food Network's Diners, Drive-Ins, and Dives and the Travel Channel's World's Best Places to Pig Out. Watch these shows often enough, and your Trinitron may develop Type 2 diabetes. Big Macs and BK Stackers wouldn't even pass as hors d'oeuvres at these heart attack factories.

Community Centers

Yet unlike fast food chains, which are generally characterized as sterile hegemons that force-feed us like foie gras geese, these independently owned and operated greasy spoons are touted as the very (sclerosed) heart of whatever town they're situated in, the key to the region's unique flavor, and, ultimately, the essence of that great, multicultural melting pot that puts every homogenizing fast-food fryolator to shame: America!

Instead of atomizing families and communities, dives and diners bring them together. Instead of tempting us with empty calories at cheap prices, they offer comfort food and honest value. Instead of destroying our health, they serve us greasy authenticity on platters the size of manhole covers.

As the patrons of these temples to cholesterol dig into sandwiches so big they could plug the Lincoln Tunnel, they always say the same thing. They've been coming to these places for years. They started out as kids accompanying their parents, and now they bring their kids with them.

Relative Restraint

While such scenes play out, you can't help but wonder: Doesn't that obesity lawsuit trailblazer John Banzhaf have cable? Shouldn't he be ejaculating torts out of every orifice upon witnessing such candid testimonies to the addictive power of old-timey diner fare? And more important: Shouldn't we thank our fast food chains for driving so many of these places out of business and thus limiting our exposure to chili burgers buried beneath landfills of onion rings?

Were it not for the relative restraint of Big Macs and Quarter Pounders, the jiggling behemoths who bruise the scales on The Biggest Loser each week might instead be our best candidates for America's Next Top Model.

When Super Size Me appeared in theaters and fast food replaced [terrorist leader] Osama bin Laden as the greatest threat to the American way of life, the industry sought refuge in fruit and yogurt cups and the bland, sensible countenance of Jared the Subway Guy. Today chains are still trying to sell the idea that they offer healthy choices to their customers; see, for example, Burger King's plans to sell apple sticks dolled up in French fry drag. But they're starting to reclaim their boldness too, provoking the wrath of would-be reformers once again.

[In summer 2007], when McDonald's started selling supersized sodas under a wonderfully evocative pseudonym, the Hugo, it earned a prompt tsk-tsk-ing from The New York Times. When Hardee's unveiled its latest affront to sensible eating, a 920-calorie breakfast burrito, the senior nutritionist for the Center for Science in the Public Interest derided it as "another lousy invention by a fast-food company." When San Francisco Chronicle columnist Mark Morford saw a TV commercial for Wendy's Baconator, he fulminated like a calorically correct Jerry Falwell: "Have the noxious fast-food titans not yet been forced to stop concocting vile products like this, or at least to dial down the garish marketing of their most ultra-toxic products, given how the vast majority of Americans have now learned (haven't they?) at least a tiny modicum about human health?"

Forcing Accountability

Culinary reformers around the country have been trying to turn such microwaved rhetoric into reality. In New York City, health officials have been attempting to introduce a regulation that will require any restaurant that voluntarily publicizes nutritional information about its fare to post calorie counts on its menus and menu boards. Because most single-unit operations don't provide such information in any form, this requirement will apply mainly to fast-food outlets and other chains. When a federal judge ruled against the city's original ordinance, city health officials went back for seconds, revising the proposal to comply with his ruling. If this revised proposal goes into effect, any chain that operates 15 or more restaurants under the same name nationally will have to post nutritional information on the menus and menu boards of the outlets it operates in New York City. [In April 2008 this law was passed.]

In Los Angeles, City Councilmember Jan Perry has been trying to get her colleagues to support an ordinance that would impose a moratorium on fast-food chains in South L.A., where 28 percent of the 700,000 residents live in poverty and 45 percent of the 900 or so restaurants serve fast food. "The people don't want them, but when they don't have any other options, they may gravitate to what's there," Perry told the Los Angeles Times, gravitating toward juicy, flame-broiled delusion. Apparently her constituents are choking down Big Macs only because they've already eaten all the neighborhood cats and figure that lunch at McDonald's might be slightly less painful than starving to death. And how exactly will banning fast-food outlets encourage Wolfgang Puck and Whole Foods Markets to set up shop in a part of town they've previously avoided? Is the threat of going head to head with Chicken McNuggets that much of a disincentive?

Suppose reformers like Perry get their wish and fast-food chains are regulated out of existence. Would the diners and dives we celebrate on basic cable start serving five-pound veggie burgers with five pounds of kale on the side? Only diet hucksters and true chowhounds would benefit from a world where the local McDonald's gave way to places serving 72-ounce steaks and burgers that reach toward the heavens like Manhattan skyscrapers. The rest of us would be left longing for that bygone era when, on every block, you could pick up something relatively light and healthy, like a Double Western Bacon Cheeseburger from Carl's Jr.

Source Citation:

Beato, Greg. "Fast-Food Franchises Are Unfairly Targeted for Serving Unhealthy Food." Fast Food. Ed. Tracy Brown Collins. San Diego: Greenhaven Press, 2005. At Issue. Rpt. from "Where's the Beef? Thank McDonald's for Keeping You Thin." Reason (Jan. 2008): 15-16. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

Fast Food Is Linked to Obesity and Other Serious Health Problems

Seth Stern is a staff writer at The Christian Science Monitor.

Despite the fact that nutritional information about fast food is readily available, many fast food chains are taking the blame for the rise in obesity and other health problems across the nation. Some lawyers are considering the possibility that fast food chains could be held accountable for the health consequences of eating their food. The chains could also be responsible for the effects of their potentially misleading advertising, especially to children. These advertising messages can lead people to overeat, which is one of the reasons behind the obesity problem.

For decades, Caesar Barber ate hamburgers four or five times a week at his favorite fast-food restaurants, visits that didn't end even after his first heart attack.

But his appetite for fast food didn't stop Mr. Barber, who is 5 foot 10 and weighs 272 pounds, from suing four chains last month, claiming they contributed to his health problems by serving fatty foods.

Legal Matters

Even the most charitable legal experts give Barber little chance of succeeding. But his suit is just the latest sign that the Big Mac may eventually rival Big Tobacco as public health enemy No. 1 in the nation's courts.

Lawyers who successfully challenged cigarette manufacturers have joined with nutritionists to explore whether the producers of all those supersize fries and triple cheeseburgers can be held liable for America's bulging waistlines.

Prompted by reports that the nation's obesity is getting worse, lawyers as well as nutrition, marketing, and industry economics experts will come together at a conference at Northeastern University in Boston to discuss possible legal strategies.

They're looking at whether food industry marketing—particularly messages aimed at kids—may be misleading or downright deceptive under consumer protection laws, says Richard Daynard, a Northeastern law professor and chair of its Tobacco Products Liability Project. They'll also consider the more complex question of whether the producers of fatty foods—and even the public schools that sell them—should be held responsible for the health consequences of eating them.

A Toxic Food Environment

Medical professionals argue that too much unhealthy food is sold by using tempting messages that encourage overeating. "People are exposed to a toxic food environment," says Kelly Brownell of Yale's Center for Eating and Weight Disorders. "It really is an emergency."

The figures are certainly startling. Obesity can be linked to some 300,000 deaths and $117 billion in health care costs a year, a report by the Surgeon General found [in 2001].

Such numbers prompted President [George W.] Bush to launch his own war on fat this summer [in 2002], calling on all Americans to get 30 minutes of physical activity each day.

But fast-food industry representatives are quick to say, "Don't just blame us." Steven Anderson, president of the National Restaurant Association, a trade group, says attorneys who attempt to compare the health risks of tobacco with those of fast food are following a "tortuous and twisted" logic.

"All of these foods will fit into [the] diet of most Americans with proper moderation and balance," he says.

To be sure, there are big differences between tackling food and tobacco. Any amount of tobacco consumption is dangerous, but everyone has to eat, Mr. Daynard says. And few if any foods are inherently toxic.

What's more, while there were only four or five tobacco manufacturers, there are thousands of food manufacturers and restaurants serving some 320,000 different products, says Marion Nestle, a professor of nutrition and food studies at New York University.

People usually smoke one brand of cigarette. They eat in many restaurants and eat the same foods at home. That makes it almost impossible to prove that a person's obesity or health problems are caused by a particular food or restaurant.

As a result, suits such as Barber's that attempt to pin the blame for weight-related problems on specific plaintiffs will run into difficulty in court, says Steven Sugarman, a law professor at the University of California, Berkeley. Suits by state attorneys general to try to recover the cost of treating obese patients, a tactic that's worked with tobacco, also could prove tough.

Deceptive Advertising

That's why lawyers are focusing on more modest suits aimed at advertising and marketing techniques, says John Banzhaf III, a George Washington University law professor who helped initiate the tobacco litigation three decades ago.

For example, students in one of Professor Banzhaf's courses helped sue McDonald's [in 2000] for advertising its french fries as vegetarian even though the company continued to use beef fat in their preparation. The company agreed to donate $10 million to Hindu and vegetarian groups as part of a settlement.

But only in the past few months has Banzhaf considered similar suits as part of a concerted strategy to sue the food industry for false or deceptive advertising as a way of fighting Americans' obesity.

State consumer-protection laws require sellers to disclose clearly all important facts about their products. Just as a sweater manufacturer should disclose that a garment may shrink in the wash, Banzhaf says fast-food companies might have an obligation to disclose that a meal has more fat than the recommended daily allowance.

Such class-action suits on behalf of people deceived by advertisements could recover the amounts customers spent on the food items but not money spent on related health costs.

As with tobacco, marketing aimed at kids will be a particular focus of Banzhaf and his coalition of lawyers and nutritionists.

"Everybody is looking at children as the vulnerable point in this," says Dr. Nestle. She says she's received "loads" of emails and calls from plaintiff lawyers interested in advice since publishing "Food Politics," a book critical of the food industry's marketing and its dominant role in shaping nutritional guidelines.


At a meeting in Boston [August 2002], Banzhaf said attorneys talked about suing Massachusetts school districts that sell fast food in their cafeterias or stock soda in their vending machines. These suits would be based on the legal notion that schools have a higher "duty of care" than restaurants.

Fast-food restaurant chains, for their part, say they're not hiding what's in their food. At Burger King, for example, nutritional information is supposed to be posted in every dining room. And on its website, Wendy's lists 15 categories of information about its products, including total fat and calories for everything from the whole sandwich down to the pickles.

Nutritionists say that the information doesn't put the calories in a context people can understand.

"While they know a quarter pounder is not a health food, a lot of people would be surprised to learn it uses up a whole day of calories for women," says Margo Wootan of the Center for Science in the Public Interest in Washington.

Banzhaf acknowledges that litigation alone won't get Americans in better shape. He'd like nutritional information on the fast-food menu boards and wrappers or even health warnings similar to the ones now required on cigarettes.

Still, Banzhaf says litigation will put producers of fatty foods on notice. "When we first proposed smoker suits, people laughed too."

Source Citation:

Stern, Seth. "Fast Food Is Linked to Obesity and Other Serious Health Problems." Fast Food. Ed. Tracy Brown Collins. San Diego: Greenhaven Press, 2005. At Issue. Rpt. from "Fast-Food Restaurants Face Legal Grilling." The Christian Science Monitor. 2002. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

Eating Disorders

Eating disorders are unhealthy behaviors related to food and body weight. They may involve a severe reduction of food intake or excessive overeating, as well as extreme concern about body shape and size. In 2007 McLean Hospital in Massachusetts published the first national survey of individuals with eating disorders. It found that, overall, 4.5 percent of adults, or over 9 million people, have struggled with eating disorders such as anorexia, bulimia, and binge eating at some point in their lives. According to a study by the National Association of Anorexia Nervosa and Associated Disorders, the majority of these illnesses begin by the age of twenty. Obesity, the condition of having an abnormally high proportion of body fat, is an equally serious problem in the United States. According to the American Obesity Association (AOA), this growing epidemic threatens the health of approximately 60 million Americans—nearly one-third of the adult population.

Contributing Factors

Eating disorders may be caused by psychological factors, such as depression; interpersonal factors, such as a troubled family life; and social factors, such as a cultural emphasis on physical appearance. Researchers also suggest a possible link between eating disorders and genetic causes, but these issues are still under investigation. Although obesity may be linked to heredity, it is typically caused by overeating and inadequate physical activity.

Popular culture plays a large role in the harmful eating habits of children and adults. People tend to compare themselves to thin actors and models on television and in other forms of media. Magazines for female readers contain numerous articles about weight loss, as well as advertisements promoting special diet foods and pills. Peers and even family members may send the message that fat is ugly and thin is beautiful.

These factors can lead to an obsession with body weight and thinness, and in some cases, to self-starvation. About 90 percent of Americans with anorexia nervosa and bulimia nervosa are girls and young women, but a growing number of boys are also affected by these disorders. Eating disorders impact all types of people, from all ethnic groups and socioeconomic backgrounds.

Anorexia Nervosa

People who have anorexia nervosa suffer from an intense, persistent fear of becoming fat. As a result, they either refuse to eat or are constantly dieting. They weigh themselves often and may exercise compulsively to burn off the small number of calories they do consume. Some force themselves to vomit or use laxatives to purge their bodies of the food they have eaten. People with anorexia tend to be shy, perfectionists, and high achievers in school and athletics.

Bulimia Nervosa

People who have bulimia nervosa are also extremely preoccupied with body weight and food. However, instead of starving themselves, they engage in episodes of binge eating, in which they consume large amounts of food in a short period of time, usually in secret. They feel unable to control the amount that they eat during a binge. Afterward, they purge their bodies of the calories they have consumed through vomiting, laxatives, fasting, or excessive exercise. Teens and young adults with bulimia usually have an average weight for their age and height even though they may consume as many as twenty thousand calories in one session of binge eating. In contrast, most women need to eat between sixteen hundred and twenty-eight hundred calories per day, depending on their level of physical activity. Bulimics tend to be outgoing and impulsive, which sometimes leads to problems with drugs, alcohol, crime, and sexual activity.

Binge Eating Disorder

Like bulimics, people with binge eating disorder (also known as compulsive overeating) have recurrent episodes of bingeing. They feel out of control while eating and then feel guilty and disgusted with themselves after they stop. Nevertheless, they do not purge their bodies through vomiting, laxatives, or other means. As a result, many are overweight for their age and height. The shame they feel at having overeaten creates a cycle of more binge eating.

Effects and Treatment

Eating disorders are physically and emotionally damaging. Among the physical consequences of anorexia and bulimia are malnutrition, dehydration, digestive problems, and tooth and gum decay. People with anorexia may also risk heart disease and hormonal changes that can cause bone loss, retarded growth, and the absence of menstruation. Anorexia is a major cause of death among females ages fifteen to twenty-four. Individuals who battle eating disorders may experience depression, mood swings, low self-esteem, and problems in their relationships.

Eating disorders can be treated, and early intervention is a key to success. Comprehensive treatment includes medical care, psychological and nutritional counseling, support groups, the help of family and friends, and sometimes medication. About 20 percent of patients require hospitalization to overcome a disorder.

Obesity

According to the AOA, approximately 127 million adults in the United States are overweight, nearly one-third are obese, and 9 million are severely obese. Children and adolescents are becoming part of this growing trend. During the 1990s the groups with the greatest increases in obesity were people ages eighteen to twenty-nine, residents of the South, and Hispanics. However, the problem worsened for both males and females in every state and crossed racial and ethnic lines, age groups, and educational levels. After tobacco-related illnesses, obesity and its complications, which include heart disease, diabetes, and some cancers, are the second-highest cause of premature death in the United States.

A variety of factors contributes to the problem. Travel by car, rather than walking or riding a bicycle, has become the primary mode of transportation. The widespread use of computers for work and entertainment has contributed to a severe lack of physical activity. An overabundance of food, including large amounts of fast-food and snacks, adds to the weight gain.

According to the CDC, Americans can conquer the epidemic of obesity by taking some practical steps. Schools must increase the amount of physical education they require of students and offer healthier foods in the cafeteria. Employers must provide a way for workers to be physically active, and cities should offer more sidewalks and bicycle paths. In addition, parents need to set limits on their children's television watching and computer use and encourage outdoor activities.

Source Citation:

"Eating Disorders." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

The Fashion Industry Should Not Be Held Responsible for Eating Disorders

"The notion that the fashion industry should endure government meddling because its products or marketing techniques may [promote] an unhealthy desire for thinness seems dubious at best."

In the following viewpoint, Michelle Cottle argues that the fashion industry has no obligation to change its practices, including using ultra-skinny models. It is not in the business of promoting healthy body images, Cottle suggests, just as fast-food restaurants are not in the business of selling healthy food. Michelle Cottle is a senior editor at the New Republic.

As you read, consider the following questions:

1. To what other industries selling risky products, but nevertheless not subject to regulation, does Cottle compare the fashion industry?

2. What two criteria, in Cottle's view, would constitute grounds for intervention in the fashion industry?

3. What does the author say will eventually happen to resolve the issue of ultra-thin models on the catwalk?

Call it Revenge of the Carb Lovers. While much of the Middle East continues to devour itself, the hot controversy to come out of the West [in September 2006] is Madrid's decision to ban super-skinny models from its fashion week, the Pasarela Cibeles, which begins on September 18th. Responding to complaints from women's groups and health associations about the negative impact of emaciated models on the body image of young women, the Madrid regional government, which sponsors the Pasarela Cibeles, demanded that the show's organizers go with fuller-figured gals, asserting that the industry has a responsibility to portray healthy body images. As Concha Guerra, the deputy finance minister for the regional administration, eloquently put it, "Fashion is a mirror and many teenagers imitate what they see on the catwalk."

Activists' concerns are easy to understand. With ultra-thinness all the rage on the catwalk, your average model is about 5′9″ and 110 pounds [7.8 stone]. But henceforth, following the body mass index standard set by Madrid, a 5′9″ model must weigh at least 123 pounds [8.8 stone]. (To ensure there's no cheating, physicians will be on site to examine anyone looking suspiciously svelte.) Intrigued by the move, other venues are considering similar restrictions—notably the city of Milan, whose annual show is considerably more prestigious than Madrid's.
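Madrid's weight cutoff follows directly from the body mass index formula: weight in kilograms divided by the square of height in meters. A quick sketch of the arithmetic behind the figures above (the unit-conversion constants are the only assumptions here):

```python
def bmi(pounds, feet, inches):
    """Body mass index: weight (kg) divided by height (m) squared."""
    kg = pounds * 0.45359237                 # pounds to kilograms
    meters = (feet * 12 + inches) * 0.0254   # feet and inches to meters
    return kg / meters ** 2

# A 5'9" model at 110 lb falls well below Madrid's BMI-18 minimum;
# at the required 123 lb she just clears it.
print(round(bmi(110, 5, 9), 1))  # about 16.2
print(round(bmi(123, 5, 9), 1))  # about 18.2
```

This shows why the article's two weights bracket the cutoff: roughly 13 pounds separate an "average" runway model at that height from Madrid's minimum.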

Industry Opposition

Modeling agencies, meanwhile, are decidedly unamused. Cathy Gould of New York's Elite agency publicly denounced the ban as an attempt to scapegoat the fashion world for eating disorders—not to mention as gross discrimination against both "the freedom of the designer" and "gazellelike" models. (Yeah, I laughed, too.) Pro-ban activists acknowledge that many designers and models will attempt to flout the new rules. But in that case, declared Carmen Gonzalez of Spain's Association in Defense of Attention for Anorexia and Bulimia, "the next step is to seek legislation, just like with tobacco."

Whoa, there, Carmen. I dislike catwalk freaks—pardon me, I mean human-gazelle hybrids—as much as the next normal woman. But surely most governments have better things to do than pass laws about what constitutes an acceptable butt size. Yes, without the coiffed tresses and acres of eyeliner, many models could be mistaken for those Third World kids that ex-celebs like Sally Struthers are always collecting money to feed. But that, ultimately, is their business. These women are paid to be models—not role models. The fashion world, no matter how unhealthy, is not Big Tobacco. (Though, come to think of it, Donatella Versace does bear a disturbing resemblance to Joe Camel.) And, with all due respect to the Madrid regional government, it is not the job of the industry to promote a healthy body image.

Indeed, there seems to be increasing confusion about what it is the "responsibility" of private industry to do. It is, for example, not the business of McDonald's to promote heart healthiness or slim waistlines. The company's central mission is, in fact, to sell enough fast, cheap, convenient eats to keep its stockholders rolling in dough. If this means loading up the food with salt and grease—because, as a chef friend once put it, "fat is flavor"—then that's what they're gonna do. Likewise, the fashion industry's goal has never been to make women feel good about themselves. (Stoking insecurity about consumers' stylishness—or lack thereof—is what the biz is all about.) Rather, the fashion industry's raison d'être is to sell glamour—to dazzle women with fantastical standards of beauty that, whether we're talking about a malnourished model or a $10,000 pair of gauchos, are, by design, far beyond the reach of regular people.

This is not to suggest that companies should be able to do whatever they like in the name of maximizing profits. False advertising, for instance, is a no-no. But long ago we decided that manufacturing and marketing products that could pose a significant risk to consumers' personal health and well-being—guns, booze, motorcycles, Ann Coulter—was okay so long as the dangers were fairly obvious (which is one reason Big Tobacco's secretly manipulating the nicotine levels in cigarettes to make them more addictive—not to mention lying about their health risks—was such bad form).

The notion that the fashion industry should endure government meddling because its products or marketing techniques may pose an indirect risk to consumers by promoting an unhealthy desire for thinness seems dubious at best. More often than not, in the recognized trade-off between safety and freedom of choice, consumers tend to go with Option B.

Of course, whenever the issue of personal choice comes up, advocates of regulation typically point to the damage being done to impressionable young people. Be it consuming alcohol, overeating, smoking, watching violent movies, having anything other than straight, married, strictly procreation-aimed sex—whenever something is happening that certain people don't like, the first response is to decry the damage being done to our kids and start exploring legislative/regulatory remedies.

The Fashion Industry Should Be Left Alone

But here, again, the fashion industry's admittedly troubling affinity for women built like little boys doesn't seem to clear the hurdle for intervention. It was one thing for R.J. Reynolds to specifically target teens with its cigarette advertising.

And, while I disagree with the attempts to make the war on fat the next war on smoking (for more on why, see here and here), you could at least make a similar argument that junk-food peddlers use kid-targeted advertising to sell youngsters everything from cupcakes to soda to french fries. But there's a difference between industries that specifically go after young consumers and those that happen to catch their eye—like, say, the fashion industry or Hollywood.

So let's give all those chain-smoking, Evian-guzzling, "gazelle-like" human-coatracks a break. In another couple of years, their metabolisms will slow down or they'll accidentally ingest some real food, and they'll be unceremoniously tossed off the catwalk like a bad pantsuit. Until then, in the name of personal choice, they should be allowed to strut their stuff—no matter how hideously skinny they are.

Source Citation:

Cottle, Michelle. "The Fashion Industry Should Not Be Held Responsible for Eating Disorders." Eating Disorders. Ed. Viqi Wagner. Detroit: Greenhaven Press, 2007. Opposing Viewpoints. Rpt. from "Model Behavior." New Republic (15 Sept. 2006). Gale Opposing Viewpoints In Context. Web. 12 Apr. 2012.

The Fashion Industry Promotes Eating Disorders

"The fashion industry, from designer to magazine editors, should not be making icons out of anorexically thin models."

In 2006 the organizers of the annual Cibeles Fashion Show in Madrid took the unprecedented step of banning significantly underweight models from participation, sparking international debate over whether the fashion industry's use of emaciated models encouraged anorexia and other eating disorders in the general population. Professor Janet Treasure and forty of her colleagues at the Eating Disorders Service and Research Unit (EDRU) at King's College, a well-known British eating disorders treatment clinic, applauded the Spanish authorities' decision. The following viewpoint is an open letter from the EDRU group to the international fashion industry urging similar actions to discourage the glamorization of anorexic imagery in the media and modern culture.

As you read, consider the following questions:

1. According to the authors, what critical effects do disrupted eating patterns have on physical development?

2. What percentage of models fell below the minimum weight for participation in the 2006 Madrid fashion show, according to Treasure and her colleagues?

3. According to the authors, what is the normal body mass index (BMI) range, the BMI cutoff for the Madrid fashion show, and the BMI cutoff for a clinical diagnosis of anorexia?

TO THE FASHION INDUSTRY AS REPRESENTED BY THE BRITISH FASHION COUNCIL

The eating disorders anorexia nervosa and bulimia nervosa are common, found in nearly 10% of young women.

There is a large range in clinical severity. Some cases are mild and transient. However, in the clinic we see the dark side, whereby the quality of life of the individual and her family shrivels away and the shadow of death looms. These disorders carry a higher risk of physical and psychosocial morbidity than any other psychological condition. The costs for the individual, the family and society are huge. Therefore research has focused on trying to prevent these disorders and to identify the factors that cause or maintain them.

Anorexia nervosa has a long history, but bulimia nervosa was rare in women born before the 1950s. The incidence of the binge eating disorders, like that of obesity, has increased rapidly in the last half of the twentieth century. Most experts agree that cultural factors in terms of eating behaviours and values about weight and shape are important causal and maintaining elements in the bingeing disorders. The internalisation of the thin ideal is a key risk factor. Dieting to attain this idealized form can trigger an erratic pattern of eating, especially if it is used in combination with extreme behaviours that compensate for overeating.

Studies in animals suggest that persistent changes in the brain and behaviour, like those seen in the addictions, result if the pattern of eating is disrupted in critical developmental periods. The paradox is that a desire to be thin can set in train a pattern of disturbed eating which increases the risk for obesity. So how can this society protect young people from these consequences?

Interesting work in colleges in the USA, reported this year, has shown that an educational web-based intervention promoting a healthy relationship with food and body image can prevent the onset of an eating disorder in those at highest risk. Such use of the web can act as an antidote to the pro-ana (pro-anorexia) web sites, which foster toxic attitudes and unrealistic body forms.

Public health interventions may also be warranted. Spain has taken the first step. The Health Authorities of the Region of Madrid and the Annual Cibeles Fashion Show (Pasarela Cibeles) banned extremely thin models from participating in this year's event. Models with a Body Mass Index (BMI) below 18 kg/m² (30% of the participants) were offered medical help rather than a position on the catwalk. To put this in context, the healthy BMI range for a woman is 19 to 25 kg/m². To be clearly diagnosed with anorexia nervosa a BMI of less than 17.5 kg/m² is needed, although in most treatment centres people with a higher BMI have levels of clinical severity that warrant treatment.
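The three cutoffs the letter cites (clinical anorexia below 17.5, the Madrid show's minimum of 18, and the healthy range of 19 to 25 kg/m²) can be lined up in a small illustrative sketch; the category labels here are ours, not the authors':

```python
def classify_bmi(bmi):
    """Place a BMI value (kg/m^2) against the cutoffs cited in the letter:
    anorexia diagnosis < 17.5, Madrid show minimum 18, healthy range 19-25."""
    if bmi < 17.5:
        return "below the clinical anorexia nervosa cutoff"
    if bmi < 18.0:
        return "below the Madrid show's minimum"
    if bmi < 19.0:
        return "allowed in Madrid, but below the healthy range"
    if bmi <= 25.0:
        return "within the healthy range"
    return "above the healthy range"
```

Note the narrow band between 18 and 19: a model can clear Madrid's screening while still falling short of what the authors describe as a healthy BMI.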

The issue is not whether we should place the blame of unhealthy eating behaviours on the Fashion Industry or on anyone else. The issue is that Spanish Health Authorities have decided to intervene in a health issue, which is directly affecting the well-being of models as well as affecting the attitudes and behaviours of many young girls and women who may strive to imitate and attain these unhealthy pursuits.

Adopting what Madrid has done is a good first step but the fashion industry, from designer to magazine editors, should not be making icons out of anorexically thin models. Magazines should stop printing these pictures and designers should stop designing for these models. People may say that clothes look better on skinny models but do not forget there was a time when smoking looked good too.

Janet Treasure and EDRU Team

Source Citation:

Janet Treasure and EDRU Team of King's College. "The Fashion Industry Promotes Eating Disorders." Eating Disorders. Ed. Viqi Wagner. Detroit: Greenhaven Press, 2007. Opposing Viewpoints. Rpt. from "To the Fashion Industry as Represented by the British Fashion Council." 2006. Gale Opposing Viewpoints In Context. Web. 16 Apr. 2012.


Medical Marijuana Should Be Legalized

Marijuana has many medical benefits, including relief from nausea, reduction of muscle spasms, and relief from chronic pain. However, the 1937 Marijuana Tax Act federally prohibited the smoking of marijuana for any purpose. In addition, the Controlled Substances Act of 1970 placed all drugs into five categories depending on their utility as medicine and perceived harm; marijuana was placed in Schedule I, defining it as having a high potential for abuse and no medicinal qualities. Nevertheless, illicit marijuana use continued, with many people realizing the therapeutic qualities of the drug. Several states legalized medical marijuana, but since federal law still prohibits its use, users in those states are subject to arrest. This situation is untenable. The federal government should legalize marijuana so that patients sick with cancer, AIDS, and other illnesses can reap the enormous benefits of smoking the drug.

For thousands of years, marijuana has been used to treat a wide variety of ailments. Until 1937, marijuana (Cannabis sativa L.) was legal in the United States for all purposes. Presently, federal law allows only seven Americans to use marijuana as a medicine.

On March 17, 1999, the National Academy of Sciences' Institute of Medicine (IOM) concluded that "there are some limited circumstances in which we recommend smoking marijuana for medical uses." The IOM report, the result of two years of research that was funded by the White House drug policy office, analyzed all existing data on marijuana's therapeutic uses. Please see http://www.mpp.org/science.html.

Medicinal Value

Marijuana is one of the safest therapeutically active substances known. No one has ever died from an overdose, and it has a wide variety of therapeutic applications, including:

 Relief from nausea and appetite loss;

 Reduction of intraocular (within the eye) pressure;

 Reduction of muscle spasms; and

 Relief from chronic pain.

Marijuana is frequently beneficial in the treatment of the following conditions:

AIDS. Marijuana can reduce the nausea, vomiting, and loss of appetite caused by the ailment itself and by various AIDS medications.

Glaucoma. Marijuana can reduce intraocular pressure, alleviating the pain and slowing—and sometimes stopping— damage to the eyes. (Glaucoma is the leading cause of blindness in the United States. It damages vision by increasing eye pressure over time.)

Cancer. Marijuana can stimulate the appetite and alleviate nausea and vomiting, which are side effects of chemotherapy treatment.

Multiple Sclerosis. Marijuana can limit the muscle pain and spasticity caused by the disease, as well as relieving tremor and unsteadiness of gait. (Multiple sclerosis is the leading cause of neurological disability among young and middle-aged adults in the United States.)

Epilepsy. Marijuana can prevent epileptic seizures in some patients.

Chronic Pain. Marijuana can alleviate the chronic, often debilitating pain caused by myriad disorders and injuries.

Each of these applications has been deemed legitimate by at least one court, legislature, and/or government agency in the United States.

Many patients also report that marijuana is useful for treating arthritis, migraine, menstrual cramps, alcohol and opiate addiction, and depression and other debilitating mood disorders.

Marijuana could be helpful for millions of patients in the United States. Nevertheless, other than for the seven people with special permission from the federal government, medical marijuana remains illegal under federal law!

People currently suffering from any of the conditions mentioned above, for whom the legal medical options have proven unsafe or ineffective, have two options:

1. Continue to suffer without effective treatment; or

2. Illegally obtain marijuana—and risk suffering consequences directly related to its illegality, such as an insufficient supply due to the prohibition-inflated price or scarcity; impure, contaminated, or chemically adulterated marijuana; and arrests, fines, court costs, property forfeiture, incarceration, probation, and criminal records.

Background

Prior to 1937, at least 27 medicines containing marijuana were legally available in the United States. Many were made by well-known pharmaceutical firms that still exist today, such as Squibb (now Bristol-Myers Squibb) and Eli Lilly. The Marijuana Tax Act of 1937 federally prohibited marijuana. Dr. William C. Woodward of the American Medical Association opposed the Act, testifying that prohibition would ultimately prevent the medicinal uses of marijuana.

The Controlled Substances Act of 1970 placed all illicit and prescription drugs into five "schedules" (categories). Marijuana was placed in Schedule I, defining it as having a high potential for abuse, no currently accepted medical use in treatment in the United States, and a lack of accepted safety for use under medical supervision. This definition simply does not apply to marijuana. Of course, at the time of the Controlled Substances Act, marijuana had been prohibited for more than three decades. Its medicinal uses forgotten, marijuana was considered a dangerous and addictive narcotic.

A substantial increase in the number of recreational users in the 1970s contributed to the rediscovery of marijuana's medicinal uses:

 Many scientists studied the health effects of marijuana and inadvertently discovered marijuana's medicinal uses in the process.

 Many who used marijuana recreationally also suffered from diseases for which marijuana is beneficial. By accident, they discovered its therapeutic value.

As the word spread, more and more patients started self-medicating with marijuana. However, marijuana's Schedule I status bars doctors from prescribing it and severely curtails research.

The Struggle in Court

In 1972, a petition was submitted to the Bureau of Narcotics and Dangerous Drugs—now the Drug Enforcement Administration (DEA)—to reschedule marijuana to make it available by prescription.

After 16 years of court battles, the DEA's chief administrative law judge, Francis L. Young, ruled:

"Marijuana, in its natural form, is one of the safest therapeutically active substances known....

"... [T]he provisions of the [Controlled Substances] Act permit and require the transfer of marijuana from Schedule I to Schedule II.

"It would be unreasonable, arbitrary and capricious for DEA to continue to stand between those sufferers and the benefits of this substance...."

(September 6, 1988)

Marijuana's placement in Schedule II would enable doctors to prescribe it to their patients. But top DEA bureaucrats rejected Judge Young's ruling and refused to reschedule marijuana. Two appeals later, petitioners experienced their first defeat in the 22-year-old lawsuit. On February 18, 1994, the U.S. Court of Appeals (D.C. Circuit) ruled that the DEA is allowed to reject its judge's ruling and set its own criteria—enabling the DEA to keep marijuana in Schedule I.

However, Congress has the power to reschedule marijuana via legislation, regardless of the DEA's wishes.

Temporary Compassion

In 1975, Robert Randall, who suffered from glaucoma, was arrested for cultivating his own marijuana. He won his case by using the "medical necessity defense," forcing the government to find a way to provide him with his medicine. As a result, the Investigational New Drug (IND) compassionate access program was established, enabling some patients to receive marijuana from the government.

The program was grossly inadequate at helping the potentially millions of people who need medical marijuana. Many patients would never consider the idea that an illegal drug might be their best medicine, and most who were fortunate enough to discover marijuana's medicinal value did not discover the IND program. Those who did often could not find doctors willing to take on the program's arduous, bureaucratic requirements.

In 1992, in response to a flood of new applications from AIDS patients, the George H.W. Bush administration closed the program to new applicants, and pleas to reopen it were ignored by subsequent administrations. The IND program remains in operation only for the seven surviving, previously-approved patients.

Public and Professional Opinion

There is wide support for ending the prohibition of medical marijuana among both the public and the medical community:

Since 1996, a majority of voters in Alaska, California, Colorado, the District of Columbia, Maine, Montana, Nevada, Oregon, and Washington state have voted in favor of ballot initiatives to remove criminal penalties for seriously ill people who grow or possess medical marijuana. Polls have shown that public approval of these laws has increased since they went into effect.

A CNN/Time poll published November 4, 2002 found that 80% of Americans believe that "adults should be allowed to legally use marijuana for medical purposes if their doctor prescribes it...." Over the last decade, polls have consistently shown between 60% and 80% support for legal access to medical marijuana. Both a statewide Alabama poll commissioned by the Mobile Register, published in July 2004, and a November 2004 Scripps Howard Texas poll reported 75% support.

Organizations supporting some form of physician-supervised access to medical marijuana include the American Academy of Family Physicians, the American Nurses Association, the American Public Health Association, the New England Journal of Medicine, and many others.

A 1990 scientific survey of oncologists (cancer specialists) found that 54% of those with an opinion favored the controlled medical availability of marijuana and 44% had already suggested at least once that a patient obtain marijuana illegally. [R. Doblin & M. Kleiman, "Marijuana as Antiemetic Medicine," Journal of Clinical Oncology 9 (1991): 1314-1319.]

Changing State Laws

The federal government has no legal authority to prevent state governments from changing their laws to remove state-level criminal penalties for medical marijuana use. Hawaii enacted a medical marijuana law via its state legislature in 2000, and Vermont enacted a similar law in 2004. State legislatures have the authority and moral responsibility to change state law to:

exempt seriously ill patients from state-level prosecution for medical marijuana possession and cultivation; and

exempt doctors who recommend medical marijuana from prosecution or the denial of any right or privilege.

Even within the confines of federal law, states can enact reforms that have the practical effect of removing the fear of patients being arrested and prosecuted under state law—as well as the symbolic effect of pushing the federal government to allow doctors to prescribe marijuana.

U.S. Congress: The Final Battleground

State governments that want to allow marijuana to be sold in pharmacies have been stymied by the federal government's overriding prohibition of marijuana.

Patients' efforts to bring change through the federal courts have made little progress, as the courts tend to defer to the DEA, which works aggressively to keep marijuana illegal. However, a Supreme Court case being considered during the 2004-2005 session could limit federal attacks on patients in states with medical marijuana laws.

Efforts to obtain FDA approval of marijuana are similarly stalled. Though some small studies of marijuana are now underway, the National Institute on Drug Abuse—the only legal source of marijuana for clinical research in the U.S.— has consistently made it difficult (and often nearly impossible) for researchers to obtain marijuana for their studies. At present, it is effectively impossible to do the sort of large-scale, extremely costly trials required for FDA approval.

In the meantime, patients continue to suffer. Congress has the power and the responsibility to change federal law so that seriously ill people nationwide can use medical marijuana without fear of arrest and imprisonment.

Source Citation:

Project, Marijuana Policy. "Medical Marijuana Should Be Legalized." Legalizing Drugs. Ed. Stuart A. Kallen. San Diego: Greenhaven Press, 2006. At Issue. Rpt. from "Medical Marijuana Briefing Paper—2003: The Need to Change State and Federal Law." www.mpp.org. 2003. Gale Opposing Viewpoints In Context. Web. 16 Apr. 2012.

Presidential Election Process

The procedure for choosing the President of the United States has changed dramatically since George Washington (1732–1799) was elected in January 1789, just months after the nation’s Constitution was ratified. Since the early nineteenth century, political parties have controlled the quadrennial presidential election process. Each party nominates a single candidate for president and one for vice president. In years past, these nominees were selected at each party’s national convention, frequently in back-room deals. By the early twentieth century, most states had established primary balloting to choose convention delegates who were pledged to support a particular candidate, so that major party nominations today are usually clinched before the party convention. The need to campaign during the primaries, and raise large sums of money for the purpose, means the election process begins years before each presidential election day.

Caucuses and Primaries, Delegates and Dollars

Article II of the Constitution stipulates that any natural-born citizen at least thirty-five years old who has resided in the country for at least fourteen years is eligible to become president. The Constitution established a unique body for selecting the chief executive of the federal government: the Electoral College (see below). But the Constitution makes no mention of what would become the major influence shaping the nation’s electoral process: political parties. George Washington himself never belonged to a political party, but by the end of his presidency the nucleus of a two-party system had emerged in the dispute between the Federalists, led by Alexander Hamilton (1755–1804) and John Adams (1735–1801), and the Democratic-Republicans, led by Thomas Jefferson (1743–1826) and James Madison (1751–1836). Ever since then, American politics has remained within a structure dominated by two national parties.

At first, each party’s members of Congress formed a caucus to nominate its candidates for president and vice president. This system fell into disrepute after 1824. In that year’s election, the House of Representatives awarded the presidency to John Quincy Adams (1767–1848), even though he had finished a distant second to Andrew Jackson (1767–1845) in the voting. By 1832, “King Caucus” had fallen by the wayside. The first Democratic National Convention was held that year in Baltimore; it affirmed the party’s nomination of then-incumbent President Jackson for re-election and selected Martin Van Buren (1782–1862) as his running mate. The party conventions soon became an election-year tradition, larded with pageantry and showmanship; the Republican Party held its first in 1856. To this day, Democrats and Republicans officially elect their presidential ticket at the convention by majority vote of delegates present. But historically, the nominating conventions strayed far from the democratic ideal, since state and regional party bosses often controlled the selection of delegates and traded blocs of votes for political favors. Over time, the pressure for more transparent, accountable mechanisms for choosing candidates led the parties to institute state primaries.

In the 2008 election cycle, forty-two states held presidential primaries, while seventeen conducted caucuses, a series of meetings in which candidates’ supporters attempt to win over uncommitted voters; some states use both formats. In every case, the goal of the process is to select the individuals who will represent their state in that year’s Republican and Democratic conventions. Delegates chosen are sworn to vote for their declared candidate at the convention—on the first ballot, at least. Most Democratic primaries award delegates using a formula based on each candidate’s percentage of the statewide vote, while many states use a winner-take-all format for Republican primaries. Certain state party leaders and office holders, especially on the Democratic side, are invited to the conventions as so-called “superdelegates,” further complicating the “delegate math” that determines who will emerge as the presidential nominee.

The Iowa caucuses and New Hampshire primaries mark the official opening of presidential election season, and for this reason, these two states receive a wildly disproportionate share of early attention from the candidates. In recent years, some states have moved their primary dates earlier in hopes of becoming more relevant in the nominating process. In 2008, twenty-four states held primaries or caucuses on a single day in February, dubbed “Super Tuesday.” However, that year’s primary race between Democratic senators Barack Obama (1961–) and Hillary Clinton (1947–) remained competitive until early June.

According to some commentators, the most decisive phase of the campaign comes months before any primary votes are cast. They call this “the money primary,” the largely hidden competition for campaign donations and key endorsements. Because the primaries are so essential and so grueling, campaign finance is an increasingly dominant factor in determining which candidacies will survive, which will sputter to a halt, and—crucially—which will be perceived as viable by the news media. According to the Center for Responsive Politics, the 2008 presidential campaigns raised a combined $2.4 billion, roughly double the cost of the previous cycle. The Supreme Court’s 2010 decision in the Citizens United case, which loosened restrictions on independent political spending by corporations and unions, meant the 2012 race would likely be far more expensive. The largest share of the money candidates raise goes ultimately to the television networks to pay for advertising.

The General Election Campaign

Because the primaries usually decide the major-party presidential candidates, the national conventions have lost much of their prior significance. But they remain a major summertime spectacle that marks the beginning of the general election season. The candidates make speeches accepting their party’s nomination—speeches that are televised and widely scrutinized—and often choose their running mates during convention week.

For minor-party and independent candidates, the process of running for president is considerably different. Some small parties do hold primaries and conventions, usually outside the media spotlight. For independents, a major challenge is to gather petition signatures and fulfill each state’s requirements for getting on the ballot. These candidates usually struggle for publicity and donations, although billionaire Ross Perot (1930–) escaped the latter challenge in 1992 by funding his campaign with his own wealth.

The final months of the campaign are a frenzy of polls, debates, endless appearances and stump speeches, ubiquitous advertisements, and news coverage focused on the horse race. While candidates must woo their own party faithful in the primaries, once nominated they typically shift the tone and substance of their messages, tacking toward the center to sway independents and swing voters.

The Electoral College

Years of campaigning, fundraising, and speculation culminate in election day, the Tuesday after the first Monday in November. Even though the candidates’ names appear on the ballot, voters themselves do not technically elect the president and vice president. They vote for electors—538 of them. Each state’s number of electoral votes corresponds to its number of senators and congresspeople, including three for the District of Columbia. The framers of the Constitution designed the Electoral College as a compromise between selection of the president by Congress, which they saw as too elitist, and direct popular election, which they saw as too democratic.

With the exception of Maine and Nebraska, each state assigns all its electoral votes to whichever candidate wins that state’s popular vote. A simple majority of electoral votes decides the contest. If no candidate reaches the “magic number” of 270 electoral votes, the election is thrown to the House of Representatives, with each state’s delegation assigned one vote, but this has not happened since 1824. The Electoral College system means a candidate can win the White House, in theory, by taking as few as eleven of the most populous states. In practice, it means that candidates devote a huge share of their attention to swing states such as Florida and Ohio, rather than aiming to maximize turnout in all fifty states. It further means a candidate can win the electoral vote, and thus the presidency, while losing the popular vote nationwide. Rutherford B. Hayes (1822–1893) in 1876, Benjamin Harrison (1833–1901) in 1888, and George W. Bush (1946–) in 2000 all earned this dubious distinction.

On the Monday after the second Wednesday in December, the electors meet in their respective state capitals to cast their votes for president and vice president, as prescribed in the Twelfth Amendment. (Nothing in the Constitution actually compels the electors to vote for their state’s winner, and on rare occasions a “faithless” elector switches to another candidate, although none has ever affected the outcome of an election.) A certificate of each state’s vote is sealed and sent to the President of the Senate, who opens them before a joint session of Congress to certify the vote and conclude the election process.

At the end of the embattled election of 2000, several Florida representatives attempted to file a complaint to block certification of their state’s results, alleging irregularities in the polling and vote count. However, because no Senator agreed to second the statement of objection, it could not be officially delivered and was gaveled down by the President of the Senate—who, ironically, was Vice President Al Gore (1948–), the losing Democratic candidate.

Public Financing

Since 1976, American taxpayers have had the opportunity to contribute to public financing of presidential campaigns through a check-off box, currently $3, on the 1040 tax form. Eligible candidates in the primaries may claim government matching funds for the first $250 donated by any individual. The program also gives $20-million grants (plus a cost of living adjustment [COLA]) to participating general election candidates. The catch is that to receive public financing, candidates must agree to spend no more than $10 million in the primaries or $20 million (both figures adjusted for the COLA) in the general campaign. In 2008, Democratic nominee Barack Obama first indicated he would accept public financing for the general election, then changed his mind, becoming the first major party nominee since the program started to reject the grant. Defenders of public financing said the program helps level the playing field and limit the influence of special interests in the election process, but many critics argued that the public program’s spending limits needed an upgrade to keep pace with the skyrocketing costs of winning a campaign for the White House. Some predicted that both parties would decline public financing in the 2012 presidential race, putting the system’s survival in jeopardy.

Source Citation:

"Presidential Election Process." Opposing Viewpoints Online Collection. Gale, Cengage Learning, 2010. Gale Opposing Viewpoints In Context. Web. 16 Apr. 2012.

The Electoral College Ensures Nationwide, Moderate, and Stable Parties

Peter W. Schramm is a professor of Political Science at Ashland University and executive director of the John M. Ashbrook Center for Public Affairs. He served during the Ronald Reagan Administration in the Department of Education.

The founding fathers did not intend America to have direct majority rule. Instead, they tried to balance majority power with rationality. The Electoral College system, in which the winner takes all, forces the parties to be ideologically and geographically broad-based and inclusive. This institution therefore makes America's government stable and moderate.

Those who are keen on abolishing the Electoral College in favor of a direct election of the president are, whether they know it or not, proposing the most radical transformation in our political system that has ever been considered. I am opposed to such a transformation for the same reason that I support the Constitution.

Those who make the argument that a simple majority, one man, one vote, as it were, is the fairest system, make the mistake of confusing democracy, or the simple and direct rule of the majority, with good government. When they argue that democracy is subverted by the Electoral College they are mistaken.

The opponents of the Electoral College confuse means with ends, ignore the logic of the Constitution, have not studied history and are oblivious to the ill effects its abolition would have.

The Constitution Restrains Majority Rule

The framers of the Constitution knew that all previous democracies were short-lived and died violently; they became tyrannies, wherein the unrestrained majorities acted only in their own interests and ignored the rights of the minority. The framers' "new science of politics" sought to avoid this.

The Constitution encourages the people to construct a certain kind of majority, a reasonable majority, a majority that tempers the passions and interests of the people.

While all political authority comes from the people—hence [James] Madison calls this a "popular" regime—the purpose of government according to the Declaration of Independence is to secure the natural rights of each citizen. The purpose of our intricate constitutional architecture—separation of powers, bicameralism, representation, staggered elections, federalism, the Electoral College—is to try to make as certain as possible, knowing that people are not angels, that this be accomplished. The Constitution attempts to combine consent with justice. This is what we mean when we say that the Constitution is a limiting document.

It is self-evident that all these devices depart, in one way or another, from simple numerical majoritarianism. For the Constitution, the formation of majorities is not simply a mathematical or quantitative problem.

Why should California have only two U.S. senators, while Wyoming also has two? Why should the election of senators be staggered so that only a third are up for election in any one cycle? Why should there be an independent judiciary? Why should the president be elected by an Electoral College that is controlled by the states?

The answers revolve around this massive fact: The Constitution encourages the people to construct a certain kind of majority, a reasonable majority, a majority that tempers the passions and interests of the people. The Constitution attempts to create a majority—one could even say many majorities—that is moderate, that is limited and one that will make violating the rights of the minority very difficult. In short, the Constitution is concerned with the character of majorities formed.

The Electoral College Is a Moderating Force

The Electoral College is the linchpin of this constitutional structure. Although Alexander Hamilton admitted that it wasn't perfect, he called it "excellent." The framers of the Constitution debated at length how a president should be chosen before settling on the Electoral College.

At the Constitutional Convention they twice defeated a plan to elect the president by direct vote, and also defeated a plan to have Congress elect the president. The latter would violate the separation of powers, while the former would, they argued, lead to what Hamilton called "the little arts of popularity," or what we call demagoguery.

So they crafted the Electoral College. This has come to mean that every four years a temporary legislature in each state is elected by the people, whose sole purpose is to elect a president. It then dissolves, to reappear four years later. In other words we have a democratic election for president, but it is democratic within each state. Yet, within each state, the winner of the popular vote takes all the electoral votes of that state. Citizens in Colorado this month [November 2004] made the right decision to keep a winner-take-all system.

This method not only bolsters federalism, but also encourages and supports a two-party system. In large measure because of the Electoral College, each political party is broad-based and moderate. Each party has to mount a national campaign, state by state, that considers the various different interests of this extended republic. Majorities are built that are both ideologically and geographically broad and moderate. While the two-party system does not eliminate partisanship, it does moderate it.

Each party is pulled to the center, producing umbrella-like coalitions before an election, rather than after, as happens in the more turbulent regimes of Europe, for example. As a result, we do not have runoffs, as most other democracies do.

It forces both parties to practice politics inclusively.

The Electoral College Creates Stability

Nor do we have a radicalized public opinion as the Europeans do. What we have is a system that produces good, constitutional politics, and the kind of stability that no other "popular regime" has ever experienced.

The Electoral College ensures that an elected president would be responsive not to a concentrated majority, but to the nation as a whole. This process is one of the most important safeguards of our democratic form of government. Leave the Electoral College and the Constitution alone.

Source Citation:

Schramm, Peter W. "The Electoral College Ensures Nationwide, Moderate, and Stable Parties." Does the Two-Party System Still Work? Ed. Noah Berlatsky. Detroit: Greenhaven Press, 2010. At Issue. Rpt. from "Is the Electoral College Passé?: No." Ashbrook Center. 2004. Gale Opposing Viewpoints In Context. Web. 16 Apr. 2012.

The Electoral College Is Undemocratic and Should Be Abolished

Bradford Plumer is an assistant editor at The New Republic, where he reports on energy and environmental issues. He also has written for The American Prospect, Audubon, The Journal of Life Sciences, In These Times, and Mother Jones.

The Electoral College is unfair, it can badly misrepresent the will of the people, and it gives disproportionate power to a few voters in swing states. Moreover, defenses of the Electoral College are unconvincing. The College does not encourage broad geographical appeal, for example, because candidates tend to refrain from campaigning in states that have few Electoral College votes or in states that typically are aligned with either the Democratic or the Republican party. Abolishing the Electoral College would not cause instability or chaos, but would promote democracy and rationality.

What have [Republican President] Richard Nixon, [Democratic President] Jimmy Carter, [Republican Presidential Candidate] Bob Dole, the U.S. Chamber of Commerce, and the AFL-CIO [a federation of labor unions] all, in their time, agreed on? Answer: Abolishing the Electoral College! They're not alone; according to a Gallup poll in 2000, taken shortly after Al Gore—thanks to the quirks of the Electoral College—won the popular vote but lost the presidency, over 60 percent of voters would prefer a direct election to the kind we have now. This year [2004] voters can expect another close election in which the popular vote winner could again lose the presidency. And yet, the Electoral College still has its defenders. What gives?

As George C. Edwards III, a professor of political science at Texas A&M University, reminds us in his new book, Why the Electoral College Is Bad for America, "The choice of the chief executive must be the people's, and it should rest with none other than them." Fans of the Electoral College usually admit that the current system doesn't quite satisfy this principle. Instead, Edwards notes, they change the subject and tick off all the "advantages" of the electoral college. But even the best-laid defenses of the old system fall apart under close scrutiny. The Electoral College has to go.

The Electoral College Is Dangerous and Unfair

Under the Electoral College system, voters vote not for the president, but for a slate of electors, who in turn elect the president. If you lived in Texas, for instance, and wanted to vote for Kerry, you'd vote for a slate of 34 Democratic electors pledged to Kerry. On the off-chance that those electors won the statewide election, they would go to Congress and Kerry would get 34 electoral votes. Who are the electors? They can be anyone not holding public office. Who picks the electors in the first place? It depends on the state. Sometimes state conventions, sometimes the state party's central committee, sometimes the presidential candidates themselves. Can voters control whom their electors vote for? Not always. Do voters sometimes get confused about the electors and vote for the wrong candidate? Sometimes.

The single best argument against the Electoral College is what we might call the disaster factor. The American people should consider themselves lucky that the 2000 fiasco was the biggest election crisis in a century; the system allows for much worse. Consider that state legislatures are technically responsible for picking electors, and that those electors could always defy the will of the people. Back in 1960, segregationists in the Louisiana legislature nearly succeeded in replacing the Democratic electors with new electors who would oppose John F. Kennedy. (So that a popular vote for Kennedy would not have actually gone to Kennedy.) In the same vein, "faithless" electors have occasionally refused to vote for their party's candidate and cast a deciding vote for whomever they please. This year, one Republican elector in West Virginia has already pledged not to vote for Bush; imagine if more did the same. Oh, and what if a state sends two slates of electors to Congress? It happened in Hawaii in 1960. Luckily, Vice President Richard Nixon, who was presiding over the Senate, validated only his opponent's electors, but he made sure to do so "without establishing a precedent." What if it happened again?

Perhaps most worrying is the prospect of a tie in the electoral vote. In that case, the election would be thrown to the House of Representatives, where state delegations vote on the president. (The Senate would choose the vice-president.) Because each state casts only one vote, the single representative from Wyoming, representing 500,000 voters, would have as much say as the 55 representatives from California, who represent 35 million voters. Given that many voters vote one party for president and another for Congress, the House's selection can hardly be expected to reflect the will of the people. And if an electoral tie seems unlikely, consider this: In 1968, a shift of just 41,971 votes would have deadlocked the election. In 1976, a tie would have occurred if a mere 5,559 voters in Ohio and 3,687 voters in Hawaii had voted the other way. The election is only a few swing voters away from catastrophe.

At the most basic level, the Electoral College is unfair to voters. Because of the winner-take-all system in each state, candidates don't spend time in states they know they have no chance of winning, focusing only on the tight races in the "swing" states. During the 2000 campaign, seventeen states didn't see the candidates at all, including Rhode Island and South Carolina, and voters in 25 of the largest media markets didn't get to see a single campaign ad. If anyone has a good argument for putting the fate of the presidency in the hands of a few swing voters in Ohio, they have yet to make it.

Defenses of the Electoral College Are Unconvincing

So much for the charges against the Electoral College. The arguments in favor of the Electoral College are a bit more intricate. Here's a quick list of the favorite defenses—and the counterarguments that undo them.

The founding fathers wanted it that way!—Advocates of the Electoral College often appeal to the wisdom of the founding fathers—after all, they set up the system, presumably they had something just and wise in mind, right? Wrong. History shows that the framers whipped up the Electoral College system in a hurry, with little discussion and less debate. Whatever wisdom the founding fathers had, they sure didn't use it to design presidential elections. At the time, most of the framers were weary after a summer's worth of bickering, and figured that George Washington would be president no matter what, so it wasn't a pressing issue.

Most of the original arguments in favor of an Electoral College system are no longer valid. The Electoral College was partially a concession to slaveholders in the South, who wanted electoral clout without letting their slaves actually vote. (Under the Electoral College, slaves counted towards a state's electoral vote total.) The framers also thought that ordinary people wouldn't have enough information to elect a president, which is not necessarily a concern today.

It protects state interests!—States don't really have coherent "interests," so it's hard to figure out exactly what this means. (Is there something, for instance, that all New Yorkers want purely by virtue of being New Yorkers?) Under the current system, presidents rarely campaign on local issues anyway—when [political science professor] George Edwards analyzed campaign speeches from 1996 and 2000, he found only a handful that even mentioned local issues. And that's as it should be. We have plenty of Congressmen and Senators who cater to local concerns. The president should take a broader view of the national interest, not be beholden to any one state or locale.

It's consistent with federalism!—All history students recall that the Great Compromise of 1787 created the House, which gives power to big populous states, and the Senate, which favors small states. The compromise was just that, a compromise meant to keep delegates happy and the Constitutional Convention in motion. Nevertheless, the idea that small states need protection has somehow become legitimated over the years, and is used to support the Electoral College—which gives small states disproportionate power in electing a president. But what, pray tell, do small states need protection from? It's not as if big states are all ganging up on Wyoming. The fiercest rivalries have always been between regions, like the South and North in the 1800s, or between big states, like California and Texas today. Furthermore, most small states are ignored in presidential campaigns, so it's not clear that the current system is protecting anything.

It protects minorities!—Some college buffs have argued that, since ethnic minorities are concentrated in politically competitive states, the Electoral College forces candidates to pay more attention to minorities. This sounds great, but it's wholly untrue. Most African-Americans, for instance, are concentrated in the South, which has rarely been a "swing" region. Hispanic voters, meanwhile, largely reside in California, Texas, and New York, all uncompetitive states. It's true that Cubans in Florida have benefited wonderfully from the Electoral College, but they represent an extremely narrow interest group. All other minority voters have less incentive to vote. It's no surprise that the Electoral College has often enabled presidential candidates to ignore minorities in various states—in the 19th century, for instance, voting rights were poorly enforced in non-competitive states.

George Will's Defense of the Electoral College Is Unconvincing

It makes presidential races more cohesive!—In an August [2004] column for Newsweek, [political commentator] George Will argued that the Electoral College somehow makes presidential elections more cohesive. Again, fine in principle, untrue in practice. Will first suggests that the system forces candidates to win a broad swathe of states, rather than just focusing on the most populous regions. But even if that happened, how is that worse than candidates focusing on a few random swing states? Or take Will's claim that the Electoral College system prevents "factions" from "uniting their votes across state lines." What? Factions already exist—white male voters vote Republican; African-Americans vote Democrat; evangelicals vote Republican; atheists vote Democrat. If our polarized country is a concern, it has little to do with the Electoral College.

It gives legitimacy to the winner!—Finally, Will argues that the Electoral College strengthens or legitimizes the winner. For example, Woodrow Wilson won only 41.8 percent of the popular vote, but his 81.9 percent electoral vote victory "produced a strong presidency." This suggests that voters are fools and that the electoral vote total somehow obscures the popular vote total. (If a candidate gets 45 percent of the popular vote, voters aren't going to think he got more than that just because he got 81 percent of the electoral vote total. And even if they do, do we really want a system whose aim is to mislead voters about election results?) Furthermore, there's no real correlation between a strong electoral vote showing and a strong presidency. George H.W. Bush received 426 electoral votes, while Harry Truman received only 303 in 1948 and George W. Bush a mere 271 in 2000. Yet the latter two were undeniably "stronger" presidents in their dealings with Congress. There's also no evidence that an electoral landslide creates a "mandate" for change. The landslides in 1984 and 1972 didn't give [Ronald] Reagan or [Richard] Nixon a mandate for much of anything—indeed, those two presidents got relatively little done in their second terms.

Direct Elections Would Work Fine

Even after all the pro-College arguments have come unraveled, College advocates often insist on digging in their heels and saying that a direct election would be even worse. They're still wrong. Here are the two main arguments leveled against direct elections:

1. The recounts would kill us!—It's true, a nationwide recount would be more nightmarish than, say, tallying up all the hanging chads [paper fragments created from partially punched vote cards] in Florida. At the same time, we'd be less likely to see recounts in a direct election, since the odds that the popular vote would fall within a slim enough margin of error are smaller than the odds that a "swing" state like Florida would need a recount. Under a direct election, since it usually takes many more votes to sway a race (as opposed to a mere 500 in Florida), there is less incentive for voter fraud, and less reason for candidates to think a recount will change the election. But set aside these arguments for a second and ask: why do so many people fear the recount? If it's such a bad idea to make sure that every vote is accurately tallied, then why do we even have elections in the first place?

2. Third parties would run amok!—The ultimate argument against the Electoral College is that it would encourage the rise of third parties. It might. But remember, third parties already play a role in our current system, and have helped swing the election at least four times in the last century—in 1912, 1968, 1992 and 2000. Meanwhile, almost every other office in the country is filled by direct election, and third parties play an extremely small role in those races. There are just too many social and legal obstacles blocking the rise of third parties. Because the Democratic and Republican parties tend to be sprawling coalitions rather than tightly-knit homogenous groups, voters have every incentive to work "within the system." Likewise, in a direct election, the two parties would be more likely to rally their partisans and promote voter turnout, which would in turn strengthen the two-party system. And if all else fails, most states have laws limiting third party ballot access anyway. Abolishing the Electoral College won't change that.

It's official: The Electoral College is unfair, outdated, and irrational. The best arguments in favor of it are mostly assertions without much basis in reality. And the arguments against direct elections are spurious at best. It's hard to say this, but Bob Dole was right: Abolish the Electoral College!

Source Citation:

Plumer, Bradford. "The Electoral College Is Undemocratic and Should Be Abolished." Does the Two-Party System Still Work? Ed. Noah Berlatsky. Detroit: Greenhaven Press, 2010. At Issue. Rpt. from "The Indefensible Electoral College." Mother Jones Online. 2004. Gale Opposing Viewpoints In Context. Web. 16 Apr. 2012.

Mobile Phones

The first mobile phones, also called cell phones, were marketed mainly to business executives as car phones. Although they were embraced as a way for busy executives to save time by accomplishing more on their drive home, the phones were bulky, prohibitively expensive for the general population, and their features were limited to making and receiving calls. Since that time the use of mobile phones has become a matter of great public concern. In addition to becoming a dinner table distraction, mobile phones are accused of causing traffic fatalities, increasing cancer rates, allowing invasions of privacy, and disrupting classrooms. Since their introduction on the market, mobile phones have become so light and inexpensive that nearly anyone can afford to own one. Further, the technology has advanced so that placing or receiving a phone call is no longer a mobile phone's only function. Modern mobile phone users can purchase phones capable of sending and receiving text messages, running computer applications, obtaining driving directions, and even allowing others to see their location.

Mobile Phones and Radiation

Mobile phones emit electromagnetic radiation when they connect to cell phone towers. The farther a phone is from a tower, the more radiation the phone has to emit to make a connection. Many people are concerned that this electromagnetic radiation from mobile phones may permeate brain tissue and cause health problems, such as tumors in the brain or in other tissue around the ear. For years the scientific community published study after study that seemed to support this concern, but many of these studies have come under intense scrutiny, and the health risk from using mobile phones is far from clear.

According to the Web site of the CTIA, the International Association for the Wireless Telecommunications Industry, "When it comes to your wireless device, rest assured that the scientific evidence and expert reviews from leading global health organizations such as the American Cancer Society, National Cancer Institute, World Health Organization and the United States Food and Drug Administration reflect a consensus based on published impartial scientific research showing there is no reason for concern." A quick scan of the various organizations they list, as well as other related organizations, does support their claim, but some organizations are not quite as reassuring as the CTIA.

In their reports most organizations stop short of saying "there is no reason for concern" and leave open the possibility that mobile phones may be detrimental to public health. The American Cancer Society (ACS), for example, backs up the claims of the CTIA, but includes a caveat: "Taken as a whole, most studies to date have not found a link between cell phone use and the development of tumors. However, these studies have had some important limitations." They state that because mobile phones are still relatively new, more time is needed to see if some health effects simply have not appeared yet. Also, no studies have been done on children, and existing studies have relied on people's memories, which may not be accurate, to determine how much they used a mobile phone.

Other organizations also conclude that there is not enough evidence to say that mobile phones are a health risk. The World Health Organization (WHO) released a report in 2008 stating, "To date there is no convincing biological or biophysical support for a possible association between exposure to ELF [electromagnetic] fields and the risk of leukaemia or any other cancer." The Federal Communications Commission (FCC), in response to Frequently Asked Questions on its Web site, states, "While some experimental data have suggested a possible link between exposure and tumor formation in animals exposed under certain specific conditions, the results have not been independently replicated. Many other studies have failed to find evidence for a link to cancer or any related condition."

Some recent studies have even gone as far as to suggest that mobile phones could improve health in surprising ways. Gary Arendash, the lead author of a study conducted by the University of South Florida in Tampa, was quoted in a Cosmos magazine article on January 8, 2010, stating, "Most surprising were the benefits to memory we clearly observed—just the opposite of what we predicted." This study found that exposure to radiation from mobile phones actually improved memory in mice and may possibly even protect against Alzheimer's disease.

However, one of the most comprehensive studies to date was released by the Environmental Working Group (EWG) on September 9, 2009. This report cast a particularly dark shadow on mobile phone safety. The report pointed out numerous flaws in the studies used by some organizations. For example, the Food and Drug Administration (FDA) based its evaluation of the safety of mobile phones on studies that followed people who had only used mobile phones for three years. EWG also points to several examples of newer studies with more thorough data. For example, they state, "A joint study by researchers in Denmark, Finland, Norway, Sweden and the United Kingdom found that people who had used cell phones for more than 10 years had a significantly increased risk of developing glioma, a usually malignant brain tumor, on the side of the head they had favored for cell phone conversations." Further, the EWG points out:

In response to the growing debate over the safety of cell phone emissions, government agencies in Germany, Switzerland, Israel, United Kingdom, France, and Finland and the European Parliament have recommended actions to help consumers reduce exposures to cell phone radiation, especially for young children.

In contrast, the two U.S. federal agencies that regulate cell phones, the Food and Drug Administration (FDA) and the Federal Communication Commission (FCC), have all but ignored evidence that long term cell phone use may be risky.

Although most studies continue to be dismissed, particularly in the United States, as inconclusive or needing additional research to confirm results, some in the scientific and medical fields are already taking precautions. In a September 22, 2009 article on Business Week Online, Olga Kharif points out, "Many oncologists say they limit their own cellphone usage, don't hold mobiles against their ear, and instead use speakerphones, headsets, and hands-free setups." Kharif also notes that Martin Blank, a researcher at Columbia University who is studying the effects of electromagnetic radiation on living cells, does not own a mobile phone, and his wife has one only for emergencies. Kharif wonders what information may emerge from future studies and quotes Senator Tom Harkin from a September 14, 2009 hearing as stating, "I am reminded of this nation's experience with cigarettes … Decades spanned between the first warning and the final, definitive conclusion that cigarettes cause lung cancer."

Mobile Phones and Driving Distractions

Although the health risks of using a mobile phone may be debated, the risk of using a phone while driving is clear. The Virginia Tech Transportation Institute (VTTI) issued a press release in October 2009 reporting that dialing a mobile phone increased the risk of a crash or near crash by 2.8 times in a light car and 5.9 times in a heavy vehicle, and texting increased that risk to 23.2 times in a heavy vehicle. The report states, "Our research has shown that teens tend to engage in cell phone tasks much more frequently—and in much more risky situations—than adults. Thus, our studies indicate that teens are four times more likely to get into a related crash or near-crash than their adult counterparts."

Further, although many people use hands-free devices as a way of minimizing the risk from driving and talking, it may not help. A 2008 study conducted at Carnegie Mellon University found that simply listening to a conversation reduced the brain activity associated with driving by 37 percent, and driving performance showed a "significant deterioration."

Many states are rushing to pass laws that limit mobile phone use on the roads. As of November 2009, texting-while-driving laws were on the books in eighteen states and the District of Columbia, and many individual cities have passed laws that ban texting or talking on handheld mobile phones while driving in their communities. Additionally, twenty-one states ban mobile phone use by all new drivers.

Mobile Phones and Privacy

"We're all too familiar with the concept of technology as a double-edged sword, and wireless is no exception," wrote Steven Levy in a June 7, 2004 article for Newsweek. He continues that "we can go anywhere and still maintain intimate contact with our work, our loved ones and our real-time sports scores. But the same persistent connectedness may well lead us toward a future where our cell phones tag and track us like FedEx packages, sometimes voluntarily and sometimes when we're not aware." The FCC requires all new mobile phones to include GPS systems to help 911 responders find the caller in the event of an emergency. Levy goes on to describe Worktrack, a product that allows employers to monitor an employee's location in the field. Since employers already have the legal ability to monitor their employees, it does not breach any privacy laws. Only two states, Connecticut and Delaware, require that employees be notified of electronic surveillance. However, as the technology advances, Levy and others fear that it may be used to track people without their awareness. In fact, four years later, Levy interviewed the CEO of one such company, which provides this technology to individuals.

That CEO is Sam Altman, the founder of Loopt, a service that allows users of GPS-equipped mobile phones to share their current location with others. Levy interviewed him for a Newsweek article that appeared on May 5, 2008.

In the interview Altman talked about the benefits of being able to share your location with friends. "It's amazing how often you're near someone and don't know about it—not in the same restaurant, but three restaurants down. It's such a common occurrence that some nights, rather than just go home at 11, I'll drive somewhere because I know I'll find people I can meet up with." However, he also pointed out a feature of Loopt that came about following a conversation with the National Network to End Domestic Violence. Altman states, "They were saying if a battered wife turned off the feature, the abusive husband would think something's wrong, so people need the ability to look like they're somewhere that they are not." In response to Levy's question about privacy issues with a service like Loopt, Altman responded, "It's the unwitting-use scenario. If I've opted in and I'm sharing my location, I at least know I'm doing that. But there's a scenario where someone installs a location-based service on my phone and I don't know about it. If then they can see my location with me being unaware of that, that's very scary. There's a lot we do at Loopt to make sure that doesn't happen."

Tracking Teens

Another issue is raised with the tracking of teenagers. While parents now have the technology to track where their teenager is at any moment, some are wondering if this is a violation of privacy. John Rosemond doesn't think so. Writing for The Charlotte Observer on November 22, 2005, when the technology was new, he contends that parents absolutely have a right to monitor their children's behavior. He states, "If I were a parent of a teen today … I would also be concerned about safety. In that regard, the mere fact that a teenager knows he's being monitored will significantly reduce, if not completely eliminate, the possibility of the teen venturing into a forbidden area of town, driving recklessly, or otherwise putting his and his friends' lives in danger. Reducing that possibility is worth more to a parent than a teenager can possibly appreciate." Teens may not feel that having their parents track their location is fair, but it is legal.

Mobile Phones and Schools

Many parents enjoy the security of knowing their child has a mobile phone in his or her backpack and argue that mobile phones are invaluable for arranging transportation or handling an emergency. Students enjoy the benefit of being able to text their friends or access the Internet right from their phone. Despite these advantages, mobile phones can also be a hassle for teachers who have to deal with them in the classroom. Armstrong Williams, a nationally syndicated columnist, argues in a June 26, 2006, article on Townhall.com, "Public schools have become war zones with teachers and administrators acting as the unequipped arbitrators. Cell phones are a big reason these behavior problems are occurring in schools everywhere around the country." He goes on to list problems such as kids calling in older kids or gangs when there is an argument, downloading inappropriate content, disrupting class, cheating on tests, or even setting up drug deals. However, many districts, such as Highlands County Schools in Sebring, Florida, have found a simple way to handle the problems mobile phones can create in classrooms without banning them completely. In a September 3, 2009, article for Highlands Today, Marc Valero reports that the district lifted its ban on mobile phones and allows students to keep them in their backpacks as long as they are turned off during the day.

Interphone Study

Mobile phones come with clear advantages and disadvantages. Although people all over the world have embraced the convenience and security of having a mobile phone, distractions caused by mobile phones in classrooms and on the roads have caused numerous concerns. Less clear is the effect mobile phones may have on health; however, users may not have to wait much longer for definitive evidence. The Interphone study, which spanned thirteen countries to determine if there are correlations between mobile phone use and brain tumors, is expected to be released later in 2010. This study is expected to offer the most comprehensive data yet, and many in the scientific community as well as the general public are eagerly awaiting the report's release.

Source Citation:

"Mobile Phones." Current Issues: Macmillan Social Science Library. Detroit: Gale, 2010. Gale Opposing Viewpoints In Context. Web. 18 Apr. 2012.

Cell Phones Should Be Banned in Schools

"Students survived for hundreds of years without cell phones and they don't need them now."

In this viewpoint, Armstrong Williams recommends prohibiting cell phones in school because, in his opinion, they are distracting to the user and to other students. Cell phones, he claims, are used to send text messages during class, browse sexual content on the Internet, cheat on tests, and even coordinate drug deals on school grounds. Regulating the use of cell phones in schools puts undue stress on administrators and teachers, he explains. A Christian conservative, Williams writes nationally syndicated columns and hosts radio and television shows.

As you read, consider the following questions:

1. In what two ways did Councilwoman Letitia James respond to the cell phone ban, in the author's assertion?

2. The notion that cell phones should be allowed in schools for safety is comparable to what other idea, according to Williams?

3. In the author's view, how do students use cell phones to incite violence?

Last month [May 2006] Mayor [Michael] Bloomberg and Schools Chancellor Joel Klein teamed up to ban cell phones from New York public schools. As expected, uproar ensued, but you may be shocked at where the racket came from.

No, it was not the students who were up in arms about having their precious lifelines taken away. It was the local politicos and parent groups who most opposed the ban.

An Uproar by Parents and Politicians

When I first heard about the cell phone ban for New York schools, I figured students would most vehemently oppose the ban. I guessed that they would be so disappointed about losing the opportunity to text-message their friends while in class, take pictures during breaks, surf the internet during lectures, and talk on the phone between periods that they would do all they could to overturn the ban. Instead, these students simply adjusted to the new rules and went back to the good old days of passing notes under the desks. But their parents and politicians did not back down so easily.

Public Advocate Betsy Gotbaum, city Controller William Thompson, several ranking members of the City Council, including Education Committee Chairman Robert Jackson and Land Use Committee Chairwoman Melinda Katz, all came out against the ban. A parents' group collected more than 1,200 signatures on a petition opposing the ban. And City Councilwoman Letitia James (Brooklyn) introduced legislation calling for a moratorium on cell phone confiscation. James also is exploring whether the Council has the authority to override Mayor Bloomberg and Klein on the issue, she said.

Excuses

Parent and political groups claim that students need the phones before and after school for safety and security reasons. They cite the scarce supply of pay phones and the nonexistent after-school programs as reasons why cell phones are needed to arrange for transportation or deal with an emergency. Also, most parents enjoy the idea of being able to contact their child at a moment's notice to inquire about their whereabouts and current activity.

I am shocked and disappointed that some parents and politicians believe that cell phones as safety devices are a worthy tradeoff for disruptions at school. That philosophy is comparable to claiming that weapons should be allowed in school to prevent after school attacks. Frankly, it just doesn't make sense. Students survived for hundreds of years without cell phones and they don't need them now. If parents are seriously worried about the safety of their children, they can take other steps to ensure their safety. A cell phone is not the answer.

Support Teachers by Upholding the Cell Phone Ban

Public schools have become war zones with teachers and administrators acting as the unequipped arbitrators. Cell phones are a big reason these behavior problems are occurring in schools everywhere around the country. Students are inciting violence by calling gangs and older kids anytime an argument occurs, running away from teachers who see them talking on the phone, and turning their cell phone ring tones to a pitch that adults cannot notice because of hearing deficiencies. Students are downloading inappropriate movies and images and sharing them among friends, which disrupts class and can lead to sexual harassment situations. Students are using cell phones to cheat by taking pictures of their answer sheets, sending the images to fellow students, or even text-messaging the answers. They also use cell phones to coordinate drug deals and to call into schools, where they fake absences by pretending to be their parents or other false identities. Besides distracting the cell phone users themselves, these disruptions make it impossible for other students to focus.

Cell phones put unneeded stress on teachers and administrators as they exhaust all of their tools to reach students. Kids today are more rebellious, more disrespectful and more undisciplined than ever. Adults need to take a stand and give kids more boundaries, not more freedom. This discipline starts at home, but it spreads to school as well. If teachers agree with the Mayor's ban (which they overwhelmingly do), then parents and politicians should too. Teachers have a tough enough job as it is and we must make it easier for them by upholding this ban on cell phones at schools.

Source Citation:

Williams, Armstrong. "Cell Phones Should Be Banned in Schools." School Policies. Ed. Jamuna Carroll. Detroit: Greenhaven Press, 2008. Opposing Viewpoints. Rpt. from "Classrooms Are No Place for Cell Phones." Townhall.com. 2006. Gale Opposing Viewpoints In Context. Web. 18 Apr. 2012.

Cell Phones Should Not Be Banned in Schools

"Why hurt the thousands of parents and students who use the cell phones appropriately—only to and from school or in cases of emergency?"

Randi Weingarten, president of the United Federation of Teachers (UFT), submitted an affidavit on behalf of her organization in a 2006 court case challenging a cell phone ban in New York City schools. This viewpoint is excerpted from that affidavit. While Weingarten concedes that cell phone use in schools can be disruptive, she asserts that a widespread ban is unnecessary. Instead, she suggests, each school should develop its own policy, which may require that cell phones be turned off in class but should allow their use before and after school and in case of emergency.

As you read, consider the following questions:

1. How does Weingarten explain educators' unique role and responsibilities in the instruction of schoolchildren?

2. The city administration compares cell phones to what dangerous instruments, according to a resolution cited in the viewpoint?

3. On what basis did the Department of Education reject a plan to construct lockers where cell phones could be stored, according to the author?

The UFT [United Federation of Teachers] represents more than 100,000 teachers and other educators who work in the City of New York's public schools ("Educators"). I respectfully make this affidavit in support of the UFT's motion for leave to appear in this action as amicus curiae [friend of the court] and, in that capacity, to present this Court with the unique perspective that Educators have on the issues raised herein.

As discussed in more detail below, cell phones are a lifeline for many parents and children. Indeed, one need look no further than the September 11 [2001] terrorist attacks, this month's [October 2006] [Cory] Lidle plane tragedy, or the Roosevelt Island tram incident [in which sixty-nine passengers were trapped for hours] to see their perceived importance in securing children's safety. At the same time, the use of cell phones inside classrooms and schools can be potentially disruptive and even dangerous. It is necessary, therefore, to find a balance that prohibits cell phone usage in school, but permits children to have them in traveling to and from school. Unlike most urban school systems that have crafted policies to achieve this balance, the [New York City] Department of Education (the "DOE") has instituted an outright ban on the possession of cell phones in schools. It has taken the position that, by doing so, it has facilitated the education of the "City's students in a safe and orderly environment in which teachers can devote their full energy to teaching...," [according to the] Affidavit of Rose Albanese-DePinto. ...

Not All Risky Items Can Be Banned

As the representative of teachers, the UFT is keenly aware that almost any item that a student could conceivably bring to school—including pens, pencils, and even paper—could potentially be used for mischief or harm. Yet, it would be counterproductive to ban every possible source of mischief from the educational environment. Instead, based on citywide parameters that ban their use in schools, parents, teachers and administrators could work together to develop a school-by-school cell phone policy. If teachers, parents and students are involved in this school-by-school planning, all will have a stake in enforcing the rules that are agreed upon—enforcement that is necessary for any aspect of an effective discipline code.

Educators are skilled professionals who, inter alia [among other things], are initially responsible for the supervision of classrooms and the maintenance of discipline and safety therein. Accordingly, they have first-hand experience in what is necessary to create a sound educational environment and safe schools. It is Educators who must, in the first instance, deal with the whole array of concerns—from cheating to bullying to violence—that the DOE claims supports the cell phone ban. Because of the unique role that Educators play in the instruction of New York City's public school children, the UFT respectfully believes that its perspective will be of special assistance to the Court in this matter. ...

The UFT has developed a special expertise with respect to school safety.

Thus ... the UFT's Executive Board unanimously passed a resolution stating, in pertinent part:

Use of Cell Phones in Schools Resolution

Whereas, the City Administration has ordered that students be prohibited from carrying cell phones to schools comparing them to guns, knives, and box cutters; and

Whereas, in an era when students often commute to schools by public transportation, this ban on cell phones has raised serious concerns among parents for the safety of their children; and

Whereas, this Administration pays lip service to empowering administration and staff to maintain orderly schools, but does not trust them to deal with incidents of cell phone abuse; be it

Resolved, that in lieu of banning the possession of student cell phones outright, each school develops and enforces a policy prohibiting cell phone use by students in a school building including escalating penalties on students who violate the school policy; and be it further

Resolved, that this policy be written into the safety plan.

A Wholesale Ban Is Unnecessary

City Council Member Bill de Blasio—a parent of school-aged children himself—joined me at a May 8, 2006 press conference in urging the Mayor and Chancellor to allow students to bring their cell phones to school, but ban their use inside the building. Said Council Member de Blasio:

As a middle school parent, I know that cell phones are an important way for parents and students to communicate. ... While cell phones can cause legitimate problems inside school, this is about safety, too. I want to help school-age families and educators strike a balance that ensures parents are empowered to take responsibility for their children's welfare. ...

Such a balance is possible. Ostensibly, the purpose of the cell phone ban is to remove an item that "negatively impact[s] the learning environment"; is "a tool for cheating"; can be used for "taking and disseminating illicit photograph[s]"; makes it easier to "bully" others as well as eliminates a target for theft [see Respondents' Memorandum of Law in Support of Their Verified Answer ("Resp. Br."), DePinto Affidavit]. The UFT is keenly aware of the need to maintain discipline and order in classrooms and agrees that a ban on the use of cell phones in schools is necessary. This does not translate, however, into a rational basis for a wholesale prohibition on students bringing them into a school, which is tantamount to a ban on their possession. Far more narrow restrictions would achieve the DOE's stated purposes without endangering public school children's safety.

Ms. [Rose Albanese-]DePinto's affidavit provides a series of examples of how cell phones have been misused by individual students. In a school system of over 1,400 schools and 1.1 million students, while these examples provide a sound basis for a classroom prohibition, they do not provide the same for a wholesale ban on outright possession to and from schools. They serve instead to illustrate why empowering Educators and parents to develop an enforceable school-by-school cell phone policy is more appropriate. Indeed, in many schools, a policy requiring students to turn off their cell phones during class time or to keep their cell phones in their locker may be sufficient to prevent the overwhelming majority of instances of their misuse. Surely, a certain percentage of students can be expected to violate such a policy as they do the existing cell phone ban. In those situations, an outright ban on possession may be appropriate, but why hurt the thousands of parents and students who use the cell phones appropriately—only to and from school or in cases of emergency?

Consequences for Cheating and Crime

For example, a cell phone can without doubt be a tool for cheating, and cheating is something we must crack down on.

But does that mean we should ban any material that can be used for cheating, including pencils and pens? Because it is obviously impossible to learn in such an environment, the DOE, as with other aspects of the discipline code, must empower its staff to prevent cheating and impose consequences when cheating is discovered. Likewise, with respect to cell phones, the DOE must empower its staff and be willing to impose the consequences for a violation.

Similarly, cell phones may be the target of crime, but so too can almost anything of value. Indeed, the DOE does not ban from schools many other items that are worth a lot more money than cell phones. For example, sneakers in the style [du jour] can cost hundreds of dollars. Instead, it relies, as it must, on Educators, administrators and parents to provide a safe atmosphere for learning on a school-by-school basis.

A Proper Balance

The DOE argues that it declined to adopt a plan similar to Petitioners' proposal that it construct lockers so that students could check their phones as they enter a building because [as stated in the DePinto affidavit] "the significant financial resources needed to design and build the facilities and thereafter supervise and staff such an endeavor in 1,400 schools" are better spent elsewhere. This misses the point. Whereas the DOE makes a compelling case for why cell phones cannot be used in classes, there are many schools that could craft a policy that permits students to keep a cell phone on their person but require it [to] be turned off, allow students to keep a cell phone in a school locker, or develop some other plan that is appropriate for the individual school. Then, and only then, in the few schools where there are persistent violations would a discussion of an outright ban on possession be appropriate. This would maintain the balance of keeping classrooms free from disruption, yet permit students and parents [to] have the perceived security that a cell phone provides.

Footnotes

1. In October 2006, a plane carrying New York Yankees pitcher Cory Lidle and his flight instructor crashed into a high-rise building in Manhattan, killing them both and creating chaos.

Source Citation:

Weingarten, Randi, and Supreme Court of the State of New York. "Cell Phones Should Not Be Banned in Schools." School Policies. Ed. Jamuna Carroll. Detroit: Greenhaven Press, 2008. Opposing Viewpoints. Rpt. from "affidavit on behalf of United Federation of Teachers, Camella Price et al. v. New York City Board of Education et al." 2006. Gale Opposing Viewpoints In Context. Web. 18 Apr. 2012.
