Basic Science Aff/Neg Affirmative 1AC Contention 1 – Scientific Methods Status quo ocean exploration and development is asymmetrically focused on the applied sciences, or the search for knowledge with an applied purpose in mind. This method of applied science enforces a paradigm of negative institutional support and bias which shapes the way we execute research and interpret data Carrier, Ph.D. in philosophy at the University of Münster, 1 (Martin, “Knowledge and Control: On the Bearing of Epistemic Values in Applied Science http://www.uni-bielefeld.de/philosophie/personen/carrier/Knowledge%20and%20ControlPU.pdf, accessed 7/3/14, LLM) The Primacy of Applied Science Among the general public, the esteem for science does not primarily arise from the fact that science endeavors to capture the structure of the universe or the principles that govern the tiniest parts of matter. Rather, public esteem—and public funding—is for the greater part based on the assumption that science has a positive impact on the economy and contributes to securing or creating jobs. Consequently, applied science, not pure research, receives the lion’s share of attention and support. It is not knowledge that is highly evaluated in the first place but control of natural phenomena. The relationship between science and technology is widely represented by the so called cascade model. This model conceives of technological progress as growing out of knowledge gained in basic research. Technology arises from the application of the outcome of epistemically driven research to practical problems. The applied scientist proceeds like an engineer. He employs the toolkit of established principles and brings general theories to bear on technological challenges. The cascade model entails that promoting epistemic science is the best way to stimulating technological advancement. The preference granted to applied science increasingly directs university research at practical goals; not infrequently, it is sponsored by industry. Public and private institutions increasingly pursue applied projects; the scientific work done at a university institute and a company laboratory tend to become indistinguishable. This convergence is emphasized by strong institutional links. Universities found companies in order to market products based on their research. Companies buy themselves into universities or conclude large-scale contracts concerning joint projects. The interest in application shapes large areas of present-day science. This primacy of application puts science under pressure to quickly supply solutions to practical problems. Science is the first institution called upon if advice in practical matters is needed. This applies across the board to economic challenges (such as measures apt to stimulate the economy), environmental problems (such as global climate change or ozone layer depletion), or biological risks (such as AIDS or BSE). The reputation of science depends on whether it reliably delivers on such issues. The question naturally arises, then, whether this pressure toward quick, tangible and useful results is likely to alter the shape of scientific research and to compromise the epistemic values that used to characterize it. There are reasons for concern. Given the intertwining of science and technology, it is plausible to assume that the dominance of technological interests affects science as a whole. 
The high esteem for marketable goods could shape pure research in that only certain problem areas are addressed and that proposed solutions are judged exclusively by their technological suitability. That is, the dominant technological interests might narrow the agenda of research and encourage sloppy quality judgments. The question is what the search for control of natural phenomena does to science and whether it interferes with the search for knowledge. The prioritization of applied science and utility distorts the epistemology of science and allows error replication; a new emphasis on pure research can change the ethos of science Hansson, Royal Institute of Technology Department of Philosophy and History of Technology Chair, 7 [Sven Ove, 3/28/07, “Values in pure and applied science,” Foundations of Science, 12:3, p. 258-260, EBSCO, IC] The corpus consists of generalized statements that describe and explain features of the world we live in, in terms defined by our methods of investigation and the concepts we have developed. Hence, what enters the corpus is not a selection of data but a set of statements of a more general nature. Whereas data refer to what has been observed, statements in the corpus refer to how things are and to what can be observed. Hypotheses are included into the corpus when the data provide sufficient evidence for them, and the same applies to corroborated generalizations that are based on explorative research.1 The scientific corpus is a highly complex construction, much too large to be mastered by a single person. Different parts of it are maintained by different groups of scientific experts. These parts are all constantly in development. New statements are added, and old ones removed, in each of the many subdisciplines, and a consolidating process based on contacts and cooperations between interconnected disciplines takes place continuously. In spite of this, the corpus is, at each point in time, reasonably well-defined. In most disciplines it is fairly easy to distinguish those statements that are, for the time being, generally accepted by the relevant experts from those that are contested, under investigation, or rejected. Hence, although the corpus is not perfectly well-defined, its vague margins are fairly narrow. The process that leads to modifications of the corpus is based on strict standards of evidence that are an essential part of the ethos of science. When determining whether or not a new scientific hypothesis should be accepted for the time being, the onus of proof falls squarely to its adherents. Similarly, those who claim the existence of an as yet unproven phenomenon have the burden of proof. In other words, the corpus has high entry requirements. This is essential to prevent scientific progress from being blocked by wishful thinking and from the pursuit of all sorts of blind alleys. We must be cautious with what we take for granted in our scientific work. But of course there are limits to how high the requirements can be. We cannot leave everything open. We must be prepared to take some risks of being wrong, but these must be relatively small risks. The entry requirements of the corpus can be described in terms of how we weigh the disadvantages for future research of unnecessarily leaving a question unsettled against those of settling it incorrectly. This is closely related to what values we assign to truth and to avoidance of error.
In addition, our decisions on corpus inclusion can be influenced by other values that concern usefulness in future science, such as the simplicity and the explanatory power of a theory. All these are values, but they are not moral values. Hempel called them epistemic utilities and delineated them as follows: “[T]he utilities should reflect the value or disvalue which the different outcomes have from the point of view of pure scientific research rather than the practical advantages or disadvantages that might result from the application of an accepted hypothesis, according as the latter is true or false. Let me refer to the kind of utilities thus vaguely characterized as purely scientific, or epistemic, utilities.” (Hempel 1960: 465) Whereas epistemic values determine what we allow into the corpus, influence from non-epistemic values is programmatically excluded. According to the ethos of science, what is included in the corpus should not depend on how we would like things to be but on what we have evidence for. Therefore, it is part of every scientist’s training to leave out non-epistemic values from her scientific deliberations as far as possible. This, of course, is not perfectly achieved. As was noted by Ziman, we researchers all have interests and values that we try to promote in our scientific work, “however hard we try to surpass them”. But as he also noted, “the essence of the academic ethos is that it defines a culture designed to keep them as far as possible under control” (Ziman 1996: 72).2 The focus on applied science ignores the fundamental constitutive value of pure science research; pure science is necessary to view knowledge and life as intrinsically valuable. Kirschenmann, University of Amsterdam Department of Philosophy of Religion and Comparative Study of Religions Professor, 1 [Peter P., 2001, “”INTRINSICALLY” OR JUST “INSTRUMENTALLY” VALUABLE? ON STRUCTURAL TYPES OF VALUES OF SCIENTIFIC KNOWLEDGE,” Journal for General Philosophy of Science, 32, EBSCO, p. 254-255, IC] In particular, I have pointed out that, and in what sense, scientific knowledge, like everyday knowledge, also possesses functional value and constitutive value. Furthermore, my investigation could, in general terms, be said to have issued in a certain defense of the intrinsic value of scientific knowing, along with the inherent value of scientific knowledge. In this connection, I have cautioned those who might be inclined to draw hasty moral conclusions from the intrinsic value of things. Taking a broader perspective, I should maintain that all forms of knowing can be attributed a fundamental constitutive value. Knowing is an essential part of our being and acting in the world, which in general is considered to be a good thing. The cognitive dimension pervades all of our lives, e.g. our emotions, our relations with things and with other people. At the same time, on the considerations and analysis presented here, many cases of knowing can be experiences that as such are intrinsically valuable, e.g. an everyday perception of some object or getting acquainted with a stranger. There are various views which would question, maybe not the conceptual distinctions proposed, but the value-type attributions countenanced here. I mentioned sociological views which prevent the question of an “intrinsic value” of scientific knowledge from arising. I referred to pragmatist views which in their way refuse to grant any “intrinsic value” of knowledge or truth. 
In a Hegelian perspective, clearly, one would not speak of the intrinsic or inherent value of finite, individual things or experiences at all; similarly, one would not accord them any final value. In a Christian theological perspective, one might at least hesitate to do so, holding that nothing in created reality has the source of its value in itself (with the possible exception of autonomous persons leading a God-pleasing life). As concerns knowledge, this idea is backed up by quite a theological tradition which confines itself to discussing differences in its “usefulness” and its “uselessness”, or vanity. Finally, I owe the reader some answer to the query, mentioned in the beginning, of how one could even consider attributing “intrinsic value” to such diverse things as knowledge and nature or animals. Apart from my arguing that not intrinsic value, but at most inherent value, can be attributed to concrete particular things, the answer is rather straightforward. Intrinsic value, like all other types of value discussed here, is a structurally characterized value type; such value types can in principle be applied to knowledge and sundry other things. Saying that something has intrinsic value just means that the source of its value lies in the thing itself. This does not imply that the intrinsic value of knowing is the same, or of the same kind, as the intrinsic value of natural activities of animals. There of course remains the laborious task of specifying what these intrinsic values substantially consist of. That of knowing surely includes some satisfaction of curiosity, while that of animal activity may include enjoyment of movement. Whether something morally ought to be striven for or be protected will also depend on such further substantial specifications.31 Applied science is a failing model for ocean policy; only a return to pure exploration can build understanding between scientists, policymakers, and science educators. Baptista et al, Ph.D. in Civil Engineering from MIT and director of NSF Sci & Tech center, 8 [Antonio, 2008, “Scientific exploration in the era of ocean observatories,” http://vgc.poly.edu/~juliana/pub/cmop-cise2008.pdf, 7-6-14, FCB] Future scientific exploration is likely to involve groups that are occasionally geographically distributed and often diverse in expertise. The disparity of expertise in handling and interpreting complex scientific data will be even wider when comparing trained scientists with managers and policy makers who will attempt to use observatories to inform their decisions or with students whose education will depend crucially on unfettered access to observatory data and products. A major challenge (and opportunity) is thus to facilitate a redefined scientific exploration of ocean data, in which we no longer expect that expert scientists who collect or generate the data sets will also conduct the first line of data analysis. Instead, analysts will face an abundance of heterogeneous data and tools, and they will lack expert knowledge of at least some of these ingredients. Under these circumstances, it will be necessary to expertly assist these analysts. It’s useful, as an abstraction, to conceptualize that such assistance will be provided in part by a multi-sensorial software-and-data environment that we refer to as RoboCMOP, as Figure 3 shows.
RoboCMOP could advance scientific exploration by accelerating the cycle of science and education, fostering creative thinking, reducing opportunities for key data going unnoticed, and providing tools for capturing, managing, and reusing abstract representations of scientific expertise. Plan The United States federal government should substantially increase its investment in the Okeanos Explorer program. Contention 2 - Solvency Current ocean budgets are cutting funds for pure science – the Okeanos Explorer needs increased federal investment Adams, NRDC Ocean writer, 14 [Alexandra, 3/25/14, NRDC Switchboard, “A Blue Budget Beyond Sequester: Taking care of our oceans,” http://switchboard.nrdc.org/blogs/aadams/a_blue_budget_beyond_sequester.html, accessed 7/11/14, TYBG] Unfortunately, some critical programs won’t get what they need this year. This year’s budget cuts funding for Ocean Exploration and Research by $7 million. This funding has supported exploration by the research vessel Okeanos of deep sea corals and other marine life in the submarine canyons and seamounts off the Mid-Atlantic and New England coasts that fisheries managers and ocean conservation groups, including NRDC, are working to protect. Even though funds are stretched, shortchanging exploration and research will lead to weaker protections for species and resources that are already under stress. Now is the key time to shift our federal budget priorities for ocean exploration – the potential benefits are endless Etzioni, Professor of International Affairs at George Washington University, 14 (Amitai Etzioni is University Professor and professor of International Affairs and director of the Institute for Communitarian Policy Studies at George Washington University, “Final Frontier vs. Fruitful Frontier: The Case for Increasing Ocean Exploration, Issues in Science and Technology, Summer 14, http://etzioni.typepad.com/files/etzioni---final-frontier-vs.-fruitful-frontier-ist-summer-2014.pdf) Every year, the federal budget process begins with a White House-issued budget request, which lays out spending priorities for federal programs. From this moment forward, President Obama and his successors should use this opportunity to correct a longstanding misalignment of federal research priorities: excessive spending on space exploration and neglect of ocean studies. The nation should begin transforming the National Oceanic and Atmospheric Administration (NOAA) into a greatly reconstructed, independent, and effective federal agency. In the present fiscal climate of zero-sum budgeting, the additional funding necessary for this agency should be taken from the National Aeronautics and Space Administration (NASA). The basic reason is that deep space—NASA’s favorite turf—is a distant, hostile, and barren place, the study of which yields few major discoveries and an abundance of overhyped claims. By contrast, the oceans are nearby, and their study is a potential source of discoveries that could prove helpful for addressing a wide range of national concerns from climate change to disease; for reducing energy, mineral, and potable water shortages; for strengthening industry, security, and defenses against natural disasters such as hurricanes and tsunamis; for increasing our knowledge about geological history; and much more. Nevertheless, the funding allocated for NASA in the Consolidated and Further Continuing Appropriations Act for FY 2013 was 3.5 times higher than that allocated for NOAA. 
Whatever can be said on behalf of a trip to Mars or recent aspirations to revisit the Moon, the same holds many times over for exploring the oceans; some illustrative examples follow. (I stand by my record: In The Moondoggle, published in 1964, I predicted that there was less to be gained in deep space than in near space—the sphere in which communication, navigations, weather, and reconnaissance satellites orbit—and argued for unmanned exploration vehicles and for investment on our planet instead of the Moon.) The Okeanos Explorer is a unique vessel; it has the best equipment and planning for exploration and best coordination with research scientists Lobecker et al, Physical Scientist with the NOAA, 12 [Elizabeth, 3-12, Oceanography VOL. 25 NO. 1, “Always Exploring,” 7-5-14, FCB] NOAA’s Okeanos Explorer, “America’s ship for ocean exploration,” systematically explores the ocean every day of every cruise to maximize public benefit from the ship’s unique capabilities. “Always Exploring” is a guiding principle. With 95% of the ocean unexplored, we pursue every opportunity to map, sample, explore, and survey at planned destinations as well as during transits. Throughout the ship’s geographically diverse 2010 and 2011 field seasons, multiple opportunities arose to transform standard operational transit cruises into interdisciplinary explorations by acquiring high-quality, innovative scientific data around the clock, and rapidly disseminating those data to the public. During cruise planning, transits are optimized to allow mapping of unexplored or unmapped regions. We review input received from ocean science and management communities to identify unexplored regions for possible inclusion. We also consult those scientists and managers to verify that potential targets remain a high priority and were not recently explored. The Okeanos Explorer Program also supports surveys of opportunity to add layers of scientific value to cruises. We conduct nonmapping surveys of opportunity and include well-defined exploratory operations that help transform standard ship shakedown and transit mapping cruises into multilayered voyages of discovery. Surveys selected are those that reflect the exploration mission or provide an opportunity to test additional capabilities that could be incorporated into systematic exploration operations. Institutional support of exploration is necessary to advance marine science Deacon et al, historian specializing in oceanography and fellow @ School of Ocean and Earth Southampton U, 01 [Margaret, Understanding the Oceans: A Century of Ocean Exploration, pg. 1, FCB] Expedition was a pioneering venture that had a profound impact both on the contemporary development of marine science, and on its subsequent metamorphosis into an international scientific discipline. This is why it continues to capture the imagination of successive generations of oceanographers. But what has its true legacy been? While the scale of the scientific achievement, and of its impact on later work, are amply borne out by examples given in this book. Chapters I and 2 take a rather more critical look at the expedition than perhaps might have been possible at the time of the celebrations, in (1972), of the centenary of its departure. 
They reveal that, in spite of being a truly remarkable achievement, both in terms of its organization and in the work it carried out, the Challenger Expedition did not provide the sort of impetus that its subsequent reputation might lead us to expect in either of the particular aspects of marine science highlighted in this book, that is, the significance of technological innovation and of adequate institutions in scientific development. History plainly shows that, in science, institutions as well as individuals have an important role to play. Science is not an abstract body of knowledge, but represents the human activity of observing and interpreting independently existing complex natural phenomena. To encapsulate the closest approximation possible at any one time concerning how these operate, scientists use concepts that they constantly seek to extend and refine, but do not necessarily agree over. The formulation of ideas may be the preserve of the individual, but those ideas only gain their power and influence through being promulgated and discussed in scientific societies and journals, and the detailed and interdisciplinary work needed to confirm or transform them, especially when directed towards an objective as large and complex as the ocean, needs to be done through organizations or groups. The Challenger’s enduring influence on marine science was due in great measure to the publication of the report of the expedition, but after this promising start there remained no public organization to carry on its work. It is only in the twentieth century that permanent institutions dedicated to oceanographic research have been established, mostly since 1945. Basic research is a precondition to applied research – without open-ended scientific questions, we can never determine the frame within which we need to solve problems Roll-Hansen, Historian and Philosopher of biology at University of Oslo, 9 [Nils, Centre for the Philosophy of Natural and Social Science, “Why the distinction between basic (theoretical) and applied (practical) research is important in the politics of science,” http://www.lse.ac.uk/CPNSS/research/concludedResearchProjects/ContingencyDissentInScience/DP/DP Roll-HansenOnline0409.pdf, accessed 7/5/14, TYBG] Basic research, on the other hand, is successful when it discovers new phenomena or new ideas of general interest. The general scientific interest is judged in the first instance by the discipline in question. But in the long run the promotion of other scientific disciplines is essential, and in the last instance the improvement of our general world picture is decisive. The aim of basic research is theoretical, to improve general understanding. It has no specific aim outside of this. But it is, of course, not accidental that improved understanding of the world increases our ability to act rationally and efficiently. It improves our grasp of what the world is like and is thus also a basis for developing efficient technologies. Some degree of realism with respect to scientific theories is inherent in basic research in this sense. The social effect of applied research, when successful, is solutions to practical problems as recognized by politicians, government bureaucrats, commercial entrepreneurs, etc. It is an instrument in the service of its patron. Applied research helps interpret and refine the patron's problems to make them researchable, and then investigates possible solutions.
The practical problems of the patron set the frame for the activity. Applied research is in this sense subordinate to social, economic and political aims. Rewards are primarily for results that help the patron realize his purposes. The result of basic research, when successful, is discovery of new phenomena and new ideas of general interest. By shaping our understanding of the world the discoveries of basic science become preconditions for any precise formulation of political and other practical problems. Sometimes basic research has a direct and dramatic effect by discovering new threatening problems and thus immediately setting a new political agenda. The present grave concern over climate change is a striking example of how politics is completely dependent on science to assess the problem, i.e. make educated guesses about its future magnitude and development, and think of possible countermeasures. The differences between applied and basic research in content, in social effects, and in criteria for success imply a different relationship to politics. Science does not only provide means (instruments) for solving tasks or problems set by politics, it also shapes social and political values and goals. Applied research is generally well adapted to serve the first task while basic research is best suited for the second. From the point of view of liberal democratic decision-making there is an important distinction between solving recognized problems and introducing and formulating new problems. In the first case science has an instrumental role subordinate to politics. In the second case the role is politically enlightening and depends on independence from politics to work well. When science is asked for advice on a fearful threat like climate change, which has not yet materialized but is only a prediction about future events, the importance of autonomy becomes particularly acute and correspondingly hard to maintain. Now is the key time to reemphasize pure science and exploration; observational data is necessary to allow subsequent developments in applied science Martin, Science and Technology Policy Researcher at University of Sussex, and Calvert, Science Technology and Innovation Studies, University of Edinburgh, 1 [Ben, Jane, 9/2001, SPRU, “Changing Conception of Basic Research,” http://www.oecd.org/science/scitech/2674369.pdf, accessed 7/5/14, TYBG] 3.5.1 Is basic research becoming more important? Some interviewees judged that conditions were getting better for basic research in the current climate. Several scientists in the UK commented that the political climate for basic research is better than it was in the 1970s and 1980s. One reason given for the increased importance of basic research is the emergence of certain new technologies (such as biotechnology) which require very basic research but then quickly produce marketable products (Elzinga 1985) – now a ‘fundamental’ breakthrough can simultaneously be a commercial breakthrough (Crook 1992). This is how ‘strategic’ research is often described. A UK policy maker observed that now it is often difficult to make a distinction between basic and applied research. Because the speed of research is increasing, because the speed of moving from discovery to exploitation is increasing, and because the same individual people can be involved in any point of the cycle.
One US policy maker ascribed this phenomenon to more advanced instrumentation; because tools are better, it is possible to go straight from the modeling stage (often involving computer imaging) to development, without having to go through the traditional intermediate phases. The interviewee mentioned pharmaceuticals in this context but implied that this was occurring more generally. This could also feed into the justification for the funding of basic research; a UK policy maker pointed out that because of the rapid pull-through from basic research into application it was now easier for the public to accept the importance of basic research. Yet if these suggestions are correct and basic research is becoming more important, it may be that because of its closer links with technology the research itself is changing in subtle ways. Basic research is necessary to make science a self-correcting process, which resolves problems in our knowledge and applications. Criticisms of science are targeted toward applied science, not basic research. Hutcheon, former prof of sociology of education @ U British Columbia, 93 [Pat, A Critique of "Biology as Ideology: The Doctrine of DNA", http://www.humanists.net/pdhutcheon/humanist%20articles/lewontn.htm, 7-5-14, FCB] The introductory lecture in this series articulated the increasingly popular "postmodernist" claim that all science is ideology. Lewontin then proceeded to justify this by stating the obvious: that scientists are human like the rest of us and subject to the same biases and socio-cultural imperatives. Although he did not actually say it, his comments seemed to imply that the enterprise of scientific research and knowledge building could therefore be no different and no more reliable as a guide to action than any other set of opinions. The trouble is that, in order to reach such a conclusion, one would have to ignore all those aspects of the scientific endeavor that do in fact distinguish it from other types and sources of belief formation. Indeed, if the integrity of the scientific endeavor depended only on the wisdom and objectivity of the individuals engaged in it we would be in trouble. North American agriculture would today be in the state of that in Russia today. In fact it would be much worse, for the Soviets threw out Lysenko's ideology-masquerading-as-science decades ago. Precisely because an alternative scientific model was available (thanks to the disparaged Darwinian theory) the former Eastern bloc countries have been partially successful in overcoming the destructive chain of consequences which blind faith in ideology had set in motion. This is what Lewontin's old Russian dissident professor meant when he said that the truth must be spoken, even at great personal cost. How sad that Lewontin has apparently failed to understand the fact that while scientific knowledge -- with the power it gives us -- can and does allow humanity to change the world, ideological beliefs have consequences too. By rendering their proponents politically powerful but rationally and instrumentally impotent, they throw up insurmountable barriers to reasoned and value-guided social change. What are the crucial differences between ideology and science that Lewontin has ignored? Both Karl Popper and Thomas Kuhn have spelled these out with great care -- the former throughout a long lifetime of scholarship devoted to that precise objective. Stephen Jay Gould has also done a sound job in this area.
How strange that someone with the status of Lewontin, in a series of lectures supposedly covering the same subject, would not at least have dealt with their arguments! Science has to do with the search for regularities in what humans experience of their physical and social environments, beginning with the most simple units discernible, and gradually moving towards the more complex. It has to do with expressing these regularities in the clearest and most precise language possible, so that cause-and-effect relations among the parts of the system under study can be publicly and rigorously tested. And it has to do with devising explanations of those empirical regularities which have survived all attempts to falsify them. These explanations, once phrased in the form of testable hypotheses, become predictors of future events. In other words, they lead to further conjectures of additional relationships which, in their turn, must survive repeated public attempts to prove them wanting -- if the set of related explanations (or theory) is to continue to operate as a fruitful guide for subsequent research. This means that science, unlike mythology and ideology, has a self-correcting mechanism at its very heart. A conjecture, to be classed as scientific, must be amenable to empirical test. It must, above all, be open to refutation by experience. There is a rigorous set of rules according to which hypotheses are formulated and research findings are arrived at, reported and replicated. It is this process -- not the lack of prejudice of the particular scientist, or his negotiating ability, or even his political power within the relevant university department -- that ensures the reliability of scientific knowledge. The conditions established by the community of science is one of precisely defined and regulated "intersubjectivity". Under these conditions the theory that wins out, and subsequently prevails, does so not because of its agreement with conventional wisdom or because of the political power of its proponents, as is often the case with ideology. The survival of a scientific theory such as Darwin's is due, instead, to its power to explain and predict observable regularities in human experience, while withstanding worldwide attempts to refute it -- and proving itself open to elaboration and expansion in the process. In this sense only is scientific knowledge objective and universal. All this has little relationship to the claim of an absolute universality of objective "truth" apart from human strivings that Lewontin has attributed to scientists. Because ideologies, on the other hand, do claim to represent truth, they are incapable of generating a means by which they can be corrected as circumstances change. Legitimate science makes no such claims. Scientific tests are not tests of verisimilitude. Science does not aim for "true" theories purporting to reflect an accurate picture of the "essence" of reality. It leaves such claims of infallibility to ideology. The tests of science, therefore, are in terms of workability and falsifiability, and its propositions are accordingly tentative in nature. A successful scientific theory is one which, while guiding the research in a particular problem area, is continuously elaborated, revised and refined, until it is eventually superseded by that very hypothesis-making and testing process that it helped to define and sharpen. An ideology, on the other hand, would be considered to have failed under those conditions, for the "truth" must be for all time.
More than anything, it is this difference that confuses those ideological thinkers who are compelled to attack Darwin's theory of evolution precisely because of its success as a scientific theory. For them, and the world of desired and imagined certainty in which they live, that very success in contributing to a continuously evolving body of increasingly reliable -- albeit inevitably tentative -- knowledge can only mean failure, in that the theory itself has altered in the process. Ocean exploration should be our highest research priority. The benefits from ocean exploration outweigh any other science expenditure, and are key to understanding climate change, energy production, medicine, and the economy. Etzioni, Professor of International Affairs at George Washington University, 14 (Amitai Etzioni is University Professor and professor of International Affairs and director of the Institute for Communitarian Policy Studies at George Washington University, “Final Frontier vs. Fruitful Frontier: The Case for Increasing Ocean Exploration, Issues in Science and Technology, Summer 14, http://etzioni.typepad.com/files/etzioni---final-frontier-vs.-fruitful-frontier-ist-summer-2014.pdf) Although these technologies are promising, additional research is needed not only for further development but also to adapt them to regional differences. For instance, ocean wave conversion technology is suitable only in locations in which the waves are of the same sort for which existing technologies were developed and in locations where the waves also generate enough energy to make the endeavor profitable. One study shows that thermohaline circulation— ocean circulation driven by variations in temperature and salinity—varies from area to area, and climate change is likely to alter thermohaline circulation in the future in ways that could affect the use of energy generators that rely on ocean currents. Additional research would help scientists understand how to adapt energy technologies for use in specific environments and how to avoid the potential environmental consequences of their use. Renewable energy resources are the ocean’s particularly attractive energy product; they contribute much less than coal or natural gas to anthropogenic greenhouse gas emissions. However, it is worth noting that the oceans do hold vast reserves of untapped hydrocarbon fuels. Deep-sea drilling technologies remain immature; although it is possible to use oil rigs in waters of 8,000 to 9,000 feet, greater depths require the use of specially-designed drilling ships that still face significant challenges. Deep-water drilling that takes place in depths of more than 500 feet is the next big frontier for oil and natural-gas production, projected to expand offshore oil production by 18% by 2020. One should expect the development of new technologies that would enable drilling petroleum and natural gas at even greater depths than presently possible and under layers of salt and other barriers. In addition to developing these technologies, entire other lines of research are needed to either mitigate the side effects of large-scale usage of these technologies or to guarantee that these effects are small. Although it has recently become possible to drill beneath Arctic ice, the technologies are largely untested. 
Environmentalists fear that ocean turbines could harm fish or marine mammals, and it is feared that wave conversion technologies would disturb ocean floor sediments, impede migration of ocean animals, prevent waves from clearing debris, or harm animals. Demand has pushed countries to develop technologies to drill for oil beneath ice or in the deep sea without much regard for the safety or environmental concerns associated with oil spills. At present, there is no developed method for cleaning up oil spills in the Arctic, a serious problem that requires additional research if Arctic drilling is to commence on a larger scale. More ocean potential When large quantities of public funds are invested in a particular research and development project, particularly when the payoff is far from assured, it is common for those responsible for the project to draw attention to the additional benefits—“spinoffs”— generated by the project as a means of adding to its allure. This is particularly true if the project can be shown to improve human health. Thus, NASA has claimed that its space exploration “benefit[ted] pharmaceutical drug development” and assisted in developing a new type of sensor “that provides realtime image recognition capabilities,” that it developed an optics technology in the 1970s that now is used to screen children for vision problems, and that a type of software developed for vibration analysis on the Space Shuttle is now used to “diagnose medical issues.” Similarly, opportunities to identify the “components of the organisms that facilitate increased virulence in space” could in theory—NASA claims—be used on Earth to “pinpoint targets for anti-microbial therapeutics.” Ocean research, as modest as it is, has already yielded several medical “spinoffs.” The discovery of one species of Japanese black sponge, which produces a substance that successfully blocks division of tumorous cells, led researchers to develop a late-stage breast cancer drug. An expedition near the Bahamas led to the discovery of a bacterium that produces substances that are in the process of being synthesized as antibiotics and anticancer compounds. In addition to the aforementioned cancer fighting compounds, chemicals that combat neuropathic pain, treat asthma and inflammation, and reduce skin irritation have been isolated from marine organisms. One Arctic Sea organism alone produced three antibiotics. Although none of the three ultimately proved pharmaceutically significant, current concerns that strains of bacteria are developing resistance to the “antibiotics of last resort” is a strong reason to increase funding for bioprospecting. Additionally, the blood cells of horseshoe crabs contain a chemical—which is found nowhere else in nature and so far has yet to be synthesized—that can detect bacterial contamination in pharmaceuticals and on the surfaces of surgical implants. Some research indicates that between 10 and 30 percent of horseshoe crabs that have been bled die, and that those that survive are less likely to mate. It would serve for research to indicate the ways these creatures can be better protected. Up to two-thirds of all marine life remains unidentified, with 226,000 eukaryotic species already identified and more than 2,000 species discovered every year, according to Ward Appeltans, a marine biologist at the Intergovernmental Oceanographic Commission of UNESCO. 
Contrast these discoveries of new species in the oceans with the frequent claims that space exploration will lead to the discovery of extraterrestrial life. For example, in 2010 NASA announced that it had made discoveries on Mars “that [would] impact the search for evidence of extraterrestrial life” but ultimately admitted that they had “no definitive detection of Martian organics.” The discovery that prompted the initial press release—that NASA had discovered a possible arsenic pathway in metabolism and that thus life was theoretically possible under conditions different than those on Earth—was then thoroughly rebutted by a panel of NASA-selected experts. The comparison with ocean science is especially stark when one considers that oceanographers have already discovered real organisms that rely on chemosynthesis— the process of making glucose from water and carbon dioxide by using the energy stored in chemical bonds of inorganic compounds—living near deep sea vents at the bottom of the oceans. The same is true of the search for mineral resources. NASA talks about the potential for asteroid mining, but it will be far easier to find and recover minerals suspended in ocean waters or beneath the ocean floor. Indeed, resources beneath the ocean floor are already being commercially exploited, whereas there is not a near-term likelihood of commercial asteroid mining. Another major justification cited by advocates for the pricey missions to Mars and beyond is that “we don’t know” enough about the other planets and the universe in which we live. However, the same can be said of the deep oceans. Actually, we know much more about the Moon and even about Mars than we know about the oceans. Maps of the Moon are already strikingly accurate, and even amateur hobbyists have crafted highly detailed pictures of the Moon—minus the “dark side”—as one set of documents from University College London’s archives seems to demonstrate. By 1967, maps and globes depicting the complete lunar surface were produced. By contrast, about 90% of the world’s oceans had not yet been mapped as of 2005. Furthermore, for years scientists have been fascinated by noises originating at the bottom of the ocean, known creatively as “the Bloop” and “Julia,” among others. And the world’s largest known “waterfall” can be found entirely underwater between Greenland and Iceland, where cold, dense Arctic water from the Greenland Sea drops more than 11,500 feet before reaching the seafloor of the Denmark Strait. Much remains poorly understood about these phenomena, their relevance to the surrounding ecosystem, and the ways in which climate change will affect their continued existence. In short, there is much that humans have yet to understand about the depths of the oceans, further research into which could yield important insights about Earth’s geological history and the evolution of humans and society. Addressing these questions surpasses the importance of another Mars rover or a space observatory designed to answer highly specific questions of importance mainly to a few dedicated astrophysicists, planetary scientists, and select colleagues.
Science Diplomacy Advantage 1AC Pure science is key to sustainable science diplomacy and global leadership Coletta, PhD in Political Science at Duke University, 9 (Damon, Masters in Public Policy @ Harvard, Assoc Prof of Geopolitics & National Security Policy @ US Air Force Academy, September 2009, http://www.usafa.edu/df/inss/Research%20Papers/2009/09%20Coletta%20Science%20and%20InfluenceINSS(FINAL).pdf, accessed 7/9/14, LLM) Less appreciated is how scientific progress facilitates diplomatic strategy in the long run, how it contributes to Joseph Nye’s soft power, which translates to staying power in the international arena. One possible escape from the geopolitical forces depicted in Thucydides’ history for all time is for the current hegemon to maintain its lead in science, conceived as a national program and as an enterprise belonging to all mankind. Beyond the new technologies for projecting military or economic power, the scientific ethos conditions the hegemon’s approach to social-political problems. It affects how the leader organizes itself and other states to address well-springs of discontent—material inequity, religious or ethnic oppression, and environmental degradation. The scientific mantle attracts others’ admiration, which softens or at least complicates other societies’ resentment of power disparity. Finally, for certain global problems—nuclear proliferation, climate change, and financial crisis—the scientific lead ensures robust representation in transnational epistemic communities that can shepherd intergovernmental negotiations on to a conservative, or secular, path in terms of preserving international order. In today’s order, U.S. hegemony is yet in doubt even though military and economic indicators confirm its status as the world’s lone superpower. America possesses the material wherewithal to maintain its lead in the sciences, but it also desires to bear the standard for freedom and democracy. Unfortunately, patronage of basic science does not automatically flourish with liberal democracy. The free market and the mass public impose demands on science that tend to move research out of the basic and into applied realms. Absent the lead in basic discovery, no country can hope to pioneer humanity’s quest to know Nature. There is a real danger U.S. state and society could permanently confuse sponsorship of technology with patronage of science, thereby delivering a self-inflicted blow to U.S. leadership among nations. US Science Diplomacy promotes solutions to multiple systemic issues Fedoroff, Science and Technology Adviser to the Secretary of State and the Administrator of USAID, 8 (Nina, April 2, 2008, TESTIMONY BEFORE THE HOUSE SCIENCE SUBCOMMITTEE ON RESEARCH AND SCIENCE EDUCATION, http://gop.science.house.gov/Media/Hearings/research08/April2/fedoroff.pdf, accessed 7-11-2014, LK) The welfare and stability of countries and regions in many parts of the globe require a concerted effort by the developed world to address the causal factors that render countries fragile and cause states to fail. Countries that are unable to defend their people against starvation, or fail to provide economic opportunity, are susceptible to extremist ideologies, autocratic rule, and abuses of human rights. As well, the world faces common threats, among them climate change, energy and water shortages, public health emergencies, environmental degradation, poverty, food insecurity, and religious extremism.
These threats can undermine the national security of the United States, both directly and indirectly. Many are blind to political boundaries, becoming regional or global threats. The United States has no monopoly on knowledge in a globalizing world and the scientific challenges facing humankind are enormous. Addressing these common challenges demands common solutions and necessitates scientific cooperation, common standards, and common goals. We must increasingly harness the power of American ingenuity in science and technology through strong partnerships with the science community in both academia and the private sector, in the U.S. and abroad among our allies, to advance U.S. interests in foreign policy. 2AC – Pure Science Solves SD Pure science is key to international science diplomacy Anthis, Rhodes Scholar Post-Doctoral Researcher, 9 (Nick, September 17th, “THE UNIVERSALITY OF BASIC SCIENCE MAY BE THE DEEPEST LINK BETWEEN THE US AND THE MUSLIM WORLD”, http://seedmagazine.com/content/article/a_universal_truth/, accessed 7/9/14, LLM) In more general terms, scientific diplomacy is an idea that makes a great deal of sense. Most simply, in our 21st century society, science and technology so permeate our everyday lives that few areas of government policy can regularly ignore such considerations. More poignantly, however, science is fundamentally an international endeavor. Even the least senior scientists (i.e. grad students and postdocs) may travel internationally at a frequency that rivals that of the more senior members of many other professions. A lab in the US may have ongoing scientific collaborations (or heated competitions) with labs in Europe, Asia, or elsewhere. Advances in technology have aided these collaborations tremendously, making differences in time zones the only real obstacle still preventing regular face-to-face communication (by voice-over IP video conferencing) between scientists on opposite sides of the globe. Finally, scientific findings are published in international journals accessible to anyone who reads English and whose institution subscribes to the journal (although the rise of open-access publishing is easing this final constraint). This internationality stems from another fundamental aspect of science: that its truths are universal. Independent of location, culture, or religion, the process of evaluating scientific knowledge should—in principle, at least—remain the same. Of course, as Jasanoff points out, the successful application of scientific findings to address societal needs is affected by all of these subjective factors. But the universality of basic science may be the deepest link that the US and the Muslim world share. (On the flipside, we also share many of the same enemies of scientific progress; as in the US, creationism has flourished in many majority-Muslim countries.) Today, the US can still claim to be the world’s greatest scientific power—though maybe only tenuously. A thousand years ago, however, the Middle East would have unequivocally held that designation—another common link and an important reminder that preeminence is not permanent. So, where does this leave us in terms of actual scientific diplomacy? Centers of scientific excellence and science envoys are both good ideas, and I expect that we’ll see a vamped-up corps of science envoys in the very near future.
Beyond these actions, though, the Obama administration should look for ways to encourage further collaborations between practicing scientists in the US and the Muslim world, and programs along these lines may be simpler to implement and more likely to yield the desired results. New education and travel grants to send American scientists to work in the Middle East and elsewhere—and vice versa—would be one avenue. One of the greatest gifts the US has to offer the outside world is graduate education at our many research universities, and we need to ensure that this option is as accessible as possible—and not hampered by the visa and immigration difficulties that became so much more common after 9/11. Additional grants to bring outside scientists to the US to attend conferences or workshops or to meet with collaborators could also be helpful. These actions should help foster the exchange of ideas between scientists in the US and Muslim countries. New scientific collaborations will help advance scientific progress and may help focus resources to pertinent problems that would otherwise be neglected. Such collaborations also have the immediate benefit of improving the scope and impact of the scientists’ work, assisting with career advancement and raising the prestige of local research communities. In the long run, the hope is that this exchange of scientific ideas will contribute to greater cross-cultural appreciation and understanding. Given the vast resources that have been wasted creating an enormous credibility gap between the US and the Muslim world (particularly through the Iraq war), scientific diplomacy is certainly a cause worth funding. 2AC – SD Solves WoT Science diplomacy is key to the war on terror – it fosters development that weakens the impetus and secures loose WMDs Fedoroff, Science and Technology Adviser to the Secretary of State and the Administrator of USAID, 8 (Nina, April 2, 2008, TESTIMONY BEFORE THE HOUSE SCIENCE SUBCOMMITTEE ON RESEARCH AND SCIENCE EDUCATION, http://gop.science.house.gov/Media/Hearings/research08/April2/fedoroff.pdf, accessed 7-11-2014, LK) The creation of economic opportunity can do much more to combat the rise of fanaticism than can any weapon. An essential part of the war on terrorism is a war of ideas. The war of ideas is a war about rationalism as opposed to irrationalism. Science and technology put us firmly on the side of rationalism by providing ideas and opportunities that improve people's lives. We may use the recognition and the goodwill that science still generates for the United States to achieve our diplomatic and developmental goals. Additionally, the Department continues to use science as a means to reduce the proliferation of the weapons of mass destruction and prevent what has been dubbed ‘brain drain’. Through cooperative threat reduction activities, former weapons scientists redirect their skills to participate in peaceful, collaborative international research in a large variety of scientific fields. In addition, new global efforts focus on improving biological, chemical, and nuclear security by promoting and implementing best scientific practices as a means to enhance security, increase global partnerships, and create sustainability.
2AC – SD Solves Warming International science diplomacy key to international solutions to warming Hulme and Mahony, Fellows on the Science, Technology and Society Program at Harvard University 10 [Mike and Martin, “Climate change: what do we know about the IPCC?”, http://mikehulme.org/wp-content/uploads/2010/01/Hulme-Mahony-PiPG.pdf, accessed 7/12/14, LK] The consequences of this ‘geography of IPCC expertise’ are significant, affecting the construction of IPCC emissions scenarios (Parikh, 1992), the framing and shaping of climate change knowledge (Shackley, 1997; Lahsen, 2007; O’Neill et al., 2010) and the legitimacy of the knowledge assessments themselves (Elzinga, 1996; Weingart, 1999; Lahsen, 2004; Grundmann, 2007; Mayer & Arndt, 2009; Beck, 2010). As Bert Bolin, the then chairman of the IPCC remarked back in 1991: “Right now, many countries, especially developing countries, simply do not trust assessments in which their scientists and policymakers have not participated. Don’t you think credibility demands global representation?” (cited in Schneider, 1991). Subsequent evidence for such suspicions has come from many quarters (e.g. Karlsson et al., 2007) and Kandlikar and Sagar concluded their 1999 study of the North-South knowledge divide by arguing, “... it must be recognised that a fair and effective climate protection regime that requires cooperation with developing countries, will also require their participation in the underlying research, analysis and assessment” (p.137). This critique is also voiced more recently by Myanna Lahsen (2004) in her study of Brazil and the climate change regime: “Brazilian climate scientists reflect some distrust of ... the IPCC, which they describe as dominated by Northern framings of the problems and therefore biased against interpretations and interest of the South” (p.161). Ext - Inherency Underfunding The USfg is underfunding ocean exploration – it’s key to solve the economy and strengthen leadership Bidwell, US News, 13 [Allie, 9-25-13, “Scientists Release First Plan for National Ocean Exploration Program,” http://www.usnews.com/news/articles/2013/09/25/scientists-release-first-plan-for-national-ocean-exploration-program, US News, FCB] More than three-quarters of what lies beneath the surface of the ocean is unknown, even to trained scientists and researchers. Taking steps toward discovering what resources and information the seas hold, the National Oceanic and Atmospheric Administration and the Aquarium of the Pacific released on Wednesday a report that details plans to create the nation's first ocean exploration program by the year 2020. The report stems from a national convening of more than 100 federal agencies, nongovernmental organizations, nonprofit organizations and private companies to discuss what components should make up a national ocean exploration program and what will be needed to create it. "This is the first time the explorers themselves came together and said, 'this is the kind of program we want and this is what it's going to take,'" says Jerry Schubel, president and CEO of the Aquarium of the Pacific, located in Long Beach, Calif. "That's very important, particularly when you put it in the context that the world ocean is the largest single component of Earth's living infrastructure ... and less than 10 percent of it has ever been explored."
In order to create a comprehensive exploration program, Schubel says it will become increasingly important that federal and state agencies form partnerships with other organizations, as it is unlikely that government funding for ocean exploration will increase in the next few years. Additionally, Schubel says there was a consensus among those explorers and stakeholders who gathered in July that participating organizations need to take advantage of technologies that are available and place a greater emphasis on public engagement and citizen exploration – utilizing the data that naturalists and nonscientists collect on their own. "In coastal areas at least, given some of these new low-cost robots that are available, they could actually produce data that would help us understand the nation's coastal environment," Schubel says. Expanding the nation's ocean exploration program could lead to more jobs, he adds, and could also serve as an opportunity to engage children and adults in careers in science, technology, engineering and mathematics, or STEM. "I think what we need to do as a nation is make STEM fields be seen by young people as exciting career trajectories," Schubel says. "We need to reestablish the excitement of science and engineering, and I think ocean exploration gives us a way to do that." Schubel says science centers, museums and aquariums can serve as training grounds to give children and adults the opportunity to learn more about the ocean and what opportunities exist in STEM fields. "One thing that we can contribute more than anything else is to let kids and families come to our institutions and play, explore, make mistakes, and ask silly questions without being burdened down by the kinds of standards that our formal K-12 and K-14 schools have to live up to," Schubel says. Conducting more data collection and exploration quests is also beneficial from an economic standpoint because explorers have the potential to identify new resources, both renewable and nonrenewable. Having access to those materials, such as oils and minerals, and being less dependent on other nations, Schubel says, could help improve national security. Each time explorers embark on a mission to a new part of the ocean, they bring back more detailed information by mapping the sea floor and providing high-resolution images of what exists, says David McKinnie, a senior advisor for NOAA's Office of Ocean Exploration and Research and a co-author of the report. On almost every expedition, he says, the scientists discover new species. In a trip to Indonesia in 2010, for example, McKinnie says researchers discovered more than 50 new species of coral. "It's really a reflection of how unknown the ocean is," McKinnie says. "Every time we go to a new place, we find something new, and something new about the ocean that's important." And these expeditions can have important impacts not just for biological cataloging, but also for the environment, McKinnie says. In a 2004 expedition in the Pacific Ocean, NOAA scientists identified a group of underwater volcanoes that were "tremendous" sources of carbon dioxide, and thus contributed to increasing ocean acidification, McKinnie says. Research has shown that when ocean waters become more acidic from absorbing carbon dioxide, they produce less of a gas that protects the Earth from the sun's radiation and can amplify global warming. But until NOAA's expedition, no measures accounted for carbon dioxide produced from underwater volcanoes. 
"It's not just bringing back pretty pictures," McKinnie says. "It's getting real results that matter." US funding of ocean exploration is chronically underfunded Dove, Georgia Aquarium director of research, and McClain, Assistant Director of Science for the National Evolutionary Synthesis Center 12 [Craig, 10-16-12, Deep Sea News, “We Need an Ocean NASA Now,” http://deepseanews.com/2012/10/we-need-an-ocean-nasa-now-pt-1/, 7-12-14, FCB] For too long ocean exploration has suffered from chronic underfunding and the lack of an independent agency with a dedicated mission. Here, Al Dove and I call for the creation of a NASA-style agency to ensure the future health of US ocean science and exploration. Over a decade ago, one of us (CM) made his first submersible dive off of Rum Cay in the Bahamas. At the surface the temperature was a warm 91˚F and at the bottom 2,300 feet down the temperature was near freezing. Despite my large size, I don’t remember feeling cramped inside the soda can-sized sub at any moment. The entire time I pressed my face against a 6-inch porthole, my cheek against the cool glass, and focused my eyes on the few feet of illuminated sea floor around me and the miles of black beyond. Here in the great depths of oceans I got my first look at the giant isopod, a roly-poly the size of a large shoe. This beast and the surrounding abyss instantly captured my imagination, launching me on a journey of ocean science and exploration to unravel the riddles of life in the deep. A thousand miles away, off the coast of Yucatan Mexico, the other of us (AD) experienced equal wonder at the discovery of the largest aggregation ever recorded of the largest of fish in the world, the whale shark. These spotted behemoths gather annually in the hundreds off the coast of Cancun, one of the world’s most popular tourist destinations, and yet this spectacular biological was unknown to science until 2006. Swimming among them, I reverted to a childish state of wonder, marveling at their size, power and grace, and boggling that they have probably been feeding in these waters since dinosaurs, not tourists, inhabited the Yucatan. Whether giant fish or giant crustaceans, are opportunities to uncover the ocean’s mysteries are quickly dwindling. The Ghost of Ocean Science Present Our nation faces a pivotal moment in exploration of the oceans. The most remote regions of the deep oceans should be more accessible now than ever due to engineering and technological advances. What limits our exploration of the oceans is not imagination or technology but funding. We as a society started to make a choice: to deprioritize ocean exploration and science. In general, science in the U.S. is poorly funded; while the total number of dollars spent here is large, we only rank 6th in world in the proportion of gross domestic product invested into research. The outlook for ocean science is even bleaker. In many cases, funding of marine science and exploration, especially for the deep sea, are at historical lows. In others, funding remains stagnant, despite rising costs of equipment and personnel. The Joint Ocean Commission Initiative, a committee comprised of leading ocean scientists, policy makers, and former U.S. secretaries and congressmen, gave the grade of D- to funding of ocean science in the U.S. Recently the Obama Administration proposed to cut the National Undersea Research Program (NURP) within NOAA, the National Oceanic and Atmospheric Administration, a move supported by the Senate. 
In NOAA’s own words, “NOAA determined that NURP was a lower-priority function within its portfolio of research activities.” Yet, NURP is one of the main suppliers of funding and equipment for ocean exploration, including both submersibles at the Hawaiian Underwater Research Laboratory and the underwater habitat Aquarius. This cut has come despite an overall request for a 3.1% increase in funding for NOAA. Cutting NURP saves a meager $4,000,000 or 1/10 of NOAA’s budget and 1,675 times less than we spend on the Afghan war in just one month. One of the main reasons NOAA argues for cutting funding of NURP is “that other avenues of Federal funding for such activities might be pursued.” However, “other avenues” are fading as well. Some funding for ocean exploration is still available through NOAA’s Ocean Exploration Program. However, the Office of Ocean Exploration, the division that contains NURP, took the second biggest cut of all programs (-16.5%) and is down 33% since 2009. Likewise, U.S. Naval funding for basic research has also diminished. The other main source of funding for deep-sea science in the U.S. is the National Science Foundation which primarily supports biological research through the Biological Oceanography Program. Funding for science within this program remains stagnant, funding larger but fewer grants. This trend most likely reflects the ever increasing costs of personnel, equipment, and consumables which only larger projects can support. Indeed, compared to rising fuel costs, a necessity for oceanographic vessels, NSF funds do not stretch as far as even a decade ago. Shrinking funds and high fuel costs have also taken their toll on The University-National Oceanographic Laboratory System (UNOLS) which operates the U.S. public research fleet. Over the last decade, only 80% of available ship days were supported through funding. Over the last two years the gap has increasingly widened, and over the last ten years operations costs increased steadily at 5% annually. With an estimated shortfall of $12 million, the only solution is to reduce the U.S. research fleet size. Currently this is expected to be a total of 6 vessels that are near retirement, but there is no plan of replacing these lost ships. The situation in the U.S. contrasts greatly with other countries. The budget for the Japanese Agency for Marine-Earth Science and Technology (JAMSTEC) continues to increase, although much less so in recent years. The 2007 operating budget for the smaller JAMSTEC was $527 million, over $100 million dollars more than the 2013 proposed NOAA budget. Likewise, China is increasing funding to ocean science over the next five years and has recently succeeded in building a new deep-sea research and exploration submersible, the Jiaolong. The only deep submersible still operating in the US is the DSV Alvin, originally built in 1968. The Ghost of Ocean Science Past 85% of Americans express concerns about stagnant research funding and 77% feel we are losing our edge in science. So how did we get here? Part of the answer lies in how ocean science and exploration fit into the US federal science funding scene. Ocean science is funded by numerous agencies, with few having ocean science and exploration as a clear directive. Contrast this to how the US traditionally dealt with exploration of space. NASA was recognised early on as the vehicle by which the US would establish and maintain international space supremacy, but the oceans have always had to compete with other missions. 
We faced a weak economy, and in tough economic times we rightly looked for areas to adjust our budgets. Budget cuts lead to tough either/or situations: do we fund A or B? Pragmatically we chose what appeared to be most practical and to yield the most benefit. Often this meant we prioritized applied science because it was perceived to benefit our lives sooner and more directly and, quite frankly, was easier to justify politically the expenditures involved. In addition to historical issues of infrastructure and current economic woes, we lacked an understanding of the importance of basic research and ocean exploration to science, society, and often to applied research. As an example, NOAA shifted funding away from NURP and basic science and exploration but greatly increased funding for applied climate change research. Increased funding for climate change research is a necessity as we face this very real and immediate threat to our environment and economy. Yet, did this choice, and others like it, need to come at the reduction of our country’s capability to conduct basic ocean exploration and science, which climate change work relies upon? Just a few short decades ago, the U.S. was a pioneer of deep water exploration. We are the country that in 1960 funded and sent two men to the deepest part of the world’s ocean in the Trieste. Five years later, we developed, built, and pioneered a new class of submersible capable of reaching some of the most remote parts of the oceans to nimbly explore and conduct deep-water science. Our country’s continued commitment to the DSV Alvin is a bright spot in our history and has served as a model for other countries’ submersible programs. The Alvin allowed us to be the first to discover hydrothermal vents and methane seeps, explore the Mid-Atlantic ridge, and countless other scientific firsts. Our rich history with space exploration is dotted with firsts and it revolutionized our views of the world and universe around us; so has our rich history of ocean exploration. But where NASA produced a steady stream of occupied space research vehicles, Alvin remains the only deep-capable research submersible in service in the United States. Neglected US neglecting ocean exploration now McClain 12 (Craig, the Assistant Director of Science for the National Evolutionary Synthesis Center, who has conducted deep-sea research for 11 years and published over 40 papers in the area, October 16, 2012, “We Need an Ocean NASA Now,” http://deepseanews.com/2012/10/we-need-an-ocean-nasanow-pt-1/, LK) Whether giant fish or giant crustaceans, our opportunities to uncover the ocean’s mysteries are quickly dwindling. The Ghost of Ocean Science Present Our nation faces a pivotal moment in exploration of the oceans. The most remote regions of the deep oceans should be more accessible now than ever due to engineering and technological advances. What limits our exploration of the oceans is not imagination or technology but funding. We as a society started to make a choice: to deprioritize ocean exploration and science. In general, science in the U.S. is poorly funded; while the total number of dollars spent here is large, we only rank 6th in the world in the proportion of gross domestic product invested into research. The outlook for ocean science is even bleaker. In many cases, funding of marine science and exploration, especially for the deep sea, are at historical lows. In others, funding remains stagnant, despite rising costs of equipment and personnel. 
The Joint Ocean Commission Initiative, a committee comprised of leading ocean scientists, policy makers, and former U.S. secretaries and congressmen, gave the grade of D- to funding of ocean science in the U.S. Recently the Obama Administration proposed to cut the National Undersea Research Program (NURP) within NOAA, the National Oceanic and Atmospheric Administration, a move supported by the Senate. In NOAA’s own words, “NOAA determined that NURP was a lower-priority function within its portfolio of research activities.” Yet, NURP is one of the main suppliers of funding and equipment for ocean exploration, including both submersibles at the Hawaiian Underwater Research Laboratory and the underwater habitat Aquarius. This cut has come despite an overall request for a 3.1% increase in funding for NOAA. Cutting NURP saves a meager $4,000,000 or 1/10 of NOAA’s budget and 1,675 times less than we spend on the Afghan war in just one month. One of the main reasons NOAA argues for cutting funding of NURP is “that other avenues of Federal funding for such activities might be pursued.” However, “other avenues” are fading as well. Some funding for ocean exploration is still available through NOAA’s Ocean Exploration Program. However, the Office of Ocean Exploration, the division that contains NURP, took the second biggest cut of all programs (-16.5%) and is down 33% since 2009. Likewise, U.S. Naval funding for basic research has also diminished. The other main source of funding for deep-sea science in the U.S. is the National Science Foundation which primarily supports biological research through the Biological Oceanography Program. Funding for science within this program remains stagnant, funding larger but fewer grants. This trend most likely reflects the ever increasing costs of personnel, equipment, and consumables which only larger projects can support. Indeed, compared to rising fuel costs, a necessity for oceanographic vessels, NSF funds do not stretch as far as even a decade ago. Shrinking funds and high fuel costs have also taken their toll on The University-National Oceanographic Laboratory System (UNOLS) which operates the U.S. public research fleet. Over the last decade, only 80% of available ship days were supported through funding. Over the last two years the gap has increasingly widened, and over the last ten years operations costs increased steadily at 5% annually. With an estimated shortfall of $12 million, the only solution is to reduce the U.S. research fleet size. Currently this is expected to be a total of 6 vessels that are near retirement, but there is no plan of replacing these lost ships. The situation in the U.S. contrasts greatly with other countries. The budget for the Japanese Agency for Marine-Earth Science and Technology (JAMSTEC) continues to increase, although much less so in recent years. The 2007 operating budget for the smaller JAMSTEC was $527 million, over $100 million dollars more than the 2013 proposed NOAA budget. Likewise, China is increasing funding to ocean science over the next five years and has recently succeeded in building a new deep-sea research and exploration submersible, the Jiaolong. The only deep submersible still operating in the US is the DSV Alvin, originally built in 1968. 
Ext - Solvency US Key – Heg, Environment The US maintains the “largest most capable” fleet of ocean exploration vehicles, they rely on federal ocean pure research funding, and they’re key to naval power and the environment UNOLS, The University-National Oceanographic Laboratory System, 96 [2/96, UNOLS, “The University-National Oceanographic Laboratory System: Celebrating 25 Years as the Nation's Premier Oceanographic Research Fleet,” https://www.unols.org/info/25annpap.html, 7-12-14, FCB] The University-National Oceanographic Laboratory System (UNOLS) is a consortium of 57 academic institutions with significant marine science programs that either operate or use the U.S. academic research Fleet. It is now entering its 25th year as the world leader in oceanographic facilities. The 27 research vessels in the UNOLS Fleet stand as the largest and most capable Fleet of oceanographic research vessels in the world. UNOLS owes its success to a unique management strategy. The UNOLS Council, which consists of seagoing scientists, vessel operators and marine technicians, ensures that ship and equipment schedules are coordinated to make efficient use of finite resources. This coordination is governed by one simple reality - every dollar used to support ships is one less dollar for science. Part of the UNOLS management philosophy is to maintain an entrepreneurial spirit among the various operators of the ships. This fosters a competition among the ships for science operations that has resulted in a level of effectiveness not found in any other oceanographic fleet. The close integration between the users of the Fleet and the academic institutions that operate the research vessels also results in a substantial financial savings. The academic institutions that operate the vessels subsidize the costs through a variety of direct and indirect means. Operations of the Fleet are highly responsive to changes in the annual science needs. Each operator of a UNOLS vessel functions on a year to year grant basis. Funding is only available as required to provide the services needed by the scientific community. In the past three years, the level of Federal funding for ocean science has decreased nearly 30%. The decrease in science funding is projected by UNOLS to lead to a long term excess capacity in the Fleet. If the trends in funding that we have seen over the past three years continue, the Fleet will have to change in one of two ways. Its size can be reduced to match its capacity to the smaller amount of research that will be performed with decreased budgets. Alternatively, other Federal and State users of the Fleet must be found. The UNOLS Council is charged with planning for future facility requirements for ocean science research to ensure that the Fleet maintains its vitality. This includes planning for replacement of ships as they age (with a lifetime of about 30 years and 27 ships, that's nearly one a year). Despite the reduction in Federal support for oceanographic research, UNOLS must continue to plan for new facilities to replace our existing assets as they age, and to explore the requirements for new types of facilities as the needs of ocean science change. The University-National Oceanographic Laboratory System (UNOLS) is a consortium of 57 academic institutions (Appendix) with significant marine science programs that either operate or use the U.S. academic research fleet. 
In the early 1960's operators of oceanographic research vessels formed a Research Vessel Operators Committee (RVOC) to coordinate work on operational and regulatory issues. UNOLS was established in 1971, in recognition of the need to ensure scientific access to research vessels and to extend the work of RVOC. It is now entering its 25th year as the world leader in oceanographic facilities. The 27 research vessels in the UNOLS Fleet (Table 1) stand as the largest and most capable Fleet of oceanographic research vessels in the world. It is a substantial national asset. The UNOLS Fleet provides the platforms on which the bulk of American oceanographic research is performed. Research performed on ships of the UNOLS Fleet contributes to our understanding of interannual changes in climate that are driven by El Nino, formation of tropical storms, and fisheries management. The Fleet supports studies of global ocean circulation, fundamental studies of ocean acoustics and light scattering that are basic to the Navy's mission of national defense, and the pure research needed to manage the ocean wisely. US Key – Laundry List Only the USfg solves - military power, health, education, intellectual property Kaysen, MIT Political Economy Professor, JFK advisor, 65 [Carl, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] The foregoing classification of the kinds of benefits that basic research can be expected to provide makes clearer why this activity qualifies for support from the Government budget. Of the benefits listed above, those relating to military capability fall directly within the sphere of Federal responsibility, and only the Federal Government can and will pay for them. This applies both to military requirements for applied research and development, and to the insurance value of the scientific reserve corps. Those relating to health are increasingly an area of social concern, in which governmental responsibilities are recognized. The same can be said of those relating to higher education. It can be argued that beneficiaries of services should pay their full costs in both higher education and health. However, this is not the direction that public policy appears to be currently taking. Thus only two classes of benefits are potentially the basis for support through the market system: The value of research outputs as inputs for technical developments of direct value to business firms, and the value of basic scientists as stimuli to the better functioning of scientists and engineers working directly on applied research and development projects in the same laboratory. (So far as the latter are involved in defense and related enterprises, this too is a matter of Government finance.) On the second count, we may say that, by and large, the market system will work so as to provide for the support of a level of basic research activity appropriate to that purpose taken in isolation. On the first, as we have seen already (p. 2 above), there are good reasons for expecting that business firms, acting individually, will systematically underinvest in basic research to a substantial degree. These reasons—the difficulty of appropriating the benefits of basic research to any single firm, and the uncertainty in the character, magnitude, and timing of the payoff in new technology of the fruits of any particular piece of basic research—are not absolutes; they are rather a matter of degree. 
The longer the time horizon over which a particular business can look ahead, the broader the scientific basis of the technology underlying its processes and products, the more its activities cover the whole range of that technology, the less its position in the markets in which it operates is subject to competitive inroads, the more likely it is to invest in basic research. Thus the relatively few firms that make large investments in basic science— outside those financed through defense contracts in any event—are those like Bell Telephone, General Electric, Du Pont, Standard Oil of New Jersey, and the like. Indeed, to a significant extent, the competitive positions and prospects of these firms are such that the question of whether it pays to make these expenditures is not one which they need face too sharply. But for the generality of firms, the extent to which such expenditure appears wise is limited. US Key – Best Universities US is key, we have the best university facilities for integrating pure research, but that leadership is eroding due to lack of support Kigotho, University World News Writer, 14 (Wachira, February 28th, “China’s rapid rise in global science and engineering”, http://www.universityworldnews.com/article.php?story=20140227152409830, accessed 7/13/14, LLM) Science and Engineering Indicators 2014 identified the quality of higher education in science, technology, engineering and mathematics – STEM – as critical to providing the advanced work skills necessary to strengthen an innovation-based economic landscape. In this regard, the US awarded the largest number of science and engineering PhDs of any country followed by China, Russia, Germany and the United Kingdom. “Of 200,000 doctorates in science and engineering earned worldwide in 2010, about 33,000 were awarded by universities in the United States, China 31,000, Russia 16,000, Germany 12,000 and the United Kingdom 11,000,” says the report. But China leads the world when factoring in doctorates in the biological, physical, Earth, atmospheric, ocean and agricultural sciences and computer sciences. “The issue is that the numbers of doctoral degrees in natural sciences and engineering have risen dramatically in China, whereas the numbers awarded in the United States, South Korea, and many European countries have risen more modestly,” says the report. Also, in the United States only 57% of doctorates were earned by citizens and permanent residents, while temporary visa holders obtained the remainder. Available statistics indicated that in 2010 more than 5.5 million first degrees were awarded in science and engineering worldwide, with students in China earning about 22% against the European Union’s 17% and the United States’ 10%. “Currently, science and engineering degrees account for about one-third of all bachelor degrees awarded in the United States, 60% in Japan and about 50% in China,” said Jaquelina Falkenheim, senior analyst in the National Center for Science and Engineering Statistics at the US National Science Foundation. In her analysis of the global higher education system, Falkenheim noted that only 5% of all bachelor degrees awarded in 2010 were in engineering – compared to 31% in China. Other places with a high proportion of engineering degrees were Singapore, Iran, South Korea and Taiwan. Emerging global competition for scientific innovation leadership seems to be encouraging governments to boost university enrolments in science and engineering fields. 
The number of these degrees awarded in China, Taiwan, Turkey, Germany and Poland more than doubled between 2000 and 2012. “During this period, science and engineering first university degrees awarded in the United States, Australia, Italy, the United Kingdom, Canada and South Korea also increased between 23% and 56%,” said Falkenheim. Marginal declines were noted in France (14%), Japan (9%) and Spain (4%). US sets the bar in influence Despite intense competition in visibility, performance and investment in STEM fields, the United States continues to set the bar in terms of influential research results. For instance, from 2002-12, researchers in the US authored 48% of the world’s top 1% of cited papers. American inventors were also awarded the highest number of high value patents registered in the world’s largest markets – the US, European Union and Japan. According to the report, there were few such patents issued in China and India. One outstanding aspect that cannot be missed in the 600-page Science and Engineering Indicators 2014 report is China’s catch-up efforts. Apart from upping spending on R&D, China now has the largest contingent of doctoral students in American research universities. Between 1991 and 2011, more than 63,000 Chinese students were awarded doctorates in science and engineering from leading research universities in the United States, accounting for 27% of 235,582 such awards to foreign students. “Over the 20-year period, the number of science and engineering doctorates earned by Chinese nationals has more than doubled,” says the report. And so the battle for supremacy in the fiercely contested areas of global leadership in science and technology will likely be decided in laboratories in American universities. US Key - NSF Specifically, the NSF is key – US S&T is funded by NSF Bement, Director of NSF, member of US National Commission for UNESCO, 8 [Arden, 4/2/2008, “International Science and Technology Cooperation,” Government Printing Office, http://www.gpo.gov/fdsys/pkg/CHRG-110hhrg41470/html/CHRG-110hhrg41470.htm, FCB] The U.S. portion of international S&E research and education activities is funded by all NSF directorates and research offices. International implications are found throughout all of NSF's activities, from individual research awards and fellowships for students to study abroad, to centers, collaborations, joint projects, and shared networks that demonstrate the value of partnering with the United States. As a result of its international portfolio encompassing projects in all S&E disciplines, NSF effectively partners with almost every country in the world. The following examples illustrate the international breadth and scope of NSF's international portfolio. The Research Experiences for Undergraduates program, an NSF-wide activity, gives undergraduate students the opportunity to engage in high-quality research, often at important international sites. One of these sites is CERN, the European Laboratory for Particle Physics in Switzerland, and one of the world's premier international laboratories. Undergraduate students work with faculty mentors and research groups at CERN, where they have access to facilities unavailable anywhere else in the world. NSF also provides support for the Large Hadron Collider housed at CERN. Collaborations among individual NSF-supported investigators are also common in NSF's portfolio. Recently, scientists at the University of Chicago created a single-molecule diode, a potential building block for nanoelectronics. 
Theorists at the University of South Florida and the Russian Academy of Sciences then explained the principle of how such a device works. They jointly published their findings. There are also examples where NSF partners with the United States Agency for International Development (USAID) to support international S&T programs to facilitate capacity building. For example, the U.S.-Pakistan Science and Technology Program, led by a coordinating committee chaired by Dr. Arden Bement, NSF Director, and Dr. Atta-ur-Rahman, Pakistan Minister of Education and Science Advisor to the Prime Minister. USAID funds the U.S. contribution of the joint program and supports other programs in Pakistan involving NIH and other agencies. This U.S.-Pakistan S&T program supports a number of joint research projects peer reviewed by the National Academy of Sciences and approved by the joint S&T committee. Over the past year, the Committee has also established sixteen S&T working groups that involve interagency participation in Pakistan and in the United States to carry out joint research projects of mutual interest (with direct benefit to Pakistan). Through this collaboration, NSF just completed a network connection of Internet 2 with Pakistan to facilitate research and education collaborations and data exchanges under the program. This project embodies one of NSF's top priorities, the development of the national science and engineering cyberinfrastructure, enabling a prime role for the United States in global research networks. NSF's goals for the national cyberinfrastructure include the ability to integrate data from diverse disciplines and multiple locations, and to make them widely available to researchers, educators, and students. Already, the Grid Physics Network and the international Virtual Data Grid Laboratory are advancing IT-intensive research in physics, cosmology, and astrophysics. In today's highly sophisticated, technology-driven science, many international partnerships center around major, high-budget research facilities that are made possible only by combining the resources of more than one nation. For example, NSF's facilities budget includes construction funds for the IceCube neutrino detector, antennas for the Atacama Large Millimeter Array (ALMA), and observation technologies for the Arctic Observing Network (AON). The IceCube Neutrino Observatory-- the world's first high-energy neutrino observatory--offers a powerful example of an international, interagency research platform. Agencies in Belgium, Germany, and Sweden have joined NSF and Department of Energy (DOE) in providing support for IceCube, which will search for neutrinos from deep within the ice cap under the South Pole in Antarctica. Neutrinos are hard-to-detect astronomical messengers that carry information from cosmological events. The Atacama Large Millimeter Array, currently under construction near San Pedro de Atacama, Chile, will be the world's most sensitive, highest resolution, millimeter wavelength telescope. The array will make it possible to search for planets around hundreds of nearby stars and will provide a testing ground for theories of star birth, galaxy formation, and the evolution of the universe. ALMA has been made possible via an international partnership among North America, Europe, and East Asia, in cooperation with the Republic of Chile. NSF is the U.S. lead on this ground-breaking astronomical facility. 
As part of the aforementioned IPY activities, NSF serves as lead contributing agency for the Arctic Observing Network (AON)--an effort to significantly advance our observational capability in the Arctic. AON will help us document the state of the present climate system, and the nature and extent of climate changes occurring in the Arctic regions. The network, organized under the direction of the U.S. Interagency Arctic Research Policy Committee, involves partnerships with the National Oceanic and Atmospheric Administration, National Aeronautics and Space Administration, Department of Interior, Department of Defense, Smithsonian Institution, National Institutes of Health, DOE, and USDA. NSF coordinates AON activities across the U.S. government, as well as with international collaborators, including Canada, Norway, Sweden, Germany, and Russia. Such international infrastructure projects will continue to play a key role in advancing S&E capacity worldwide. NSF leadership and proactive involvement in large international research projects helps ensure that U.S. S&E stays at the frontier. Basic-Applied Integration Basic science is a prerequisite to applied science Douglas, Department of Philosophy at the University of Waterloo, 12 (Heather, “Pure Science and the Problem of Progress”, https://www.academia.edu/4547054/Pure_Science_and_the_Problem_of_Progress, accessed 7/9/14, LLM) First efforts were made by chemist Alexander Williamson in Britain, whose 1870 “Plea for Pure Science” argued that “pure science [is] an essential element of national greatness and progress”, and thus that State support of pure science was also essential. (quoted in Gooday 2012, p. 548) The charge was taken up by both T. H. Huxley (in the UK) and Henry Rowland (in the US) in the 1880s. Huxley’s 1880 essay, “Science and Culture,” argued for a new college curriculum, with science taught “as a coherent institutionalized body of knowledge, uncompromised by a concern with utility.” (ibid., p. 550) Pure science was unconcerned with practical applications, but applied science, Huxley argued, could not exist without pure science. As Huxley famously wrote: “I often wish that this phrase, ‘applied science,’ had never been invented. For it suggests that there is a sort of scientific knowledge of direct practical use, which can be studied apart from another sort of scientific knowledge, which is of no practical utility, and which is termed “pure science.” But there is no more complete fallacy than this. What people call applied science is nothing but the application of pure science to particular classes of problems.” (Huxley 1880, as quoted in Kline 1995, p. 194). Here we have a clear articulation, indeed perhaps the invention, of the so-called linear model. Pure science comes first, then the application of that knowledge is applied science, and it is that which produces utility. There is no applied science without pure science prior. But, at the same time, one cannot expect immediate utility from pure science. It is not pure science’s job (nor pure scientists’ job) to produce things of utility. That comes later, through application, which is often done by someone else, usually someone of lesser talent. Basic science key to applied science International Council for Science, a non-governmental organization with a global membership of national scientific bodies and International Scientific Unions, 4 (J. A. 
De La, December 2004, “The Value of Basic Scientific Research,” http://www.icsu.org/publications/icsu-position-statements/value-scientific-research, accessed 7-9-2014, LK) Knowledge is more than the information and data that might be provided via the internet; it is fundamentally a matter of cognitive capability, skills, training and learning. The exploitation and application of scientific information requires skilled scientists with a good understanding of the basic theories and practice of science. Successful transfer of scientific knowledge requires well-trained scientists at both ends of the exchange. Excessive dependency on scientific progress in other countries is rarely likely to lead to the resolution of local problems. Countries need to be able to generate their own scientific knowledge and adapt this to their own local context and needs. The practice of science is increasingly international and the research agenda is set by those who participate. A country with no basic scientific research capacity effectively excludes itself from having any real influence on the future directions of science. As the move towards a global knowledge economy accelerates, the necessity of having a thriving scientific community to generate new knowledge and to exploit it, both in the academic world and industry, becomes irrefutable. Adequate public investment in basic science education and research is a critical factor underpinning socio-economic development. All countries need to develop long-term sustainable strategies for investment in science. Support for basic science is not something that can be postponed or diminished when times are hard in the misplaced hope that applied research alone will provide a better return. Integration of both basic and applied science is key to problem solving – a focus on only applied science is bad Pena, Chair of the International Council for Science, 4 (J. A. De La, December 2004, “The Value of Basic Scientific Research,” http://www.icsu.org/publications/icsu-position-statements/value-scientific-research, accessed 7-9-2014, LK) Major innovation is rarely possible without prior generation of new knowledge founded on basic research. Strong scientific disciplines and strong collaboration between them are necessary both for the generation of new knowledge and its application. Retard basic research and inevitably innovation and application will be stifled. New scientific knowledge is essential not only for fostering innovation and promoting economic development, but also for informing good policy development, and as a sound foundation for education and training. Notwithstanding, it is sometimes argued at a national level that investment in research should focus primarily, or even exclusively, on the use of existing information to develop applied solutions. Superficially at least, such an approach appears to be facilitated by the emergence of a global society, linked by internet and a continuous flow of information that anyone is able to access and use. Whilst an exclusive focus on application may have some merit in the short-term, there are several reasons why neglecting basic research is seriously flawed in the longer-term: Basic and applied science are a continuum. They are inter-dependent. The integration of basic and applied research is crucial to problem-solving, innovation and product development. 
Applied Bad Applied science is bad – asymmetric focus on profit and unfounded assumptions Pfaffman, Professor at Brown University and Committee Member of Science and Public Policy 65 (Carl, June 1965, http://books.google.com/books?id=q4wrAAAAYAAJ&pg=PA10&lpg=PA10&dq=%22Basic+research+and+ national+goals%22&source=bl&ots=VDP8907W8k&sig=DLPUL_EeLAIMAUxcTqd13hRI9CE&hl=en&sa=X& ei=i_q9U63_K8PtoASS54GQDw&ved=0CEIQ6AEwBA#v=twopage&q&f=false, “Basic Research and National Goals,” accessed 7-8-14, LK) The distinction between basic and applied research, difficult enough to make in the natural sciences, is even harder to make in the behavioral sciences. Yet, for two reasons, the distinction is probably more important in the latter disciplines. First, there is the danger that purely applied social research to support some action program will be so hedged in by popular prejudices and assumptions that it fails to get to the root of the problem and, hence, becomes trivial. For instance, there is considerable research at present in underdeveloped countries designed to get villagers to accept innovations in agricultural practices. A tacit assumption behind much of this research is that the obstacle to acceptance of innovations is simply the wrong attitude, and that the problem is to find the proper educational and propaganda techniques to alter the traditional way of looking at agriculture. The question of whether the innovation is economically profitable and socially rewarding to the villager in economic and social terms is assumed to be answered affirmatively, but that is precisely the question that takes a great deal of systematic research to answer. If the answer is affirmative, very little propaganda, if any, may be required to gain acceptance of the innovation. To assume that the problem is solely a matter of the wrong attitude is an easy way out, because then the knotty problems of the socioeconomic system, with its rewards and costs for the villager, can be ignored. Pure science is a prerequisite to applied sciences Pfaffman, Professor at Brown University and Committee Member of Science and Public Policy 65 (Carl, June 1965, http://books.google.com/books?id=q4wrAAAAYAAJ&pg=PA10&lpg=PA10&dq=%22Basic+research+and+ national+goals%22&source=bl&ots=VDP8907W8k&sig=DLPUL_EeLAIMAUxcTqd13hRI9CE&hl=en&sa=X& ei=i_q9U63_K8PtoASS54GQDw&ved=0CEIQ6AEwBA#v=twopage&q&f=false, “Basic Research and National Goals,” accessed 7-8-14, LK) The second reason for distinguishing between basic and applied work is that the normal aversion to basic research is greater in regard to the social sciences than it is in regard to natural science. One can see the relevance of basic principles in physics and chemistry to achievements in making weapons, television sets, and medicines; but one cannot see so clearly the relevance of special "abstractions" and "jargon" concerning things we know about already, such as taxes, schools, race relations, and the family. The skepticism is increased by the fact that the layman has his own common-sense views about social matters. He objects when these are placed in question by empirical evidence supporting contrary and usually less sweeping generalizations. This is particularly true if the matter is one to which people attach strong positive or negative values. 
Okeanos Solves Only NOAA possesses the technology for pure ocean exploration – water column mapping, multi-beam sonar allow for the detection of previously unknown features including gas hydrates Lobecker et al, Physical Scientist with the NOAA, 12 [Elizabeth, 3-12, Oceanography VOL. 25 NO. 1, “Always Exploring,” 7-5-14, FCB] An integral element of Okeanos Explorer’s “Always Exploring” model is the ship’s seafloor and water column mapping capability. The principal mapping sensor, the EM 302 multibeam sonar, is staffed on all transit cruises for 24-hour seabed and water column data collection and processing. As appropriate on a cruise-by-cruise basis, the ship’s Kongsberg EK 60 fisheries sonar and Knudsen 3260 subbottom profiler provide additional data sets. The low resolution of bathymetric data derived from satellite altimetry allows recognition of very large features and the general character of the seafloor. At full ocean depths, the ship’s multibeam bathymetric data are at least 40 times finer resolution than satellite data. This capability allows imaging of previously unknown features and visualizing a truer picture of the seafloor and water column. Since commissioning, the Okeanos Explorer team has collected more than 88,000 linear kilometers of bathymetry stretching from Indonesia’s Sulawesi Sea to the North Atlantic, mapped a number of seamounts not found in existing bathymetry or charts, successfully tested its mapping system to 7,954 m depth over the Mariana Trench, and demonstrated the multibeam sonar’s ability to detect gaseous and physical features in wide areas of the water column. Notably, this ability resulted in the discovery of 1,400 m high plumes, confirmed to be methane gas, off the coast of northern California. Gas hydrate scientists at Monterey Bay Aquarium Research Institute conducted discovery follow-up work in the summer of 2011, and the initial results analyzing the vent source geomorphology were presented at the 2011 fall meeting of the American Geophysical Union (Gwiazda et al., 2011). Ext - Science Pure/Basic Science Key There’s no inherent technological aspect to pure science - that criticism is one that is applicable to applied sciences - the only end goal should be knowledge for knowledge’s sake Carrier, Ph.D. in philosophy at the University of Münster, 2001 (Martin, “Knowledge and Control: On the Bearing of Epistemic Values in Applied Science http://www.uni-bielefeld.de/philosophie/personen/carrier/Knowledge%20and%20ControlPU.pdf, accessed 7/3/14, LLM) On the Relation between Knowledge and Power Underlying these considerations is the notion that pure and applied science differ in nature. Otherwise, the endeavor to clarify the relationship between the two would not make sense. In contrast to this presupposition, it is argued in some quarters that science is intrinsically practical. The only appropriate yardstick of scientific achievement is usefulness or public benefit. In this vein, Philip Kitcher denounces the view that the chief aim of science is to seek the truth as the “myth of purity” and advances the contrasting idea of a “well-ordered science” whose sole commitment is satisfaction of the preferences of the citizens in a society (Kitcher 2001, 85-86, 117-118). “Well ordered science” is an ideal Kitcher wants scientists to pursue, it is not intended as description of reality. Still, his approach squares well with a widely shared feeling that practical use or technology is what science is essentially all about. 
Given a commitment of this sort, no significant distinction between theory and practice or between knowledge and power can be drawn. It is true, indeed, that claims to the effect that the touchstone of epistemic significance is practical success originate with the Scientific Revolution. However, it is also true that these commitments largely remained mere declarations. Take Christopher Wren who was familiar with the newly discovered Newtonian mechanics when he constructed St. Paul’s Cathedral. The Newtonian laws were deemed to disclose the blueprint of the universe, but they were unsuitable for solving practically important problems of mechanics. Wren had to resort to medieval craft rules instead. Likewise, the steam engine was developed in an endless series of trial and error without assistance from scientific theory (Hacking 1983, 162-163). Thermodynamics was only brought to bear on the machine decades after its invention was completed (see sec. 8). This gap between science and technology is not completely filled today. Theoretical work on cosmic inflation will hardly ever bear technological fruit. Such work is exclusively curiosity-driven; pure knowledge gain is the focus. Conversely, screening procedures in the development of medical drugs possess neither theoretical basis nor theoretical import. In such procedures, cellular or physiological effects of substances are detected and identified by using routine methods. They involve a more sophisticated form of trial and error. I conclude that there have been and still are purely epistemic and purely practical research projects. Neither is science inherently practical, nor is technology inherently scientific. This means that the distinction between basic research and technology development needs to be upheld. And this, in turn, suggests that the relationship between seeking the truth and developing some useful device merits a more thorough consideration. The connection between science and technology becomes manifest only in the 19th century. The now familiar pattern that a technological innovation emerges from the application of scientific theory is an achievement that succeeds the Scientific Revolution by roughly two centuries. The cascade model is intended to capture this more recent relationship between scientific knowledge and practical use. The idea is that technological progress grows out of scientific theorizing. Technology really is applied science. This model can be taken to involve the twofold claim of substantive and causal dependence of technology on science. That is, the operation of some technical device can be accounted for within a relevant theory, and the device was developed by applying the theory. According to the cascade model, the logical and the temporal relations run parallel: theoretical principles are formulated first, technical devices are constructed afterward by spelling out consequences of these principles. Pure science is necessarily objective – focus on repeatable controlled experiments and value given objectively based on the quality of research Shepard, 56 [Herbert A., Jan. 1956, Philosophy of Science, Vol. 23, No. 
1, The University of Chicago Press, “Basic Research and the Social System of Pure Science,” http://www.jstor.org/stable/184997, 7-5-14, FCB] The core of the value system of pure science consists of two related beliefs: first, that new knowledge should be evaluated according to its significance for existing theory, and second, that scientists should be evaluated according to their contributions of new knowledge. Highest honors go to those whose work involves radical reformulations or extensions of theory or conceptualization. Next come those who do the pioneer experimental work required by a theoretical reformulation. Next come those who carry out the work logically required to round out the conceptual structure. Next come those who carry out redundant experimental work of a confirmatory nature, or concern themselves largely with relevant data accumulation. Last are the doers of sloppy or dull work. This central evaluative system provides a basis for the derivation of personal and social norms. Around it is organized a system of social control. Metaphorically, to be a scientist is to be a worker engaged in the construction of a great cathedral of knowledge, eternally incomplete, but slowly taking form over the centuries. In Conant's terms: "science [is] a series of interconnected concepts and conceptual schemes arising from experiment and observation and fruitful of further experiments and observations."6 It should be added that the "interconnected concepts and conceptual schemes" are shared by the scientific community, and not the private property of particular scientists. For the conduct of this task of building the cathedral of scientific knowledge, standards of method and ethics are prescribed, criteria of beauty, workmanship, morality and social worth are elaborated. The scientist learns to esteem himself for honesty, humility, objectivity, self-discipline, curiosity, creativity, skepticism, rigor and industriousness. Living up to these virtues is not entirely a matter of the scientist's conscience: external control is exercised through "the weight of scientific opinion." The justice of the decisions of scientific opinion and hence the integrity of scientists and the social system is guaranteed through a special procedure on whose validity as a test all scientists are agreed – the repeatable controlled experiment. Uniqueness - Pure Research Good Status quo scientific focus is fundamentally flawed – applied science for monetary gain has become the most popular trend, leaving basic research behind – pure research without an end goal is key to innovation and serves as the underlying fabric of scientific advancement Oates, PhD in biology and biotechnology; currently deputy director of undergraduate education at the National Science Foundation, 13 [Karen, 3-7-13, Huffington Post, “The Importance of Basic Research”, http://www.huffingtonpost.com/karen-kashmanian-oates-phd/science-role-models_b_2821942.html, accessed 7-5-14, TYBG] This is science's newest Golden Age. Young people today are inspired by generational heroes like Steve Jobs and Mark Zuckerberg, roles that were filled in the relatively recent past by the likes of Michael Jordan and Mick Jagger. The fact that today's students can dream of emulating role models who achieved their status using their minds and curiosity is a good thing. However, there is one significant drawback. The rock star status of today's scientific celebrities encourages aspiring scientists to focus on the retail possibilities that can result in fast fame and wealth. 
While understandable, this unwittingly neglects a crucial part of the scientific equation -- basic research. For example, let's look at the way the music industry has changed over the last decade or so. Instead of going to a record store, most people now get their music electronically via MP3 files through an online store like iTunes, and download it to portable MP3 players like iPods. Each of these products -- MP3s, iTunes and iPods -- was created to fill a specific commercial void. Scientists identified a need and developed a product. That is applied research. But these would not exist if not for the anonymous scientists at the Swiss laboratory CERN whose research led to the development of the internet, or the no-name physicists in the 1920s whose abstract discoveries in electronics and sub-particles paved the way for today's computers. These unheralded breakthroughs are products of basic research. Basic research is the foundation on which applied research is built, and feeds the pipeline for the products and services we consume. But too few of today's and tomorrow's scientists are showing interest in laboring unknown in the back labs of basic research. The money and the notoriety, it seems, comes from advancements championed through applied research. Compounding the problem are the funders. America's top companies used to provide significant dollars to basic research, recognizing it is a prerequisite for innovation that led to viable commercial products, among them the transistor, nylon and Teflon. But basic research is expensive, time consuming and there are no guarantees of a billion-dollar breakthrough. Without the robust support of private companies like Bell Labs and DuPont, the home grown pipeline begins to run dry. The financial pressure then falls squarely on government funding and university research. When public dollars are being used, there is frequent pressure to focus on applied research, rather than appropriate revenues for experimentation with no known conclusion. Earlier this week, an advisory panel recommended to federal agencies shutting down the Brookhaven National Laboratory in New York, home of the last remaining particle collider in the U.S., because of tight budgets. The collider smashes gold ions and protons together, which enables scientists to study the formation of the universe. Research like this is too important to be penny foolish. On a recent trip to Israel, I met with the head of the Weizmann Institute of Science, the country's leading research institution. Their students and fellows focus almost exclusively on basic research. Weizmann is Israel's smallest university, yet it is one of the top five highest earning institutions in the world because of its patents and their subsequent commercialization. The United States, and its stable of excellent colleges and universities, needs to learn from the Weizmann model. We know basic research is valuable. Weizmann shows us it can be profitable, too. One of my role models is Mary-Claire King. A researcher who spent nearly 20 years studying breast cancer, she faced a barrage of criticism for wasting time and money. Eventually she discovered the breast cancer gene, which has helped tens of millions of people survive breast cancer. Her stubbornness and perseverance in basic research saved lives and resulted in billions of dollars in direct and indirect economic impact. We need more scientists like Mary-Claire King. 
Yet it is doubtful many students who are planning on careers in science have heard of her or are planning to emulate her. But she, and countless anonymous basic researchers, unquestionably had as great an impact on their future careers as Jobs and Zuckerberg and the other rock stars they one day hope to follow. Curiosity-driven pure science is key to societal advancement, but support for pure science is collapsing in the status quo Padma, Doctorate in oceanography from The College of William and Mary and currently Director of Graduate Diversity Affairs at the University of Rhode Island; recipient of the 2008 SCBWI Magazine Merit Award for Nonfiction, 12 [T.V., 7-11-12, SciDevNet, “Nobel laureate says curiosity-driven science must not be sidelined,” http://scidevnet.wordpress.com/2012/07/12/nobel-laureate-says-curiosity-driven-science-must-not-besidelined/, accessed 7-4-14, TYBG] For developing countries with limited funds for science, there is a perennial debate about whether to support basic science research — which lacks easily discernible social benefits — or applied science. They could pay heed to Jules A. Hoffmann, who won the 2011 Nobel Prize in Physiology or Medicine. Hoffmann began his science career driven by a curiosity to understand how the humble fruit fly avoided contracting fungal infections — that is, pure basic science. This led to the discovery of a group of cells that are key to ‘innate’ immune responses in humans, with implications for vaccines, infectious diseases and allergies. “I would like to argue that our society should continue to support, to a significant extent, research which is purely based on curiosity, even in the absence of perspectives of applications at the time when the work is started,” Hoffmann told Euroscience Open Forum (ESOF) delegates at the opening ceremony on Wednesday (11 July). Hoffmann said that as he began his research career in the 1960s, he was fortunate to work during “blissful times”. “We were not asked to indicate which milestones we wanted to provide within which timeframe, what applications we were hoping to generate, what networking we were planning to develop, which industrial partners we had contacted,” he added. “There was a great confidence in science, and a global belief that, whatever the field and the questions, any new scientific knowledge would eventually have positive outcomes for society.” Not so, anymore. As immense amounts of scientific knowledge have accumulated, science has become so complex “that most of our fellow citizens feel overwhelmed or lost,” Hoffmann said. He went on to observe that although science still enjoys a relatively positive image with the general public, a significantly large and vocal group of citizens have developed a marked level of distrust towards scientific research — particularly in Europe — in areas such as genetically modified crops, vaccinations, stem cell research and electromagnetic waves. Regaining the trust of these opponents will not be easy, Hoffmann says.
He sees a role for the media to help garner public interest in science, while at the same time not overselling research results, “which would only feed the distrust.” Basic research is a pre-requisite to the applied sciences and is a necessary component of technological progress Bush, Director of the Office of Scientific Research and Development, 1945 [Vannevar, 7/45, NSF.Gov, “Science: The Endless Frontier,” https://www.nsf.gov/od/lpa/nsf50/vbush1945.htm, accessed 07-05-14, TYBG] Basic research is performed without thought of practical ends. It results in general knowledge and an understanding of nature and its laws. This general knowledge provides the means of answering a large number of important practical problems, though it may not give a complete specific answer to any one of them. The function of applied research is to provide such complete answers. The scientist doing basic research may not be at all interested in the practical applications of his work, yet the further progress of industrial development would eventually stagnate if basic scientific research were long neglected. One of the peculiarities of basic science is the variety of paths which lead to productive advance. Many of the most important discoveries have come as a result of experiments undertaken with very different purposes in mind. Statistically it is certain that important and highly useful discoveries will result from some fraction of the undertakings in basic science; but the results of any one particular investigation cannot be predicted with accuracy. Basic research leads to new knowledge. It provides scientific capital. It creates the fund from which the practical applications of knowledge must be drawn. New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science. Today, it is truer than ever that basic research is the pacemaker of technological progress. In the nineteenth century, Yankee mechanical ingenuity, building largely upon the basic discoveries of European scientists, could greatly advance the technical arts. Now the situation is different. A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade, regardless of its mechanical skill. Pure science has been marginalized by the commercial interests of applied science - this has affected what research is done, for whom, and why it’s done Langley, PhD in Neurobiology, and Parkinson, bachelor’s degree in physics and electronic engineering, and a doctorate in climate science, 9 (Chris and Stuart, October, “Science and the corporate agenda: The detrimental effects of commercial influence on science and technology,” SGR (Scientists for Global Responsibility – Promoting ethical science, design and technology), http://www.statewatch.org/news/2009/oct/scientists-for-global-responsibillty-report.pdf, accessed 7/3/14, LLM) ‘Pure’ science (there is not strictly speaking ‘pure’ technology or engineering) usually appears in the R&D statistics of government (or other funders of research) as a category which reflects the open-ended pursuit of knowledge. Pure research tends to be considered as part of curiosity-driven work which is undertaken by scientists in both public and private laboratories – its aim being to provide an ‘understanding’ of a phenomenon.
In contrast, ‘applied’ research aims at producing an intervention – such as a drug or new material – to address problems or develop a new approach. ‘Pure’, ‘fundamental’ or ‘basic’ research is defined officially as: “…experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view” (OECD 2002). Universities have been seen historically as institutions in which such predominantly ‘pure’ research was undertaken to discover knowledge for a broadly defined ‘public good’. Such knowledge would be a source of objective information for the public, and could inform policy-makers in areas such as public health or environmental protection. However these goals can be marginalised by the involvement of commercial interests wedded to short-term economic return (Ravetz 1996; Washburn 2005). A series of profound changes in the UK have altered how people perceive the role and activities of universities in society. These changes have affected what research is undertaken; for whom and why; and the proportion of research that can be described as ‘pure’. In this climate many, especially in government, have begun to regard ‘pure’ research as a luxury. ‘Applied’ research is usually defined as research that has a clear set of narrowly-defined objectives, which guide its programme of activities. There is generally little opportunity to seek data outside this defined set of end-points. ‘Applied’ research frequently has economic gain and profit as its predominant focus – but can also be related to a specific social or environmental goal such as curing a disease, reducing greenhouse gas emissions or increasing crop yields. Superficially then one of the key differences between ‘pure’ and ‘applied’ research is how the goals of the research are defined and who is likely to benefit from the products of that research. The methods and scientific activities in ‘pure’ and ‘applied’ research are essentially the same. The research activity tabled below comprises both ‘applied’ and ‘basic’ SET activities undertaken by the main sectors in the UK. Traditionally the Research Councils predominantly supported the more ‘pure’ form of research – much of which had a broadly defined set of end-points. In addition the Research Councils were expected to provide funding not coloured by the political perspectives of the government of the day – the Haldane principle1. While in the early days of the Research Councils some of the funding they distributed was for technological innovation and hence definable as ‘applied’, the proportion of their funding activities that is directed at economically defined objectives has increased in the last 20 years (see Moriarty 2008). SET has significant potential to provide tools that can be used, through technological development for instance, to contribute to social justice or to help to address issues such as resource depletion, cleaner energy, pollution and environmental degradation (Ravetz 1996). However, there is a large body of research literature which shows that the ability of SET to fulfil that potential – its ultimate role in society – depends upon the social structure and power relationships existing within that society. Profit-driven activities and mechanisms such as intellectual property rights2, patents and funding can often act against the public interest and bring benefit to a very few without increasing the public benefit.
SET has a number of mechanisms in place – with associated reliable methods and data – designed to help reduce the influence of special interests with the potential to introduce bias, for example those of the funder. Strict adherence to these mechanisms – which include peer review, free exchange of data and transparency – has traditionally been a prerequisite for practising SET. However, such processes must be observed by all involved in publishing and experimental protocols, for example, so as to permit data to be assessed for its reliability. Status quo science is based on applied scientific methods and technological interests which have dominated and shaped the process of knowledge production within current academia Carrier, Ph.D. in philosophy at the University of Münster, 1 (Martin, “Knowledge and Control: On the Bearing of Epistemic Values in Applied Science http://www.uni-bielefeld.de/philosophie/personen/carrier/Knowledge%20and%20ControlPU.pdf, accessed 7/3/14, LLM) The Primacy of Applied Science Among the general public, the esteem for science does not primarily arise from the fact that science endeavors to capture the structure of the universe or the principles that govern the tiniest parts of matter. Rather, public esteem—and public funding—is for the greater part based on the assumption that science has a positive impact on the economy and contributes to securing or creating jobs. Consequently, applied science, not pure research, receives the lion’s share of attention and support. It is not knowledge that is highly evaluated in the first place but control of natural phenomena. The relationship between science and technology is widely represented by the so called cascade model. This model conceives of technological progress as growing out of knowledge gained in basic research. Technology arises from the application of the outcome of epistemically driven research to practical problems. The applied scientist proceeds like an engineer. He employs the toolkit of established principles and brings general theories to bear on technological challenges. The cascade model entails that promoting epistemic science is the best way to stimulating technological advancement. The preference granted to applied science increasingly directs university research at practical goals; not infrequently, it is sponsored by industry. Public and private institutions increasingly pursue applied projects; the scientific work done at a university institute and a company laboratory tend to become indistinguishable. This convergence is emphasized by strong institutional links. Universities found companies in order to market products based on their research. Companies buy themselves into universities or conclude large-scale contracts concerning joint projects. The interest in application shapes large areas of present-day science. This primacy of application puts science under pressure to quickly supply solutions to practical problems. Science is the first institution called upon if advice in practical matters is needed. This applies across the board to economic challenges (such as measures apt to stimulate the economy), environmental problems (such as global climate change or ozone layer depletion), or biological risks (such as AIDS or BSE). The reputation of science depends on whether it reliably delivers on such issues.
The question naturally arises, then, whether this pressure toward quick, tangible and useful results is likely to alter the shape of scientific research and to compromise the epistemic values that used to characterize it. There are reasons for concern. Given the intertwining of science and technology, it is plausible to assume that the dominance of technological interests affects science as a whole. The high esteem for marketable goods could shape pure research in that only certain problem areas are addressed and that proposed solutions are judged exclusively by their technological suitability. That is, the dominant technological interests might narrow the agenda of research and encourage sloppy quality judgments. The question is what the search for control of natural phenomena does to science and whether it interferes with the search for knowledge. Pure Science K2 S&T A surge in basic science funding is key to maintain science and technology leadership Hummel, Ph. D in Political Research, 12 [Robert, 2012, Synesis: A Journal of Science, Technology, Ethics, and Policy 2012; 3(1):G14-39, “US Science and Technology Leadership, and Technology Grand Challenges,” http://www.synesisjournal.com/vol3_g/2012_Hummel_G14-39_abstract.html, 7-12-14, FCB] In 1945, Vannevar Bush—then the President’s Director of Scientific Research and Development— outlined a vision for US scientific research activities in the post-war period. In his report, entitled “Science: The Endless Frontier” (1), Bush laid out the importance of basic research to the Nation’s science research enterprise. Basic research—though “performed without thought of practical ends”— was the “pacemaker of technological progress,” and “created the fund from which the practical applications of knowledge must be drawn.” Bush further argued that the “simplest and most effective way” that Government resources could be brought to the service of the nation’s industrial research endeavors would be to “to support basic research and to develop scientific talent.” With this vision, Bush’s “Endless Frontier” resulted in the establishment of the Office of Naval Research, the National Science Foundation, and, later, the National Institutes of Health, the Defense Advanced Research Projects Agency, and NASA—as well as a robust national program of basic research at universities, research centers, laboratories, and institutes and a quadrupling of the number of research scientists dedicated to fundamental science in just a few decades (95). Semiconductors, microelectronics, medical diagnostic technologies CT and MRI, and key developments in computer science all emerged from basic science developments in the post-war period. In short order, American science and engineering advances became the envy of the world and gave rise to technical resources and capabilities that fueled unparalleled economic success. Other nations around the globe aspire to similar economic advances, and are investing heavily in science and the application of science to new technologies and capabilities. China, for example, has launched an effort to become an “innovative nation” by 2020 and a global scientific power by 2050 (96), and has reserved 15% of its science and technology investment for the 973 program that funds basic research (97). Extending the American S&T-driven economic boom will require continued and enhanced American leadership in basic and applied science. 
For American technological progress to remain at the forefront, we will need to foster more effective and integrative relationships between the basic research community and applied researchers, to decrease the time in which fundamental science discoveries are translated into practical technologies. We need to re-infuse our research communities with the characteristically American spirit of competitiveness to drive our success in a more competitive age. American leadership in the 21st Century requires that American scientists strongly participate in basic research, and stay current with a body of basic science in a globalized research environment. Leadership also requires that we facilitate and expedite the creation of practical applications and knowledge from the fund of basic science. Being first to codify and utilize basic science is more important than being alone in possession of the fund. Accordingly, we need to challenge (and incentivize) our basic science researchers to translate basic science results to application developers with greater speed and intensity. We should increase the availability of “incubators,” where scientists can interact with system developers, to expedite the use of new technology and new concepts in designs and new products. Certain federally-funded research and development centers are particularly effective at supporting research while finding applications and transition potential. Ultimately, Vannevar Bush’s thesis that there is a major government role in the support of basic research remains valid. There is little viable substitute for engaging good people with good technical oversight, which requires a strong and vibrant science and technology enterprise both within government and outside, interoperating for the benefit of both finding solutions to existing problems, and to explore knowledge for applications yet to be discovered. S&T Good – Laundry List S&T cooperation is key to sound policy-making, reliable international benchmarks, good will, strong relations, democracy, civil society, innovation, and solutions to disease and climate change Miotke, subcommittee on research and science education, committee on science and technology, 8 [Jeff, “International Science and Technology Cooperation,” Government Printing Office, 4/2/2008, http://www.gpo.gov/fdsys/pkg/CHRG-110hhrg41470/html/CHRG-110hhrg41470.htm, FCB] Science and science-based approaches make tangible improvements in people's lives. Strategically applied, S&T outreach serves as a powerful tool to reach important segments of civil society. Sound science is a critical foundation for sound policy-making and ensures that the international community develops reliable international benchmarks. Science is global in nature--international cooperation is essential if we are to find solutions to global issues like climate change and combating emerging infectious diseases. International scientific cooperation promotes good will, strengthens political relationships, helps foster democracy and civil society, and advances the frontiers of knowledge for the benefit of all. S&T Good – Leadership More S&T needed to maintain leadership –report shows US will be overcome by Asian S&T NSF, US government agency that supports research and education in science and engineering, 12 [1/17/12, “New Report Outlines Trends in U.S. 
Global Competitiveness in Science and Technology,” National Science Board, http://www.nsf.gov/nsb/news/news_summ.jsp?cntn_id=122859&, FCB] The United States remains the global leader in supporting science and technology (S&T) research and development, but only by a slim margin that could soon be overtaken by rapidly increasing Asian investments in knowledge-intensive economies. So suggest trends released in a new report by the National Science Board (NSB), the policymaking body for the National Science Foundation (NSF), on the overall status of the science, engineering and technology workforce, education efforts and economic activity in the United States and abroad. "This information clearly shows we must re-examine long-held assumptions about the global dominance of the American science and technology enterprise," said NSF Director Subra Suresh of the findings in the Science and Engineering Indicators 2012 released today. "And we must take seriously new strategies for education, workforce development and innovation in order for the United States to retain its international leadership position," he said. Oceans K2 Science Literacy Ocean research and education is uniquely key to scientific literacy Strang, Associate Director, Lawrence Hall of Science, University of California, Berkeley, 7 [Craig, Annette deCharon, Senior Marine Education Scientist, University of Maine School of Marine Sciences, Sarah Schoedinger, Program Officer, National Oceanic & Atmospheric Administration’s Office of Education, “CAN YOU BE SCIENCE LITERATE WITHOUT BEING OCEAN LITERATE?,” The Journal of Marine Education, Vol. 23, No. 1, TYBG] While marine educators have always known that many important science concepts can be taught through ocean examples, and that the ocean provides an engaging context for teaching general science, a more compelling credo now guides that work: “Teach for Ocean Literacy.” Many ocean sciences concepts are more than engaging examples of general science; they have intrinsic, essential importance. Therefore, one cannot be considered “science literate” without being “ocean literate.” Two of the earliest and most influential documents in the science reform movement, Science for All Americans and Benchmarks for Science Literacy [2,3], state "the science-literate person is familiar with the natural world and recognizes both its diversity and unity." Research consistently affirms the ocean's vital role in maintaining the unity of our world. Without its vast ocean, Earth could be inhospitably cold like Mars or a stifling greenhouse like Venus. On the other hand, the interconnectedness of the ocean and the atmosphere has had negative impacts. Ocean waters absorb airborne industrial chemicals which are carried thousands of miles from their source to the Arctic region. These pollutants are found in the bodies of top predators such as polar bears, which absorb the chemicals through their diet of fish and seals. Whether we live on the coast or inland, eat seafood or not, humans are inextricably tied to the ocean. Thus the scientifically literate citizens we grow in our schools must become familiar with ocean issues that may or may not be happening "in their own backyards." 
Ocean Exploration Good Ocean exploration is beneficial in both contributing to the base of human knowledge and preventing premature decisions Pedersen, 2k [K., 2000, “Exploration of deep intraterrestrial microbial life: current perspectives,” http://onlinelibrary.wiley.com/doi/10.1111/j.1574-6968.2000.tb09033.x/pdf, 7-7-14, FCB] Exploration in whose interests? The interests in investigations of intraterrestrial life are represented by a very diverse array of general social, professional and industrial motives. Some of the current reasons for such investigations are listed below. 1. Microbial activity in oil wells may have both negative (e.g. through corrosion and well souring) and positive (e.g. through surfactant production) effects on oil extraction. The oil industry, therefore, shows an interest in deep oil reservoir microbiology [14–16]. 2. The contamination of groundwater from surface and underground disposal sites, accidental spills, leakage and other human activities has triggered a widespread interest in the possibilities of restoring contaminated underground sites with the help of autochthonous and/or allochthonous microorganisms [17]. 3. Disposal of radioactive wastes and heavy metals in deep geological formations requires in-depth knowledge about the host rock environment, including possible effects of microbes on future repositories [18]. 4. There are enormous reservoirs of energy in the methane gas hydrates that are found globally in sub-ocean floor sediments, possibly twice the amount of energy contained in known oil and gas reservoirs [19]. Most of this methane is believed to have been produced by methanogens living deep below the deposits [20]. 5. An increasing number of scientists argue for an underground origin of life, possibly in the vicinity of hydrothermal systems [21]. If life did originate subterraneously, then it must have been present underground for as long as there has been life on our planet. A diverse and extended underground life on, or within, our planet suggests that life on other planets should be searched for underground rather than on the surface. 6. Last but not least, the increasing knowledge about intraterrestrial life may significantly expand our knowledge of microbial diversity and, especially, of the metabolic capabilities of living organisms (see, for example, [22]). Public appreciation of ocean exploration is necessary both for the advancement of scientific achievement and for understanding why we discover Deacon et al, historian specializing in oceanography and fellow @ School of Ocean and Earth Southampton U, 1 [Margaret, Understanding the Oceans: A Century of Ocean Exploration, pg. 1, FCB] This book seeks to assist that process, but also to take it somewhat further. To understand the oceans, we need not only good science, but a good appreciation of science, and we can achieve a better understanding of what current scientific knowledge can tell us about the sea if we also have some information about how it was obtained. In this book recent oceanographic discoveries are presented through accounts of how, as well as why, scientists study the sea, and some of the changes that are taking place in the way they go about it. Two aspects of this process receive special attention. Many chapters deal with how technological improvements during the last fifty years have transformed possibilities in the principal fields of ocean research, leading to important advances in scientists' understanding of what goes on in the sea.
Other chapters highlight a different but equally important requirement for scientific progress by showing how, over the last hundred years or so, infrastructures have been developed that are capable of sustaining the degree of scientific activity necessary to observe natural phenomena on an oceanic or even a global scale. Looking at the ways in which the organization of scientific research, as well as research methods, have changed over the years helps us Good Science O/W Bad Science solves the problems it creates – more science key Time Magazine 1971 (Time Magazine, cites Lawrence Lessing, formerly on the editorial board of fortune magazine and an editor and contributor to Scientific American, March 8, http://www.time.com/time/magazine/article/0,9171,904799-1,00.html, JMB, accessed 6-23-11) Spurred on by World War II, then the cold war, then Sputnik, U.S. science rose to an unprecedented level of prestige in the 1960s. Yet even as it is gaining its greatest triumphs—the moon, the green revolution, the ability to control and even change the processes of life—science and scientists have come under increasing attack. Some more reasonable critics argue that the antiscience barrage promises more good than harm for a field that has been enjoying too high a priority for too long. Science Writer Lawrence Lessing, a member of FORTUNE'S board of editors, does not agree. In the magazine's March issue, he argues that if the current "senseless war" on science and its kindred discipline, technology, continues much longer, the U.S. will be a considerably worse place in which to live. Seamless Web. Lessing acknowledges that the "apocalyptic mood has been stirred by some very palpable social miscarriages of science and technology"—notably the Indochina war and the environmental crisis. Still, he cannot accept "the proposition that America needs less growth, less knowledge, less skill, less progress." Scientists and engineers, he says, "are increasingly cast as the villains of this emotional drama. But it should be obvious that science by its nature and structure can offer society only options." Lessing points out that the traditional role of scientists is advisory, and as often as not their advice is ignored. "The height of the new folly," he says, "is the rising call upon scientists and technicians to foresee all the consequences of their actions and to make a moral commitment to suppress work on any discovery that might some day be dangerous, which is to demand that they be not only scientists but certified clairvoyants and saints." There is also danger in the notion that society can choose what it wants of science and destroy what it feels is valueless or threatening. "Science is indivisible," Lessing states, "a seamless web of accumulated knowledge, and to destroy a part would rip the whole fabric. Every discovery or invention of man has this dual aspect"—a potential for both benefit and harm. He warns that it does no good to try to retreat to an earlier century, and he quotes Konrad Lorenz, the famed naturalist and animal behaviorist, who has been warning hostile student audiences that if they tear down knowledge to start afresh, they will backslide 200,000 years. "Watch out!" Lorenz cautions the students. "If you make a clean sweep of things, you won't go back to the Stone Age, because you're already there, but to well before the Stone Age." Nonetheless, inflation, recession and other assorted ills have meant that in the past four years total federal expenditures on U.S. 
research and development in science and technology have declined in real dollars by more than 20%. "If the decline continues," Lessing predicts, "it will have a delayed, disastrous effect on the economy." Already, he reports, the U.S. lags behind in a variety of fields. Japan and Europe are far ahead in establishing fast, new train networks and Mexico City has completed a subway system "that is both a great feat of engineering and a work of art." In high-energy physics, Italian scientists using a colliding-beam electron accelerator have come upon "what may be a new phenomenon in the creation of matter from energy, which seems to go beyond present physical theory." France, the Soviet Union and Switzerland are all at work testing the discovery on similar accelerators, but the U.S. has only one such machine, and it is not yet fully ready for operation. In plasma physics, after a significant 1968 Soviet breakthrough in the containment of thermonuclear power, U.S. scientists ran confirming experiments that suggested that "this almost limitless, pollutionless source of energy may be nearer than was once expected. But the U.S. effort is being funded at a level, cut back again this year, that could put off this development as much as 25 to 50 years." In the life sciences, research funds are still lagging some 20%, or at least $250 million per year, behind research capacity. More than Primitive. Implicit in Lessing's analysis is the belief that man can use increasingly sophisticated science to solve his problems and, at the same time, ensure that science does not turn on its master and destroy him. He suggests that society has little choice other than to press on vigorously in scientific research; he rejects the notion that the only options are to abandon science and become primitive, or continue it and be destroyed. Lessing echoes the warning of Biochemist Philip Handler, president of the National Academy of Sciences: "If we forswear more science and technology, there can be no cleaning up cities, no progress in mass transportation, no salvage of our once beautiful landscape and no control of overpopulation. Those who scoff at technological solutions to these problems have no alternative solutions." Science and technology aren’t bad – people just use their products badly; fundamentalism and prejudice fill in instead Erickson, Science graduate of the University of Minnesota, 1993 (George, Bachelor of Science and a Doctor of Dental Surgery degree from the U of Minnesota, Humanism Today, vol. 8, p. 63-65, http://www.humanismtoday.org/vol8/erickson.pdf, JMB, accessed 6-25-11) During the Friday evening of the seventh annual Humanist Weekend airing of modernist/postmodernist viewpoints, science and technology were criticized directly by some and indirectly by others. While science and technology are fair game, it is unfortunate that time was not available to rise in their defense. Love Canal and nuclear weapons were given as examples, as they frequently are. What many forget is that science is the search for knowledge, and technology is the sum of the ways that society provides itself with the objects of civilization. To portray science or technology as villains ignores our own role in determining how their fruits are used. To the contrary, science and technology have not failed; they have been phenomenally successful. The fault lies not with scientists, who are not insensitive to the fragrance and radiance of a rose, and to imply such is more than prejudicial.
Again, to the contrary, many of the persons of science whom I have known are deeply attuned to the aesthetic nature of the world in general and of the subjects of their study in particular. To cast blame on science or technology is to join ranks with the hard-core religionists, the fundamentalists in particular, against whom science has fought a long and unrelenting battle, frequently at great cost. No, science is not at fault, nor technology. The fault is ours. It belongs to those of us who treasure opinion more than knowledge. It belongs to those who use technology to garner profits regardless of the human and environmental costs. It belongs to churches that preach love and tolerate greed, and to the ignorant and indolent who read the sports and the funnies, but rely on astrology tables or the church to show them the way. The famous physicist, Leo Szilard, fought against use of the atom bomb, cabling and sending memoranda to President Roosevelt, urging an international demonstration so that the Japanese could witness its power and surrender. But others, not scientists, determined that the bomb would be used for effect, and not as a mere demonstration. AT: Objectivity Disavowing objective fact is meaningless self-gratification that ignores the real problems of the world that we need science to solve Sokal, Physics Professor at New York University, 1996 (Alan, Prof of Physics @ NYU, “A Physicist Experiments with cultural studies” Lingua Franca May/June JF) The fundamental silliness of my article lies, however, not in its numerous solecisms but in the dubiousness of its central thesis and of the "reasoning" adduced to support it. Basically, I claim that quantum gravity--the still-speculative theory of space and time on scales of a millionth of a billionth of a billionth of a billionth of a centimeter--has profound political implications (which, of course, are "progressive"). In support of this improbable proposition, I proceed as follows: First, I quote some controversial philosophical pronouncements of Heisenberg and Bohr, and assert (without argument) that quantum physics is profoundly consonant with "postmodernist epistemology." Next, I assemble a pastiche--Derrida and general relativity, Lacan and topology, Irigaray and quantum gravity--held together by vague references to "nonlinearity," "flux," and "interconnectedness." Finally, I jump (again without argument) to the assertion that "postmodern science" has abolished the concept of objective reality. Nowhere in all of this is there anything resembling a logical sequence of thought; one finds only citations of authority, plays on words, strained analogies, and bald assertions. In its concluding passages, my article becomes especially egregious. Having abolished reality as a constraint on science, I go on to suggest (once again without argument) that science, in order to be "liberatory," must be subordinated to political strategies. I finish the article by observing that "a liberatory science cannot be complete without a profound revision of the canon of mathematics." We can see hints of an "emancipatory mathematics," I suggest, "in the multidimensional and nonlinear logic of fuzzy systems theory; but this approach is still heavily marked by its origins in the crisis of late-capitalist production relations."
I add that "catastrophe theory, with its dialectical emphasis on smoothness/discontinuity and metamorphosis/unfolding, will indubitably play a major role in the future mathematics; but much theoretical work remains to be done before this approach can become a concrete tool of progressive political praxis." It's understandable that the editors of Social Text were unable to evaluate critically the technical aspects of my article (which is exactly why they should have consulted a scientist). What's more surprising is how readily they accepted my implication that the search for truth in science must be subordinated to a political agenda, and how oblivious they were to the article's overall illogic. Why did I do it? While my method was satirical, my motivation is utterly serious. What concerns me is the proliferation, not just of nonsense and sloppy thinking per se, but of a particular kind of nonsense and sloppy thinking: one that denies the existence of objective realities, or (when challenged) admits their existence but downplays their practical relevance. At its best, a journal like Social Text raises important issues that no scientist should ignore--questions, for example, about how corporate and government funding influence scientific work. Unfortunately, epistemic relativism does little to further the discussion of these matters. In short, my concern about the spread of subjectivist thinking is both intellectual and political. Intellectually, the problem with such doctrines is that they are false (when not simply meaningless). There is a real world; its properties are not merely social constructions; facts and evidence do matter. What sane person would contend otherwise? And yet, much contemporary academic theorizing consists precisely of attempts to blur these obvious truths. Social Text's acceptance of my article exemplifies the intellectual arrogance of Theory--postmodernist literary theory, that is--carried to its logical extreme. No wonder they didn't bother to consult a physicist. If all is discourse and "text," then knowledge of the real world is superfluous; even physics becomes just another branch of cultural studies. If, moreover, all is rhetoric and language games, then internal logical consistency is superfluous too: a patina of theoretical sophistication serves equally well. Incomprehensibility becomes a virtue; allusions, metaphors, and puns substitute for evidence and logic. My own article is, if anything, an extremely modest example of this well-established genre. Politically, I'm angered because most (though not all) of this silliness is emanating from the selfproclaimed Left. We're witnessing here a profound historical volte-face. For most of the past two centuries, the Left has been identified with science and against obscurantism; we have believed that rational thought and the fearless analysis of objective reality (both natural and social) are incisive tools for combating the mystifications promoted by the powerful--not to mention being desirable human ends in their own right. The recent turn of many "progressive" or "leftist" academic humanists and social scientists toward one or another form of epistemic relativism betrays this worthy heritage and undermines the already fragile prospects for progressive social critique. Theorizing about "the social construction of reality" won't help us find an effective treatment for AIDS or devise strategies for preventing global warming. 
Nor can we combat false ideas in history, sociology, economics, and politics if we reject the notions of truth and falsity. The results of my little experiment demonstrate, at the very least, that some fashionable sectors of the American academic Left have been getting intellectually lazy. The editors of Social Text liked my article because they liked its conclusion: that "the content and methodology of postmodern science provide powerful intellectual support for the progressive political project." They apparently felt no need to analyze the quality of the evidence, the cogency of the arguments, or even the relevance of the arguments to the purported conclusion. And why should self-indulgent nonsense--whatever its professed political orientation--be lauded as the height of scholarly achievement? Science never claims perfect objectivity or universal truth Willower, Professor at Pennsylvania State University, and Uline, Professor at Ohio State University, 2001 (Donald, Penn State U, and Cynthia, Ohio State U, Journal of Education Administration Vol. 39.5 Pg. 457-459 JF) Related criticisms of science are that it seeks ultimate reality and final, universal truth and since neither has been demonstrated, science is flawed. Such criticisms are misleading because they are so blatantly incorrect. As an open, growing activity, one of the main characteristics and great strengths of science is its self-corrective nature. In science, there are no final or universal truths, only theories that can be assessed using a variety of logical and evidentiary criteria, and subject to modification or replacement at any time. Similarly, the notion of an ultimate reality waiting to be uncovered and revealed by science is long out-of-date, a vestige of nineteenth century scientism. Yet, both of these charges are sometimes laid upon science by critics, with postmodernists/poststructuralists (Derrida, 1973, 1976) but one example. Little more need be said about final, universal truth, except to stress that even the most established theoretical explanations are subject to self-rectifying inquiry which might provide even better established ones. Ultimate reality is an awkward term that conjures up images of scientists pulling away the curtain of ignorance to gaze upon the true world. The terms ultimate reality and universal truth in a sense feed on each other because presumably to see the former is to know the latter. The problem is that such a conception of scientific inquiry is a figment of the postmodern imagination in search of a straw man. What actually happens is that a theory is created and tested and if the evidence fails to reject the theory, it gains in credibility. Theories are genuine constructions, but their assessment depends on judgments of adequacy using criteria developed in scientific communities. Theories supported by logic and cumulative evidence over time achieve relatively high levels of credibility, but words like ultimate reality and final truth are foreign to inquiry, found mainly in the conceptions of science provided by the fantasies of its antagonists. When it comes to truth, we prefer Dewey's (1938) concept of warranted assertibility. It holds that propositions are warranted by logic, and by evidence gathered in the process of inquiry. This view is consistent with scientific practice and is antithetical to universal or final truths. It sees science and knowledge as open, changing, and growing, not as closed, static, and settled.
Science has objective truths – provable beyond reasonable doubt Nagel, Professor at New York University, 1998 (Thomas, prof at NYU, The New Republic, Oct 12, pp. 32-38, http://www.physics.nyu.edu/faculty/sokal/nagel.html, JMB, 6-25-11) As Sokal and Bricmont point out, the denial of objective truth on the ground that all systems of belief are determined by social forces is self-refuting if we take it seriously, since it appeals to a sociological or historical claim which would not establish the conclusion unless it were objectively correct. Moreover, it promotes one discipline, such as sociology or history, over the others whose objectivity it purports to debunk, such as physics and mathematics. Given that many propositions in the latter fields are much better established than the theories of social determination by which their objectivity is being challenged, this is like using a ouija board to decide whether your car needs new brake linings. Relativism is kept alive by a simple fallacy, repeated again and again: the idea that if something is a form of discourse, the only standard it can answer to is conformity to the practices of a linguistic community, and that any evaluation of its content or its justification must somehow be reduced to that. This is to ignore the differences between types of discourse, which can be understood only by studying them from inside. There are certainly domains, such as etiquette or spelling, where what is correct is completely determined by the practices of a particular community. Yet empirical knowledge, including science, is not like this. Where agreement exists, it is produced by evidence and reasoning, and not vice versa. The constantly evolving practices of those engaged in scientific research aim beyond themselves at a correct account of the world, and are not logically guaranteed to achieve it. Their recognition of their own fallibility shows that the resulting claims have objective content. Sokal and Bricmont argue that the methods of reasoning in the natural sciences are essentially the same as those used in ordinary inquiries like a criminal investigation. In that instance, we are presented with various pieces of evidence, we use lots of assumptions about physical causation, spatial and temporal order, basic human psychology, and the functioning of social institutions, and we try to see how well these fit together with alternative hypotheses about who committed the murder. The data and the background assumptions do not entail an answer, but they often make one answer more reasonable than others. Indeed, they may establish it, as we say, "beyond a reasonable doubt." That is what scientists strive for, and while reasonable indubitability is not the position of theories at the cutting edge of knowledge, many scientific results achieve it with time through massive and repeated confirmation, together with the disconfirmation of alternatives. Even when the principles of classical chemistry are explained at a deeper level by quantum theory, they remain indispensably in place as part of our understanding of the world. AT: Bias Even if scientists are biased, the community solves Siegel, Professor and Chair of the Department of Philosophy, University of Miami, 1985 (Harvey, Professor and Chair of the Department of Philosophy, University of Miami, Philosophy of Science, Vol. 52, No. 4, p. 531, JSTOR) I have been writing as if science's utilization of SM is perfect; as if every scientist always bases her claims evidentially. This is, of course, not so. 
As scarcely needs mention, scientists are passionate, human creatures, not automatons who routinely grind out results according to some formula for establishing evidential support. But this fact is perfectly compatible with the view of SM offered here. For science is a communal affair; individual passions and commitments are controlled by community assessment. In cases of dispute, settlement comes on the heels of relevant new evidence. Sometimes such evidence is not available—in which case disputes remain open. Sometimes evidence is at hand but not taken as such; at other times it may be denied or distorted. In these last cases SM is not fully in command, and we might well say that those who fail to honor the commitment to evidence forfeit their claim to the title "scientist." But these considerations do not harm the analysis. For an account of SM should not pretend that the scientific community's reliance on SM is unwavering. It is enough to note that the community strives, ideally, for such reliance. *SM=Scientific Method 2AC Perms Synthetic Analysis The permutation solves – a synthetic analysis of science-based interdisciplinary research (IR) with ethical and social processes is the starting point for effective ocean policy Christie, School of Marine Affairs and Jackson School of International Studies, University of Washington, 11 [Patrick, 4/6/11, University of Washington, “Creating space for interdisciplinary marine and coastal research: five dilemmas and suggested resolutions,” https://depts.washington.edu/smea/sites/default/files/u43/Christie%20Multi%20Disc%20Researc.2011.Env%20Conserv.pdf, accessed 7/6/14, TYBG] The predominant environmental policy process has assumed, implicitly or explicitly, that the key knowledge gap to effective policy making is inadequate knowledge of ecological function (Christie et al. 2002; Ruckleshaus et al. 2009). With this construct, the priority has become developing adequate understandings of biology, non-human population dynamics, ecological communities and ecosystem function. Such information has been fed into the policy process, with the expectation that it will provide the key to raising awareness of environmental problems and lead to policy solutions. This has been a generally failed experiment in policy making, resulting in incomplete understandings of scale and interrelationship, inadequate policies and frustrated scientists of various disciplines (including ecologists). As an alternative, if environmental problems are construed as imbalances in coupled social-ecological systems, then the role of IR necessarily expands within the policy-making process. A comprehensive, effective and balanced policy process requires detailed empirical understandings of not only ecological, biological and physical processes, but also humanistic, ethical and social processes, derived from both basic and applied research. A review of the predominant discourse surrounding ocean decline is a useful starting point. The decline of ocean resources and ecosystems has received considerable attention within the marine scientific community and popular media (Pauly et al. 1998; New York Times 2006; Worm et al. 2006, 2009; Halpern et al. 2008; Rosenthal 2008). This rapid growth in coverage is likely due to a confluence of conditions, including the worrisome degree of ecosystem decline in many locations and an increased ability to create and analyse global data sets.
Active organizations, such as The Communication Partnership for Science and the Sea (COMPASS, URL http://www.compassonline.org) have created influential links between scientists documenting ocean decline and mass media outlets. Until recently, society did not know the extent, both geographic and ecological, of ocean condition decline. It is not an understatement that entire ecosystems, such as coral reefs (Pandolfi et al. 2003), are threatened at previously undocumented levels. There are virtually no pristine areas of the ocean today (Halpern et al. 2008). Such assessments are remarkably important, but incomplete, to inform effective policy responses. The question remains, how best to use this information, and what additional IR and synthetic analysis is necessary to shape societal policy and behavioural response? More importantly, what is likely to succeed in reversing these trends? The second question requires a broad consideration of empirical and ethical information, and is dependent on IR analysis across natural and social science disciplinary lines. It also requires a reconsideration of the role of formal science in defining the discourse surrounding policy formulation. Science Reform The perm solves best - the inadequacies and discrimination of status quo science are because the GOALS and ETHICS of scientists are what’s bankrupt, not the epistemic foundations of science - these can be reformed and corrected Antony, Masters in Sociology and Philosophy from the University of Massachusetts Amherst, 2013 (Louise, March 20th, “Epistemology or Politics? Louise Antony”, http://socialepistemology.com/2013/03/20/epistemology-or-politics-louise-antony/, accessed 7/5/14, LLM) Naomi Scheman calls attention to a number of cases in which science, as it is currently institutionalized in wealthy capitalist societies, neglects human needs or thwarts human values, specifically by neglecting the perspectives of marginalized people, or by disparaging the knowledge they possess. I share Scheman’s indignation about these cases, and about many other outrages perpetrated by the elite classes of the industrialized, capitalist West against subordinated people within and outside the societies they dominate. But I am not convinced by her analysis of the problem. Where Scheman sees a cognitive problem, I see a political one. Scheman believes that the injustices she describes are due, at least in large part, to an inadequate conception of knowledge — one that prescribes and rationalizes epistemic norms that deny epistemic authority to marginalized people (Scheman 2012, 472). She thus sees a role for epistemology to play in redressing the injustices she describes. We need, she says, to develop a “sustainable” epistemology. We must look for “a concept of knowledge, and a set of epistemic norms that ‘work’” (473). “Norms that work” are norms that are, first of all “… appropriate given what we know about ourselves and the world” (473) but also, more importantly, ones that afford “… sustainability, meaning norms that underwrite practices of inquiry that make it more rather than less likely that others, especially those … who are marginalized and subordinated … will be able to acquire knowledge in the future” (original emphasis, 473). On Scheman’s account, then, the individuals who are harmed by contemporary forms of inquiry are victims of what Miranda Fricker calls “epistemic injustice.” They have knowledge and methods of knowing that would, if properly respected, increase and enrich our stores of knowledge.
It is certainly true that marginalized people are frequently made victims of epistemic injustice — this is, indeed, part of what “marginalization” consists in. And it is also true that members of dominant groups have much to learn from those whom they subordinate. But whereas Scheman’s analysis says that members of the dominant class fail to learn because they harbor defective concepts of knowledge and employ ineffective norms, I say that they fail to learn because they don’t care. In indicting the epistemology of the dominant, Scheman seems to be saying that the methods of inquiry they adopt are inadequate to their epistemic goals. I, on the other hand, insist that their methods of inquiry are all too adequate, and that it is their epistemic goals that are wrong. The founding error is not cognitive; it is moral, and the corrective lies not in philosophy, but in politics. Consider the case of the “pharmaceutical scientists” Scheman asks us to “think about” (474). These are scientists who “ … upon learning of a plant used for healing, snip off a piece of it, collect some basic information about its uses from the people who live with it and take the specimen back to the laboratory in order to “isolate the active ingredient” (474). Scheman offers this as an example of the sort of epistemic practice that arises out of the “paradigm of laboratory science” — a paradigm that “carries with it an essentialized, idealized, and abstracted object of knowledge …” (474). Perhaps Scheman is right about how these scientists think about their projects. But what does it imply about the adequacy of their methods? We’re presumably imagining researchers who are looking for substances to commoditize. Is Scheman suggesting that they’d be more successful in that endeavor if they were attentive to folk practices, or more holistic in their experimental design? I see no reason to think that this is so. If the profitability of the pharmaceutical industry is any indication, the “paradigm of laboratory science” is working just fine, thank you very much. Epistemic injustice certainly can be epistemically costly for its perpetrators. If the corporate treasure-hunters ransacking the rain forest are indifferent to the folkways of the people they encounter along the way, it’s possible that they miss, thereby, opportunities to learn things that would increase the profitability of their expeditions. But even if this is so, the epistemic loss they suffer is surely not the problem. The problem is not that profit-driven researchers neglect the epistemic practices of the people they plan to exploit; it’s that they neglect their interests. Were the researchers to adopt a less epistemically arrogant attitude without altering their objectives, they would simply become more effective exploiters. The same point can be made about one of the real-world cases discussed by Scheman: the conflict between the University of Minnesota and the Anishinaabeg people. [1] The Anishinaabeg have objected to research conducted at the University on manoomin, the grain commonly known as “wild rice.” University scientists have developed hybrid strains of the grain that are easy to cultivate in artificial paddies. The resulting increase in production of “wild rice” has caused a drop in demand for genuine manoomin, and an economic loss for the Anishinaabeg. Also, the planting of the modified strains in Minnesota threatens the genetic integrity, and continued survival of the native strains, which play a central role in Anishinaabeg culture.
Scheman says that the university’s stance toward the Anishinaabeg is “problematic both ethically and epistemically” (484). The ethical problems are manifest: university researchers have appropriated and facilitated the commoditization of a substance belonging to another people, with disregard for both the cultural practices of and the economic consequences for those people. But what are the epistemic problems here? Scheman (following Jill Doerfler), points to the discourse of agricultural research, and the university’s claims to be engaged in the “improvement” of the native grain. The notion of “improvement,” Doerfler says, is highly relative. From the perspective of “non-indigenous farmers,” it might mean the development of a strain that permits intensive monocropping in a commercially hospitable environment. But: For Anishinaabe, the value of manoomin is in its biodiversity; this diversity has allowed the Anishinaabe to be able to depend on it regardless of disease and weather, because even if one variety is attacked by disease or does not respond favourably to the environmental conditions the other varieties will survive … (Scheman quoting Doerfler, 484). I heartily agree that the notion of “improvement” is interest-relative. Indeed, I insist on it; the observation supports my point. The conflict between the University of Minnesota and the Anishinaabeg is a conflict of interests. The Anishinaabeg have both cultural and economic interests at stake in the preservation of the native strains; the University of Minnesota researchers have a stake in producing strains that are commercially viable, a stake they probably inherit from the agencies and firms that sponsor their research. But there’s nothing in this story to suggest that there’s anything defective about the epistemic practice of the scientists at the University of Minnesota. Presumably they have been successful in finding out what they wanted to find out. Had they not been, the Anishinaabeg would not have needed to complain. Perhaps what Scheman is thinking is that the conception of knowledge inherent in the “paradigm of laboratory science” is one that encourages the idea that “improvement” can be understood apart from particular interests, perhaps by promoting a picture of the researcher as a featureless individual, devoid of both perspective and agenda. Someone in the grip of this picture might attempt to deflect interestbased criticism of a research program by appealing to the imperatives of “pure science” or by citing a researcher’s right, on intellectual grounds, to pursue any research he or she found interesting. But Scheman does not make this charge explicitly, and neither does Doerfler, as far as we know from Scheman’s citations. Of course, if university officials had offered such a defense, it would have been patently disingenuous. But even if such a defense had been offered sincerely, it would only have shown what someone thought was going on in the research plant, not what was actually happening. [2] And it’s the actual practices that must be implicated if Scheman is to show that the problem in this case is epistemic. 
Ethics Pure research is intrinsically tied to ethics and cultural values – the aff is the first step towards ethical clarity Calvert, Professor of Science and Technology Policy Studies, SPRU, University of Sussex and Martin, Associate Director Professor of Science Policies at Texas State University , 1 [Jane, Ben, 9/2001, SPRU, “Changing Conception of Basic Research,” http://www.oecd.org/science/scitech/2674369.pdf, accessed 7-4-14, TYBG] The importance of the value of this ideal of basic research is shown by the association of basic research with other culturally valued concepts. Basic science was explicitly tied to ethical values by two physicists; one argued that the methods of science themselves engender scrupulous honesty and integrity, claiming that doing science makes one become a better person. Status also often came up in discussions about the relation between basic and applied research. A US biologist noted that: The elitism of basic science still hangs around. There's still a lot of people, mainly older people, who still look on industrial collaborations as being slightly tainted and dirty, and it's 'prostitution' to do applied research. A UK physicist supported this point, saying that some of the research council committees "still think it's undignified to do anything which is useful". The status of basic research is clearly important and has an impact on the ability of scientists to obtain funding. These associations of basic research with ethics and status demonstrate that there are deep cultural values embedded in the notion of basic research. These historically important values should be acknowledged, especially if we are considering altering long-standing terminology. When describing their research as 'basic research*, scientists are implicitly drawing on these attributes associated with the concept. Strong Objectivity Striving for a sense of strong objectivity instead of a utopian ideal of pure objectivity is good in and of itself – it allows for the inclusions of marginalized people and can reverse institution-controlled science into an epistemologically sound advocacy Harding, philosopher of feminist and postcolonial theory, epistemology, research methodology, and philosophy of science at the University of Delaware, 92 (Sandra, After the Neutrality Ideal: Science, Politics, and "Strong Objectivity", http://lifesandlanguages.wikispaces.com/file/view/Harding+-+Strong+Objectivity.pdf, accessed 7/5/14, LLM) THERE ARE TWO kinds of politics with which the new social studies of science have been concerned. One is the older notion of politics as the overt actions and policies intended to advance the interests and agendas of "special interest groups." This kind of politics "intrudes" into "pure science" through consciously chosen and often clearly articulated actions and programs that shape what science gets done, how the results of research are interpreted, and, therefore, scientific and popular images of nature and social relations. This kind of politics is conceptualized as acting on the sciences from outside, as "politicizing" science. This is the kind of relationship between politics and science against which the idea of objectivity as neutrality works best.' However, in a sometimes supportive and at other times antagonistic relation to it is a different politics of science. 
Here power is exercised less visibly, less consciously, and not on but through the dominant institutional structures, priorities, practices, and languages of the sciences.^ Paradoxically, this kind of politics functions through the "depoliticization" of science through the creation of authoritarian science. As historian Robert Proctor points out: It is certainly true that, in one important sense, the Nazis sought to politicize the sciences ... . Yet in an important sense the Nazis might indeed be said to have "depoliticized" science (and many other areas of culture). The Nazis depoliticized science by destroying the possibility of political debate and controversy. Authoritarian science based on the "Fuhrer principle" replaced what had been, in the Weimar period, a vigorous spirit of politicized debate in and around the sciences. The Nazis "depoliticized" problems of vital human interest by reducing these to scientific or medical problems, conceived in the narrow, reductionist sense of these terms. The Nazis depoliticized questions of crime, poverty, and sexual or political deviance by casting them in surgical or otherwise medical (and seemingly apolitical) terms ... . Politics pursued in the name of science or health provided a powerful weapon in the Nazi ideological arsenal. The institutionalized, normalized politics of male supremacy, class exploitation, racism, and imperialism, while only occasionally initiated through the kind of violent politics practiced by the Nazis, similarly "depoliticize" Western scientific institutions and practices, thereby shaping our images of the natural and social worlds and legitimating past and future exploitative public policies. In contrast to "intrusive politics," this kind of institutional politics does not force itself into a preexisting "pure" social order and its sciences; it already structures both. In this second case, the neutrality ideal provides no resistance to the production of systematically distorted results of research. Even worse, it defends and legitimates the institutions and practices through which the distortions and their exploitative consequences are generated. It certifies as value-neutral, normal, natural, and therefore not political at all the existing scientific policies and practices through which powerful groups can gain the information and explanations that they need to advance their priorities. It functions more through what its normalizing procedures and concepts implicitly prioritize than through explicit directives. This kind of politics requires no "informed consent" by those who exercise it, but only that scientists be "company men," supporting and following the prevailing rules of scientific institutions and their intellectual traditions. This normalizing politics defines the objections of its victims and any criticisms of its institutions, practices, or conceptual world as agitation by special interests that threatens to damage the neutrality of science. Thus, when sciences are already in the service of the mighty, scientific neutrality ensures that "might makes right." This essay pursues a project begun in other places: to strengthen the notion of objectivity for the natural and social sciences after the demise of the ideal of neutrality."* I turn first to the problem of thinking past the epistemological relativism that critics of the neutrality ideal either embrace or commit. 
Instead, we can begin to discern the possibility of and requirements for a "strong objectivity" by more careful analysis of what is wrong with the neutrality idea. Standpoint epistemologies provide resources for fulfilling these requirements. Finally, I suggest that the usefulness of the notion of truth, like that of epistemological relativism, should be historically relativized; the unnecessary trouble both make in the postneutrality debates originates in their intimate links to the rejected neutrality ideal. The ideal of objectivity as neutrality is widely regarded to have failed not only in history and the social sciences, but also in philosophy and related fields such as jurisprudence.^ The notion contains a number of elements. In the following passage, Peter Novick describes how it appears in the thinking of historians; but with appropriate adjustments this passage expresses objectivist assumptions more generally: The assumptions on which [the ideal of objectivity] rests include a commitment to the reality of the past, and to truth as correspondence to that reality; a sharp separation between knower and known, between fact and value, and, above all, between history and fiction. Historical facts are seen as prior to and independent of interpretation: the value of an interpretation is judged by how well it accounts for the facts; if contradicted by the facts, it must be abandoned. Truth is one, not perspectival. Whatever patterns exist in history are "found," not "made." The objective historian's role is that of a neutral, or disinterested judge; it must never degenerate into that of advocate or, even worse, propagandist. The historian's conclusions are expected to display the standard judicial qualities of balance and evenhandedness. As with the judiciary, these qualities are guarded by the insulation of the historical profession from social pressure or political influence, and by the individual historian avoiding partisanship or bias—not having any investment in arriving at one conclusion rather than another.^ What is left of the objectivity ideal when neutrality is abandoned? Fairness, honesty, and an important kind of "detachment," to start. Thomas Haskell, for example, points out that it is absurd to assume—as Novick does—that in giving up the goal of neutrality one must give up the ideal of objectivity: The very possibility of historical scholarship as an enterprise distinct from propaganda requires of its practitioners that vital minimum of ascetic self-discipline that enables a person to do such things as abandon wishful thinking, assimilate bad news, discard pleasing interpretations that cannot pass elementary tests of evidence and logic, and, most important of all, suspend or bracket one's own perceptions long enough to enter sympathetically into the alien and possibly repugnant perspectives of rival thinkers. All of these mental acts—especially coming to grips with a rival's perspective—require detachment, an undeniably ascetic capacity to achieve some distance from one's own spontaneous perceptions and convictions, to imagine how the world appears in another's eyes, to experimentally adopt perspectives that do not come naturally—in the last analysis, to develop, as Thomas Nagel would say, a view of the world in which one's own self stands not at the center, but appears merely as one object among many.' Notice that the detachment called for here is not impersonality. 
The observer is not to act as if s/he were not a social person, or to separate even more from those s/he studies (when it is people or institutions that are the object of study), but, instead, critically to distance from the assumptions that shape his or her own "spontaneous perceptions and convictions." Haskell is concerned here with something different from the distorting effects of the intrusion of politics into neutral science. Instead, it is the distorting "politics of the obvious" to which he is drawing attention. Sometimes this can be a matter of idiosyncratic individual assumptions; but these are relatively easily identified by peers who check research designs, sources, and observations. More problematic are the spontaneous perceptions and convictions that are shared by a scientific community and, usually, by the dominant groups in the social order of which the scientists are members by birth and/or achievement. It is reflexivity that is the issue here: self-criticism in the sense of criticism of the widely shared values and interests that constitute one's own institutionally shaped research assumptions. Haskell's kind of retrieval of the concept of objectivity from its "operationalization" as maximizing neutrality is extremely valuable. However, to become more than a mere moral and intellectual gesture—to become a competent program that can guide research practices—we need some procedures or strategies to pursue that could systematically lead away from wishful thinking, refusing to come to terms with bad news, refusing "to enter sympathetically into the alien and possibly repugnant perspectives of rival thinkers," etc. Otherwise, it is perfectly clear that only the already marginal groups will be regarded as engaging in these bad habits by those with the most authoritative voices in the social order and our research disciplines. The latter, with no conscious bad intent, will arrive at such judgments by simply following the normalizing procedures of institutions and conceptual schemes legitimated already as value-neutral. Without strategies to maximize this kind of objectivity, these moral exhortations remain only idle gestures. These "minorities" have additional cause for alarm at the retreat to gestures. In some of the most influential criticisms of objectivism and its assumptions, the effects on historical, sociological, or scientific belief of such macro social structures as the racial order, the class system, imperialism, and the gender order are completely and sometimes even intentionally ignored. In others, the contributions of research and scholarship that begins from the lives of people of color and feminists are devalued and even attacked. Yet the articulation of the perspective from the lives of just such marginalized peoples as racial minorities in the first world, third-world people, women, and the poor has provided some of the most powerful challenges to the adequacy of objectivism. The gestures of mainstream writers to the value of good intentions coupled with their persistent failures to manage to "be fair" to the most "alien and possibly repugnant" competing claims cannot give much hope to those who have persistently lost the most from the conceptual practices of power. Embracing or committing epistemological relativism has the effect of defending the dominant views against their most telling critics. Does relativism itself need to be relativized?
Informed Debates The permutation is best – informed policy debates must take questions of race, class, geopolitics, and identity into account Bolster, Chair of Humanities at the University of New Hampshire, associate professor of history, 6 (W. Jeffrey, “Opportunities in Marine Environmental History,” Environmental History 11 http://fishhistory.org/wp-content/uploads/2010/05/BolsterEH2006.pdf, accessed 7/6/14, LLM) This essay makes a case for the support and development of marine environmental history. We need to better understand many things: how different groups of people made themselves in the context of marine environments, how race, class, fashion, and geo-politics influenced the exploitation and conservation of marine resources, how individual and community identities (and economies) changed as a function of the availability of marine resources, how technological innovation frequently masked declining catches, how fishermen’s knowledge of localized depletions accumulated in the past, how public policy debates revealed historically specific values associated with the ocean, how collaboration between (and then antagonism among) fishermen and scientists affected marine environments, how faith in the certainty of marine science waxed and waned, how different cultures perceived the ocean at specific times, and—when possible— how past marine environments looked in terms of abundance and distribution of important species.18 These are the constituent parts that get to a deeper historical question: the nature of the greatest sea change in human history. Only good marine environmental history can get to the heart of the ecological and cultural transformations that have cast the twenty-first-century ocean as vulnerable rather than eternal. Despite obstacles and problems, preliminary work in this field makes it look immediately relevant, professionally challenging, and intellectually rewarding. Balanced Debates Key Ocean policy must be dictated by a balanced debate between science-based value judgments and competing advocacies Campbell et. Al., Nicholas School of Environment, Duke University, 9 [Lisa M., Noella J. Gray, Elliott L. Hazen, Janna M. Shackeroff, “Beyond Baselines: Rethinking Priorities for Ocean Conservation,” Ecology and Society, Vol. 14, No. 1, TYBG] Third, if our critique of SBS is correct (or even partially so), then the goal of pursuing recovery of natural baselines is questionable both in the normative and practical sense, ecologically and otherwise. We do not oppose the identification of recovery goals, but such goals are neither self-evident nor “natural.” Rather, decisions about the appropriate baseline to target and mechanisms for pursuing it will involve value judgments. Following Lélé and Norgaard (1996) in their critique of sustainability, we suggest that values need to be explicitly acknowledged when setting priorities for ocean conservation. The values SBS writers espouse regarding pristine ecosystems are one set of values that may inform priority setting, but not the only ones. Other stakeholders will promote other values, and these need to be recognized. There is a practical need for such recognition; an argument that rests on the notion that oceans are unpeopled and that characterizes fishers and other resource users as unnatural intrusions will do little to engage these stakeholders. 
But calls for recognition of competing values are also philosophical and reflect debates about the interactions of science, values, and advocacy that are taking place both in general (e.g., in conservation biology, see Lackey 2007) and in fisheries management specifically (e.g., Jentoft 2006), a point we return to in the conclusion. Humans Good The permutation solves - their totalizing critique of human interaction with nature objectifies the environment and ignores complexities of social and environmental systems – a balance is key Campbell et al., Nicholas School of Environment, Duke University, 9 [Lisa M., Noella J. Gray, Elliott L. Hazen, Janna M. Shackeroff, "Beyond Baselines: Rethinking Priorities for Ocean Conservation," Ecology and Society, Vol. 14, No. 1, TYBG] There are several consequences arising from the separation of humans from marine nature. First, because humans are "naturally" outside of marine nature, when they do enter ecological equations, they are a problem, serving as top-down (e.g., through fishing) or bottom-up (e.g., through direct and indirect pollution) forcers, or as habitat modifiers (e.g., via trawling and dredging effects). More specifically in SBS, fisheries are the problem, reflecting both the etymology of SBS (Pauly is a fisheries ecologist) and that most long-term data available come from fisheries. Although we do not question what is now global concern with declining fish stocks, we do suggest that the emphasis in SBS on human drivers of change overlooks the role of non-anthropogenic variability in marine ecosystems as described above, and (more importantly) reinforces a static vision of nature in equilibrium prior to human exploitation, a nature to which things are done. This belies the complexity of both ecosystems and social systems, and the links between them. Objectifying humans as exploiters of and separate from nature also narrows the scope of research to one aspect of human–environmental relations, suggesting that regardless of human agency, all humans behave in the same way. This overlooks the ways in which individuals, groups, or institutions not only degrade, but also conserve and restore oceans. Fem – Dialectical Objectivity The permutation solves best - dialectical objectivity can bridge the gap between environmental ethics and feminist epistemologies James, PhD in Philosophy from the University of California, San Diego, 2K (Christine, "Objective Knowledge In Science: Dialectical Objectivity and the History of Sonar Technology", http://teach.valdosta.edu/chjames/dissword.pdf, accessed 7/4/14, LLM) When I began this dissertation, I hoped to develop a notion of objectivity that would provide promising solutions to a variety of issues in the Philosophy of Science. This is why it opened with a quote from Haraway and Harding's call for a successor science project. This call pinpoints the kind of conflict I had hoped to ease, the kind of wound in the Philosophy of Science literature that I hoped to heal. Members of the feminist and mainstream branches of Philosophy of Science and Science Studies seem to regard each other with great distrust, yet they seem concerned with many of the same questions: How can we adequately account for the social influences on science, for both their positive and negative implications? How can we acknowledge those social influences without lapsing into relativism? How can science be subject to social influence and still produce standards, still produce success?
I believe that a dialectical sense of objectivity can answer those questions. Similarly, questions about objects themselves have been asked by philosophers for centuries: does the object itself produce knowledge, sense data, qualities? Does the way in which an object is represented or presented to the knower influence, or even predetermine, what can be known about that object? I believe that a dialectical sense of objectivity can produce a richer account of the object's role in the creation of knowledge, the account that has been needed for a better understanding of science. One implication of dialectical objectivity, and the answers it can provide, is the possibility of making peace between feminist and mainstream philosophers of science. The interweaving of Kitcher's philosophy of science with the work of Harding and Longino showed that dialectical objectivity is the common ground that can motivate a new understanding of science that transcends previous disagreements and misinterpretations. However, dialectical objectivity is valuable also for other areas in philosophy. One such area is environmental ethics. Many environmental ethicists are deeply concerned with the distinction between an environmental ethics of care and one based on intrinsic value. Dialectical experience may provide a ground for ethical arguments that can move beyond that distinction. In point of fact, environmental ethicists such as Jim Cheney have already begun work in similar directions. In his 1999 article "Environmental Ethics as Environmental Etiquette: Toward an Ethics-Based Epistemology" co-written with Anthony Weston, it is argued that a robust environmental ethic emerges not from disengaged epistemological contemplation, but from a deep revisioning of what it is to interact with the world: Our task is not to "observe" at all -- that again is a legacy of the vision of ethics as belief-centered -- but rather to "participate" (Cheney 1999, 128). This notion of participation is rooted in a dialectical process. A new appreciation of the dialectical nature of that process has a happy result: if we develop a robust sense of participation or interaction, as giving both a new ethics as well as new knowledge of the world and of the self, then we can envision new kinds of arguments for ethical obligations. One such argument could establish ethical responsibility towards those with whom we interact because when we interact they contribute to our mutual constitution and our reciprocal definitions. We can make the case that we have a moral obligation to those who help us define ourselves. These kinds of arguments would also have a positive effect on the relationship of epistemology and ethics: in a concrete way we can show how new knowledge can lead to, and provide a foundation for, better ethics. 2AC Turns Disease Basic science key to solve disease Privitera, Research Associate Professor, University of Cincinnati, 9 (Mary B., "Interconnections of basic science research and product development in medical device design", http://static.squarespace.com/static/51ed9ee5e4b072e1e9acd667/t/51fabdb2e4b07e1682eb809d/1375387058278/basic_science_design.pdf, accessed 7/9/14, LLM) According to the AMA, basic science research is the investigation of a subject to increase knowledge and understanding about it. The information gathered from basic science research is essential for "translating" or applying new discoveries to patient care (2).
Basic science research has the objective as the advancement of knowledge wherein its beauty is in the quest for understanding of the world as we know it. As conducted, e.g. develop a hypothesis, design an experimental protocol to test the hypothesis, conduct an experiment or survey, and use an appropriate statistical analysis of the data. The process explores the breaking apart of elements in experimentation within a particular environment. That said, often it is in the unexpected results during experimentation that leads to a practical application of the knowledge gained. Traditionally, basic science research is considered as an activity that preceded applied research or translational research, which in turn preceded development into practical applications and most often completed in an academic setting. The reality for medical device design is that the basic fundamental scientific principles applied to a product design to treat a particular disease are constantly being verified and tested through both direct application in patient care and full clinical trials. In essence the science behind the device is under constant review and exploration. These reviews are conducted not only within institutions but also in the industrial entities that have vested interest. Basic science key to working with clinical research to solve disease Sampath, Professor in the Division of Neonatology Pediatrics, and Children’s Research Institute, Medical College of Wisconsin, and Ramchandran, Department of Obstetrics and Gynecology, Developmental Vascular Biology Program, Neonatology Pediatrics and Children’s Research Institute at the Medical College of Wisconsin, 14 (Venkatesh and Ramani, May 22nd, “Translational Research – It Takes Two to Tango!”, Volume 1, Issue 1, http://medcraveonline.com/MOJCSR/MOJCSR-01-00003.pdf, accessed 7/9/14, LLM) Introduction From a basic scientist perspective, translational research is defined as the ability to use knowledge of basic mechanisms responsible for fundamental cellular processes to understand the causative mechanisms underlying human disease. From the perspective of a clinical scientist, translational research implies use of data from human studies and clinical trials to enhance our understanding of human disease and improve health outcomes. A broader definition of translational research would encompass use of all data (clinical or basic research) to understand human disease and improve health outcomes. In this era of the global information explosion, the obstacle in translating research is not primarily in the availability of or the accessibility to information, but rather in the application of information to solve real world problems. Basic scientists have been educated and trained to be minimalist by nature to design simple experiments with hypotheses grounded on solid preliminary data. However, this approach is counterintuitive to the majority of disease processes, which by nature are multi-faceted, and evolve over time. On the other hand, clinical scientists understand complexity but feel challenged in understanding the molecular mechanisms that drive disease pathogenesis; a key to identifying the molecular targets and mechanisms for pharmacological intervention. An amalgamation of both these approaches is needed for science to be translated successfully. On face value, the basic scientist attempts to deconstruct the disease process by simplifying the tangles and approaches it in a stepwise fashion. 
The clinical scientist on the other hand approaches the problem with investigating trends, correlations and statistical analysis of significance. There lies the basic difference in approaches, which eventually leads to the two worlds that look at the same problem in isolation. Grant writing and funding pressures for basic scientists, and extensive clinical responsibilities with increasing pressures to generate revenue pushes the two worlds further apart into their respective comfort zones. One might then ask why is translational research important? Interestingly, the same pressures that keep the basic scientists and clinical scientists apart are now ironically working actively to make them work together. For the basic scientists, the National Institutes of Health (NIH), the main funding agency has increasingly focused its efforts on the “health,” portion of its acronym. Scientists are no different than other professionals – where the money goes so goes the research priority of basic scientists. The same “health,” push from NIH has concomitantly fueled clinicians who have been frustrated with minimal options in the clinic for their patients to get into research. Institutions and administrations have increasingly seized upon this opportunity and pressure has increased at all levels for both basic scientists and clinicians to perform “translational research.” Unfortunately, this economic push has not coincided with an important factor that is necessary to achieve success in this arena, which is the availability of a workforce that is ready to take on this challenge. Although MD/PhD dual degree clinical scientists are ideally poised to take advantage of this critical push, the numbers of such investigators are not at a critical mass to take full advantage of the opportunities in medicine. Importantly, the range of their professional options is broad, and only 39% devote more than 75% effort to research [1]. Therefore, we are left with the breed of PhD’s and MD’s who, understanding the limitations of their own science, become cheerfully “uncomfortable” to get the job done. Unless we find a way to work together, and understand each other’s language and code, the road ahead is likely to be challenging. Elias Zerhouni, MD (former Director of NIH) [2] summarized this elegantly- “At no other time has the need for a robust, bidirectional information flow between basic and translational (clinical) scientists been so necessary”. Basic research key to successful applied disease prevention and curing techniques OSC, Educational Firm Licensed Under Rice University, 14 (OpenStax College, February 20th, “The Process of Science”, http://cnx.org/content/m45421/latest/content_info#cnx_cite_header, accessed 7/9/14, LLM) Basic science or “pure” science seeks to expand knowledge regardless of the short-term application of that knowledge. It is not focused on developing a product or a service of immediate public or commercial value. The immediate goal of basic science is knowledge for knowledge’s sake, though this does not mean that in the end it may not result in an application. In contrast, applied science or “technology,” aims to use science to solve real-world problems, making it possible, for example, to improve a crop yield, find a cure for a particular disease, or save animals threatened by a natural disaster. In applied science, the problem is usually defined for the researcher. 
Some individuals may perceive applied science as “useful” and basic science as “useless.” A question these people might pose to a scientist advocating knowledge acquisition would be, “What for?” A careful look at the history of science, however, reveals that basic knowledge has resulted in many remarkable applications of great value. Many scientists think that a basic understanding of science is necessary before an application is developed; therefore, applied science relies on the results generated through basic science. Other scientists think that it is time to move on from basic science and instead to find solutions to actual problems. Both approaches are valid. It is true that there are problems that demand immediate attention; however, few solutions would be found without the help of the knowledge generated through basic science. One example of how basic and applied science can work together to solve practical problems occurred after the discovery of DNA structure led to an understanding of the molecular mechanisms governing DNA replication. Strands of DNA, unique in every human, are found in our cells, where they provide the instructions necessary for life. During DNA replication, new copies of DNA are made, shortly before a cell divides to form new cells. Understanding the mechanisms of DNA replication enabled scientists to develop laboratory techniques that are now used to identify genetic diseases, pinpoint individuals who were at a crime scene, and determine paternity. Without basic science, it is unlikely that applied science would exist. Another example of the link between basic and applied research is the Human Genome Project, a study in which each human chromosome was analyzed and mapped to determine the precise sequence of DNA subunits and the exact location of each gene. (The gene is the basic unit of heredity; an individual’s complete collection of genes is his or her genome.) Other organisms have also been studied as part of this project to gain a better understanding of human chromosomes. The Human Genome Project (Figure 6) relied on basic research carried out with non-human organisms and, later, with the human genome. An important end goal eventually became using the data for applied research seeking cures for genetically related diseases. Economy Basic science is key to sustainable economic growth Usher, Science Development.Net Reporter, 13 (O., July 31st, “Basic science linked to faster economic growth”, http://www.scidev.net/global/rd/news/basic-science-linked-to-faster-economic-growth.html, cites to the cited study: Jaffe K, Caicedo M, Manzanares M, Gil M, Rios A, et al. (2013) Productivity in Physical and Chemical Science Predicts the Future Economic Growth of Developing Countries Better than Other Popular Indices, June 12th, 2013, accessed 7/9/14, LLM) Middle-income countries that focus on basic sciences, such as physics and chemistry, grow their economies faster than nations that invest in applied sciences, such as medicine or psychology, according to a paper by Venezuelan researchers. They say that "investing in basic scientific research seem[s] to be the best way a middle-income country can foment fast economic growth", although they found no direct cause and effect between basic science and economic development. Instead, they believe that investment in basic sciences — as indicated by the proportion of published articles in these fields — reveals a rational, decision-making atmosphere within a country and among its leaders, as well as promoting economic growth. 
Klaus Jaffe, lead author of the paper and coordinator of the Centre for Strategic Studies of Simón Bolívar University in Venezuela, tells SciDev.Net that the correlation between scientific productivity and economic growth "has always been suspected, but there has been very little evidence that supports this idea. "We had been observing that poor or middle-income countries were growing at a different pace than the developed ones and we wanted to know why." The study, published in PLOS One last month (12 June), set out to investigate if some areas of science promote development more than others, and if applied sciences are better at advancing economic development than basic sciences. The researchers examined the correlation between World Bank data on the growth of GDP (gross domestic product) per capita and the proportion of scientific publications in different scientific fields. They found scientific productivity in basic science, including physics, chemistry and material sciences, correlated strongly with countries' economic growth over the following five years. And preferential investment in technology, without investment in basic sciences, achieved little economic development, they say. "Thus, technology without science is unlikely to be sustainable." They also discovered that scientific productivity was a much better predictor of economic wealth and the Human Development Index — a composite of life expectancy, education and income indices used to rank countries' development — than other commonly used indices, such as indices of competitiveness or globalisation. "The results of our paper demonstrate that the most important thing [for sustainable development] is to invest in basic sciences. Those who try to skip this step fail," says Jaffe. Education Basic science is the only reliable way to improve education Kaysen, MIT Political Economy Professor, JFK advisor, 65 [Carl, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] Second, there is the intimate relation between the conduct of research and the provision of higher education in science and technology. Trained scientists, engineers, and doctors are needed in increasing numbers to operate the apparatus of society in defense, industry, and health, as well as to continue the stream of improvements in that apparatus that we have experienced in the past and expect in the future. The training of these specialists is increasingly carried on in close connection with the conduct of both basic and applied research. There is wide agreement among both the consumers and producers of specialized scientific and technical training that an intimate relationship between research and teaching in these areas is necessary, and that the best centers for training are those that provide this connection. This is a requirement for the support of research that would exist even in the absence of a useful application of the knowledge that the research produced (2). Democracy Basic research is essential to science education and functioning democratic societies Verhoogen, Berkeley Geology Professor, 65 [John, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] THIS CARD HAS BEEN MODIFIED TO REMOVE GENDERED LANGUAGE, Letters in parentheses have been added Even in our pragmatic culture, usefulness is not the sole criterion of merit.
Basic research has a much broader justification in that the quest for knowledge is one of (the Hu)man's most characteristic and vital urges; the desire to know is perhaps what most sharply separates him (us) from beast. Most of human history can be read as an incessant query, the search for answers to unceasing questions: What is the stuff of the universe, and why; what is life, and how did it start? It is properly (hu)mankind's heritage that knowledge is an essential aspiration—to give insight into the circumstances of our existence, and to give us freedom from fear of natural forces. To put it simply: Human beings want bread, and they want freedom, and some of them want to know. At this point it is not inappropriate to consider the close relationship of science to a free society. Is it accidental that the 18th century produced at the same time the first great burst of basic science and the first great step toward free democratic societies? Is it mere coincidence that the American Constitution and the French Declaration of Human Rights are contemporaneous with the great mathematicians, physicists, chemists, geologists, on whose work all of our modern science still rests? Many have considered the relationship between a free society and the scientific spirit to be fundamental. A democratic society, it is said, is one that is uniquely favorable to the scientific spirit; conversely, a society is more likely to prosper and remain free if it fosters in all its citizens the spirit of free inquiry, the desire to know, the search for new and better ideas, and the curiosity that are basic ingredients of science. Even though science has occasionally been misused, and scientists have supported undemocratic philosophies, it remains true that allowing the scientific mind free play is a means of strengthening the individual freedom of mind without which a democracy may find it hard to survive. It should be pointed out, in fairness to other aspects of culture, that science is not unique in promoting democratic welfare: Philosophy and the arts are just as indispensable as science. The study of history is surely a better guide to political wisdom than is quantum mechanics. It has been said again and again that science cannot flourish when divorced from the humanities, and to that view we subscribe. Support of science must entail support of the liberal arts. A full discussion of this matter should properly find its place in a report on governmental support of education, which is not the subject of this paper; let it suffice at the moment to remind the reader that good science requires good education, in the broadest acceptation of that term. Environment (General) Lack of public scientific literacy dooms projects to cause environmental destruction – only new methods of analysis can solve Baptista et al, Ph.D. in Civil Engineering from MIT and director of NSF Sci & Tech center, 8 [Antonio, 2008, "Scientific exploration in the era of ocean observatories," http://vgc.poly.edu/~juliana/pub/cmop-cise2008.pdf, 7-6-14, FCB] Society's critical and urgent need to better understand the world's oceans is amply documented and has led to a unique convergence of operational and scientific interests in the US, organized around the concept of ocean observatories: cyber-facilitated integrations of observations, simulations, and stakeholders.
In particular, programs are emerging aimed at creating an operational Integrated Ocean Observing System (IOOS)1 to address broad society needs and an open, ocean-observing research infrastructure (the Ocean Observatories Initiative [OOI]).2 Perhaps no part of the ocean is in more need of observatories than coastal margins, which are among the most densely populated and developed regions in the world. Coastal margins sustain highly productive ecosystems and resources, are sensitive to many scales of variability, and play an important role in global elemental cycles. But natural events and human activities place stresses on these margins, rendering the development of sustainable resources and ecosystems challenging and contentious, with policy decisions often based on insufficient scientific understanding of the causes and consequences of natural and anthropogenic impacts. Effective scientific exploration thus requires the ability to generate a wide variety of analyses for a broad audience in an ad hoc manner. In this article, we introduce an observatory in evolution, offer a futuristic vision of observatory-enabled scientific exploration, and discuss how provenance is essential to make this vision a reality. Focus on pure research produces new methodologies to solve ecological crises Box and Barker, chairs of the influential Urban Forum of the UK-Man and the Biosphere Committee, 14 (John and George, "An Agenda for Urban Biodiversity - Green Grids, Design Codes and Fiscal Incentives," www.researchgate.net/publication/259763013_An_agenda_for_urban_biodiversity__green_grids_design_codes_and_fiscal_incentives, Accessed 7-10-2014, LK) It is reasonable for businesses, Local Government and Government-funded agencies to pay for research and review which helps tackle particular practical issues, but pure research is needed as well. Without the development of understanding which this will bring, we may be – and probably often are – addressing the wrong issues in programmes of applied research. To focus solely on problem-solving risks continually narrowing the field of view. Pure research broadens the perspective and sheds fresh light. Universities and research institutes should reassess the balance of their programmes of research into ecology - and related disciplines including ecosystem functioning, sociology and psychology - to give more weight to pure research which will bring benefits to all in the longer term. In the face of serious environmental challenges accurate scientific knowledge is lacking – that creates a disconnect between the public and scientific reality Deacon et al, historian specializing in oceanography and fellow @ School of Ocean and Earth Southampton U, 01 [Margaret, Understanding the Oceans: A Century of Ocean Exploration, pg. 1, FCB] This is a time of well-founded concern about the ocean, and how well we understand its processes. Specific problems such as the depletion of fisheries, the effects of marine pollution and the impact of mineral extraction, as well as broader questions such as the role of the ocean in controlling the Earth's climate, not only pose challenges to scientific understanding but also represent factors that could influence both the lives and livelihoods of many, and potentially alter the earth as we know it.
In appreciating what is involved in such issues, let alone what might be done to avoid unwelcome consequences, accurate scientific knowledge of the sea is imperative. This is the province of oceanographers, but it is not only scientists who are concerned about these matters and a wider knowledge of what they are doing should form part of the public debate. This is not as straightforward as it might seem. For one thing, late twentieth-century oceanography is a dynamic and therefore rapidly changing discipline. Developments during recent decades have transformed our scientific knowledge of the sea, with the result that present-day understanding of the ocean differs significantly from what was possible even fifty, let alone a hundred years ago. This is in many respects due to advances in technology, which have made possible new modes of exploring an environment notoriously hard to study. At the same time, wider concerns about the ocean, and growing awareness of its significance in the Earth's past, present and future, and in the origin and maintenance of life, have helped to create a more general interest in such work, beyond the immediate scientific community actively engaged in research. Too often, this wider audience lacks works able to bridge the gap between fragmented and specialized scientific literature and the needs of a broader readership, a function that textbooks only partially fulfil. Older general sources (such as Herring and Clarke 1971) continue to be valuable, but the subject has altered considerably since those days. Some books outstandingly help to change the outlook of a generation. Rachel Carson's The sea around us, first published in 1950, was such a one; perhaps Sylvia Earle's Sea Change (1995) may be another. There is, however, a continuing need for works, such as Summerhayes and Thorpe (1996), which are neither popular accounts on the one hand, nor written exclusively for those working in the same field on the other, and which try to present authoritative accounts of scientific understanding of the ocean at the close of the twentieth century to a wider audience, whether scientists or non-scientists. Heg Basic research underpins all aspects of American leadership Coletta, PhD in Political Science at Duke University, 9 (Damon, Masters in Public Policy @ Harvard, Assoc Prof of Geopolitics & National Security Policy @ US Air Force Academy, September 2009, http://www.usafa.edu/df/inss/Research%20Papers/2009/09%20Coletta%20Science%20and%20InfluenceINSS(FINAL).pdf, accessed 7/9/14, LLM) The social science literature recognizes that the best practical solution is somewhere in between, and anticipating Zakaria, that the dilemma is less acute if the professionals develop Adam Smith's moral sentiments, that is, if the expert advisers see themselves as officers with a stake in the larger system. The more seriously professionals take this moral code to serve the principal and not game the system by exploiting asymmetric knowledge for their individual benefit, the more autonomy they can be granted, and the more the republic can gain from expertise in military affairs, intelligence analysis, or economic strategy. Particularly after the U.S. government's dramatic expansion of patronage for science through the Office of Naval Research in 1946, science is home to one of those professions vital for maintaining national power and position in the international system.
Furthermore, a familiar principal-agent dilemma confounds democratic attempts to strike the balance between technocratic virtuosity and public accountability. At present, the difficulties mission-oriented bureaucracies like ONR have in detecting and nurturing Nobel quality work in the basic sciences suggest that democratic constraints are set too tight. To regain the reputation abroad for outstanding American Science, government sponsors will have to grant scientists more autonomy at home, especially in the field of basic research. Program directors and scientist beneficiaries at university will garner more freedom from politicians and policymakers if they can embrace a professional ethos both patriotic and moral. If these professionals internalize social benefits to science, to mankind, and to America's international influence from fulfilling the public trust, American democracy can scale back its regulations. It can also subdue debilitating demands for timely material results without fretting over the loyalty of experts serving on the remote frontiers of science. Congress should set aside a percentage of executive agency budgets, not just for Science & Technology or Research & Development as broad categories, but for basic research, what Defense calls 6.1 in particular. Politicians understandably worry that with fewer strings attached to this money, science experts will unavoidably have greater temptation to defraud the public or substitute their preferences for those of political masters in the mission agencies. Nevertheless, more progress reports, more assessment rubrics, and tighter integration with technology demands increase accountability only at the cost of enervating the national effort to expand the frontiers of knowledge. Zakaria had it correct: in the long run no system, certainly no democracy, can retain the lead internationally in scientific, economic, or political development if its professionals will not hew to duty, especially when no one is looking. Basic research is key to US leadership, current spending priorities ensure collapse MIT Technology Review, 5 [Massachusetts Institute of Technology Technology Review, 3-1-05, "Follow the Money," http://www.technologyreview.com/featuredstory/403763/follow-the-money/, FCB] For many in the technology community, the threat of crisis became much more vivid in early December when President Bush signed off on the fiscal year 2005 U.S. federal budget. While this year's budget increases spending for research and development by 4.8 percent to $132.2 billion, most of that increase – 80 percent – goes to defense R&D, and most of that to new weapons development, according to the American Association for the Advancement of Science (AAAS). In fact, defense-related R&D reached a record high $75 billion this year. One winner was the U.S. Department of Homeland Security, which gets a 19.9 percent increase in its R&D budget, to $1.2 billion. The big loser is the National Science Foundation (NSF), which had its R&D budget cut by 0.3 percent, to $4.1 billion; it was the first cut in NSF's budget since 1996. Meanwhile, R&D funding for the National Institutes of Health (NIH) increased by just 1.8 percent to $27.5 billion; it was NIH's smallest percentage increase in years, and well below the rate of inflation. "Defense and homeland security are very important. I can't criticize funding increases per se in those areas," says Shirley Ann Jackson, president of Rensselaer Polytechnic Institute in New York and the 2004 AAAS president.
“But the bigger issue is sustaining focus and support for funding of basic research across broad fronts. We have to have a robust base of basic research. We’re talking about potentially eroding that base.” Jackson adds, “Other places will innovate. The question is, are we going to be a leader? If we don’t pay attention to the warning signs, 15, 20 years from now, we could find ourselves in a relatively disadvantageous position in terms of global leadership.” Experts also worry that the federal R&D budget has become too skewed toward relatively mature technologies. “A lot of the federal funding has gotten a little more conservative and risk averse. The government needs to put a bigger percentage in radical innovation and more-exploratory research – technology that’s going to be transformational,” says Deborah Wince-Smith, president of the Council on Competitiveness, a group of industry, university, and labor leaders based in Washington, DC. Amar Bose, professor emeritus at MIT and founder of the Bose audio company in Framingham, MA, puts it more bluntly: “Research funding is going downhill, and I don’t see it turning around. We’re going to have trouble.” Budget talks in the US are only focus on applied science, basic or pure science is key to international leadership Coletta, PhD in Political Science at Duke University, 9 (Damon, Masters in Public Policy @ Harvard, Assoc Prof of Geopolitics & National Security Policy @ US Air Force Academy, September 2009, http://www.usafa.edu/df/inss/Research%20Papers/2009/09%20Coletta%20Science%20and%20Influenc eINSS(FINAL).pdf, accessed 7/9/14, LLM) Based on how U.S. constitutional democracy is structured, we should observe a recurring tension between society‘s desire to benefit from professional expertise and its demand for accountability. In the U.S. case, scientific advice and scientific research sponsored by the state have been articulated across mission-oriented agencies serving an urgent governmental function—defense, commerce, health, agriculture. With few exceptions, even the National Science Foundation is not entirely immune, research sponsors and laboratories within the U.S. Government feel enormous pressure. Operational branches of the Executive agencies, massive in terms of budget and personnel compared to R&D, as well as Congressional representatives on key authorizing committees, push U.S. Science to be technologically relevant. In the language of the Defense Department‘s framework, there is a steep downhill slope running from 6.1 (basic research) to 6.2 (applied research). We have seen evidence of the tendency to slip away from pure science sponsorship in the budget hearings of Congressional committees on Science and Technology and Armed Services, as well as the evolution of the nation‘s first post-World War II science agency—the Office of Naval Research—away from basic research. The question remains whether democratic pressure to harvest superior technology, to the point of neglecting what one chair of the Projection Forces Subcommittee, House Armed Services called the seed corn of innovation, levies costs on U.S. foreign policy serious enough to hamper the superpower‘s bid for sustainable hegemony. In this regard, Brazil provides an interesting case study. 
While no single country can constitute a representative sample that would reveal an average score for how well the United States leverages scientific leadership to sustain international hegemony, successes and challenges of the Brazil case tell about the likelihood that the United States can optimize its resources and maintain its leadership role in other regions of the world.56 Laundry List Pure science is a moral and esthetic necessity which determines social health Kaysen, MIT Political Economy Professor, JFK advisor, 65 [Carl, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] The argument so far has been couched entirely in instrumental terms. The value of basic research has been assessed in terms of other goods, for which it is a necessary input: military strength, health, economic growth. This is a narrow view: scientific research can be viewed as itself a desired end-product in at least two different ways. First, it may be a significant separate component of national power in our nationalistic, competitive, less-than-orderly world of many nations. Second, it is an esthetically and morally desirable form of human activity, and the increase in this activity is itself a proper measure of social and national health. I myself—as might be expected of an academic—share the second view. I am skeptical of the first, since I believe that the politically significant element of prestige which rests on excellence in science is related to the military and economic significance imputed to scientific leadership. Nonetheless, I think it is unnecessary to debate the merits of either of these views, since the investment or instrumental aspects of basic research are in my judgment of sufficient importance to provide a basis for policy judgment independently. None of the arguments above that justify Federal support for basic scientific research provide in themselves a measure of what level of expenditure is necessary or desirable. Indeed the nature of the arguments themselves is such as to make it impossible for any precise payoff calculation to be made. In sum, they say expenditure on basic science is investment in a special kind of social overhead—knowledge and understanding—that contributes directly and indirectly to a wide variety of vital social purposes. It is in the very nature of an overhead that a nice calculation of the "right" amount to expend on it is difficult. While we could conceive a level of research activity so small that education and applied research began visibly to suffer, and equally, we can conceive a flow of funds so generous that they would obviously be wastefully employed, the limits between the two are very wide. Pure science pays dividends, adds to the stock of knowledge and must be federally funded Kaysen, MIT Political Economy Professor, JFK advisor, 65 [Carl, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] The fundamental justification for expending large sums from the Federal budget to support basic research is that these expenditures are capital investments in the stock of knowledge which pay off in increased outputs of goods and services that our society strongly desires.
However, the nature of the payoff is such that we can appropriately view these investments as social capital, to be provided in substantial part through the Government budget, rather than private capital to be provided through the mechanism of the market and business institutions. Broadly, the payoff of basic research in the aggregate to the whole of society is clear, as we shall argue in some detail below. However, the fruits of any particular piece of research are so uncertain in their character, magnitude, and timing as to make reliance on the market mechanism to provide an adequate flow inappropriate. The market mechanism operates on the principle that he who pays the costs gets the benefits, and vice versa, and relies on an anticipation of benefits that is certain enough to justify the outlays required to realize them. The benefits of the kind of knowledge that basic research seeks are usually difficult or impossible to keep for a particular firm or individual. Indeed, the knowledge is often useful as it can be added to the general stock of scientific knowledge that is held in common by the community of those technically proficient in the relevant discipline. Thus a business firm which paid for a particular piece of basic research work could not, in general, prevent its result from being used by others. Further, the uncertainty as to just what would result, and when, and as to whether the useful purpose to which it could be applied would in fact be one that was relevant to the activities of the firm, would in general make expenditure in support of this work an unattractive investment. Finally, several of the kinds of payoffs from basic research relate to outputs that are already the product of Government activity, rather than of business operating through the market mechanism. We can distinguish at least four different kinds of benefits to the community that flow from basic research. Pure science is independently key to heg, medicine and quality of life Kaysen, MIT Political Economy Professor, JFK advisor, 65 [Carl, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] We can distinguish at least four different kinds of benefits to the community that flow from basic research. First, it is a major input to the advance of applied science and technology, from which there flows continuing growth in our military capability, our health, and our productive capacity. This point is obvious and needs little elaboration. But it is worth reminding ourselves that the relation between input and output is an elastic one. The relationship of the whole revolution in military technology, which began in World War II and is still continuing, to advances in basic science of the preceding generation has been discussed at length and frequently. In medicine, we can mention the practical therapeutic fruits of research on vitamins and hormones carried on by physicians, biochemists and physiologists. In industry, we can compare the history of transistors, on the one hand, with that of neon and fluorescent lighting on the other. In the first case, the passage from basic research to wide industrial application was unusually rapid; in the second, more than 50 years passed between the first systematic scientific examination of the phenomena of electric discharge tubes and fluorescence, and their practical applications in lighting. 
An even longer gap, and a much less predictable set of applications, is exemplified by the period that lay between Cayley’s development of matrix algebra, and its use in such diverse fields as aerodynamics and the analysis of communication networks. The only barrier to solving global problems is an insufficient amount of pure research; the aff more permanently solves all of your impacts Kaysen, MIT Political Economy Professor, JFK advisor, 65 [Carl, Basic Research and National Goals: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, pg. 147 – 167, FCB] Third, experience shows that an applied research and development effort, undertaken with the purpose of solving specific practical problems, benefits from a close relation with basic research. This is true both in general and in the individual research laboratory. The whole body of scientists and engineers in applied research establishments—whether in defense or industry or medicine, private business or government—do their job of problem-solving more effectively when they are in contact with scientists undertaking basic research in areas that underlie their particular problems. Many industrial laboratories have found this to be true by experience, and either incorporate basic research groups or try to achieve the same effect by visiting and consulting arrangements with university scientists. In overall terms, there appear to be no exceptions to the proposition that nations with strong capabilities in applied science and technology have strong capability in basic research; though the association does not necessarily hold in the reverse direction. Finally, the corps of scientists working on basic research represent an important reserve of capability in applied research and development that can be drawn upon when national needs dictate. Our experience in World War II showed the tremendous reliance placed on so-called scientists in military research and development. This was true not only in nuclear weapons, but also in radar, sonar, proximity fuses, and other critical fields of military research. The talent of the superior scientist lies to a large extent in his/her ability to make conceptual inventions and functioning devices. These are precisely the talents required to make large forward strides in technology in a short time. Indeed, but for the stimulus to American science created directly and indirectly by the inflow of refugees from Europe in the 1930's, it would not have been possible for us to do all that we did do during the war. If we allowed basic research to sink to the level represented by what might be paid for by business and educational institutions out of their own funds, we would be deprived of much of this reserve. In the future, we could envision circumstances in which we might wish to draw on this reserve capability for other purposes than military needs; indeed, in the space field, and to some extent in connection with problems of civilian technology and assistance to developing countries, we can see some examples of this kind already.
Warming Science is good because it is self-correcting – without it we won’t and can’t act on global warming Johnson, Graduate of UW-Madison in Computer Science, writes about urban planning, energy policy, science and entrepreneurship, 6 (Tad, November 14, 2006, “Science,” http://www.tadfad.com/2006/11/14/science/, accessed 7-12-14, LK) Science is one of the few places one can find Truth. It is not based on conjecture, opinion, hearsay, myth, or faith. Science is not politics. Science is not journalism. Science is certainly not religion. Science is built exclusively on truths that combine to make Truth. One of the most important facets of science is the (aptly named) scientific method. The scientific method requires measurable, repeatable, documented observations that together prove or disprove a hypothesis. By following the scientific method, scientists can remove the human element from the end result. Any properly trained and equipped scientist could repeat an experiment to confirm some assertion. Before any scientific study is published, it must endure the scrutiny of peer review to ensure its merit. I cannot stress this last point enough. Unlike politics, religion, journalism, etc., this ensures that the end result of the scientific method is entirely separated from the scientist. There is no room for spin, interpretation, bias, or opinion. This is why I capitalize Truth. Science is the closest thing to Truth that we will ever know. A common counter-argument to the merits of science is that scientific Truth changes. That is, what was once considered Truth is now rejected as a flawed theory. To the contrary, the ability for science to correct prior errors makes it all the more powerful. The continuous search for Truth is what makes science so important. Now the hook: the Bush administration has been very hostile towards science. They have cut funding at the EPA and NASA (among others) to the point where important studies cannot be done. They have stifled reports and attempted to discredit important findings by countering with opposing “studies”. Case in point: global warming. There are over one thousand peer reviewed studies in print that conclude that humans are drastically altering the composition of the earth’s atmosphere and, therefore, climate. There are zero peer reviewed studies that conclude otherwise. Yet by finding a handful of scientists to go on national television and refute this conclusion, the Bush administration has convinced the American public that the issue is still up for debate. You might notice an important distinction: there may be plenty of scientists that think global warming is a myth. You will find exactly zero peer reviewed scientific studies that conclude the same. Unfortunately, most people are not familiar enough with science to understand this fundamental difference. 2AC AT: DAs AT: China DA Science isn’t zero sum – US-China scientific cooperation is mutually beneficial Chu, Secretary for the DoE, 11 (Steven, January, “U.S.-China Clean Energy Cooperation”, http://www.us-chinacerc.org/pdfs/US_China_Clean_Energy_Progress_Report.pdf, accessed 7/13/14, LLM) Science is not a zero-sum game. In my experience as a scientist, collaborations with other research groups greatly accelerated our progress. Similarly, cooperation between the United States and China can greatly accelerate progress on clean energy technologies, benefiting both countries.
As the world’s largest producers and consumers of energy, the United States and China share many common challenges and common interests. Our clean energy partnership with China can help boost America’s exports, creating jobs here at home, and ensure that our country remains at the forefront of technology innovation. At the U.S. Department of Energy, we are committed to working with Chinese partners to promote a sustainable energy future. Working together, we can accomplish more than acting alone. The United States and the People’s Republic of China have worked together on science and technology for more than 30 years. Under the Science and Technology Cooperation Agreement of 1979, signed soon after normalization of diplomatic relations, our two countries have cooperated in a diverse range of fields, including basic research in physics and chemistry, earth and atmospheric sciences, a variety of energy-related areas, environmental management, agriculture, fisheries, civil industrial technology, geology, health, and natural disaster planning. More recently, in the face of emerging global challenges such as energy security and climate change, the United States and China entered into a new phase of mutually beneficial cooperation. In June 2008, the U.S.-China Ten Year Framework for Cooperation on Energy and the Environment was created and today it includes action plans for cooperation on energy efficiency, electricity, transportation, air, water, wetlands, nature reserves and protected areas. In November 2009, President Barack Obama and President Hu Jintao announced seven new U.S.- China clean energy initiatives during their Beijing summit. In doing so, the leaders of the world’s two largest energy producers and consumers affirmed the importance of the transition to a clean and low-carbon economy—and the vast opportunities for citizens of both countries in that transition. Science isn’t zero sum Plumer, Political Analyst for the Washington Post, 13 (Brad, October 17th, “Why China isn’t likely to overtake the U.S. in science anytime soon”, http://www.washingtonpost.com/blogs/wonkblog/wp/2013/10/17/despite-congress-best-efforts-the-us-isnt-about-to-lose-its-top-spot-in-science/, accessed 7/13/14, LLM) But is China really about to pass the United States in science? Probably not. At least, probably not within the next decade. Gwynn Guilford has a great report in Quartz today noting that the Chinese don't seem to be getting nearly as much value as one might think from all that R&D spending. Among other things, she points to a new policy brief (pdf) from Guy de Jonquières of the European Centre for International Political Economy that makes a few key points: — China isn't getting as much value for its R&D. In 2012, China spent $300 billion on R&D, more than Japan and Germany combined and second only to the United States. But that's not as impressive as it sounds. "R&D spending is, at best, no more than a crude measure of input: it says nothing about output," notes de Jonquières "A 2008 study by Duke University found that engineering degrees in the U.S. were generally of a higher standard than those in China. ... Indeed, calculations by Ernst & Young, the accountancy firm, find that China has not been moving closer to the 'technology frontier' — defined as the performance level achieved by the world's most advanced and efficient economies — but slipping steadily further away from it." — Too many of China's patents are low-quality. 
Although China now puts out roughly as many patents as the United States does, that’s also a misleading indicator. U.S. patents tend to be much higher quality on the whole. “In 2011, fewer than a third of applications to China’s patent office were classified as ‘innovation’ patents, and these accounted for only one tenth of all patents granted between 1985 and 2010,” Jonquières writes. “The remainder were lower-quality design and utility-model patents that need to meet far less demanding standards — so much so that some Chinese experts have said that they risk bringing the whole patent system into disrepute.” — Churning out journals doesn’t always lead to better research. China churns out more scientific papers than anyone but the United States — and has its sights set on the top spot. But here again, the quality is uneven. “The ultimate say in content falls not to peer scientists but to the Communist Party secretary of the Chinese institution sponsoring the journal,” reports Guilford. “Intense competition to achieve quick results and thereby improve personal promotion prospects has led to widespread academic plagiarism.” While China is putting out nearly as many journal articles as the United States, the latter’s still get cited far, far more often, the UK Royal Society found. That said, none of the above is great news — not even for the United States. Scientific research isn’t typically a zero-sum game. New discoveries can have positive spillover benefits for the entire world. If Chinese scientists advance our understanding of how to fight cancer, that’s good for everyone. If China’s inventions are mostly low-quality, that’s a loss. AT: Politics DA Reed and Carcieri will push the aff Chicoine, Political Analyst for the Off New Digest, 7 (C. A., January 26th, “RHODE ISLAND’S QUONSET POINT/DAVISVILLE FACILITY BEING EVALUATED AS HOMEPORT FOR FIRST OCEAN EXPLORATION SHIP”, http://newenglandpride.blogspot.com/, accessed 7/13/14, LLM) Jan. 19, 2007 — NOAA is evaluating Quonset Point/Davisville, R.I., as the future homeport of the Okeanos Explorer—the nation’s first federal ship dedicated solely to ocean exploration—as part of an environmental assessment to be completed this spring. “Okeanos Explorer will break the mold for the way the nation conducts at-sea research in the future. We have better maps of Mars and the far side of the moon than of the deep and remote regions of Earth,” said retired Navy Vice Admiral Conrad C. Lautenbacher Jr., Ph.D., undersecretary of commerce for oceans and atmosphere and NOAA administrator. “Senator Reed and Governor Carcieri have been outspoken champions of the oceans. Their support combined with the wealth of academic and oceanographic institutions in New England would lead to many exciting collaborations in ocean exploration.” The Okeanos Explorer is a former Navy surveillance ship (USS Capable) that was transferred to NOAA in 2004 with the bipartisan support of Congress. The full conversion is expected to be complete in the spring of 2008. The ship will conduct research and discovery expeditions in support of the NOAA Office of Ocean Exploration. Using sophisticated ocean mapping, deepwater remote-operated vehicles, and real-time data transmission, the ship will unlock clues to the world’s oceans—of which 95 percent remains unexplored. Quonset Point/Davisville is in close proximity to many labs and universities associated with the ship’s ocean exploration mission.
The site was identified as best able to facilitate and enhance critical ocean research partnerships and to spur technological innovation in ocean research. Homeporting Okeanos Explorer at Quonset Point/Davisville also would support NOAA’s efforts to increase regional collaboration, leverage existing resources of NOAA and its partners, and generate an observational capacity greater than the sum of its parts. Quonset Point/Davisville also is in close proximity to a new telecommunications center to be constructed on the University of Rhode Island’s Narragansett campus. Called the Inner Space Center, it will be the ocean equivalent to NASA’s space command center in Houston, Texas. The Inner Space Center would be able to link to Okeanos Explorer via a high bandwidth satellite system and make it possible for scientists and educators to participate in ocean exploration cruises real-time without ever stepping foot on the ship. “I am pleased NOAA has identified Quonset Point/Davisville as an ideal place to homeport Okeanos Explorer. This is an exciting announcement for Rhode Island and the field of ocean exploration,” said Senator Jack Reed. “Rhode Islanders value the ocean. It shapes our culture, economy and the health of our planet. URI and other local institutions are at the forefront of studying and exploring our oceans. Their unique academic and communications resources will significantly enhance the value of Okeanos.” "I'm very pleased that NOAA has agreed to seriously consider basing the Okeanos Explorer in the Ocean State," Rhode Island Governor Donald L. Carcieri said. "I have long argued that Rhode Island can and should be one of America's leading centers of oceanic research. To further that goal, I worked with Senator Reed and Admiral Lautenbacher to bring the Okeanos Explorer to Rhode Island. Doing so will enable our state to build on the research capacity we've already developed at URI, while also exploiting the potential of Quonset Point/Davisville as a launching point for exploring the ocean's untapped and largely unknown resources. I especially want to thank NOAA and Admiral Lautenbacher for recognizing Rhode Island's potential." “It would be very fitting for the Ocean State to serve as the homeport for the first NOAA ship focused exclusively on ocean exploration,” said Rear Admiral Samuel P. De Bow Jr., director of the NOAA Commissioned Officer Corps and the NOAA Office of Marine and Aviation Operations, which manages the NOAA fleet. A team of oceanographers from across the country are already helping to plan the ship’s first voyage of exploration that will be launched from Hawaii in 2008 to explore the Pacific Ocean, the world’s largest and least explored ocean. As part of the NOAA fleet, Okeanos Explorer will be operated, managed and maintained by the NOAA Office of Marine and Aviation Operations. Its crew will consist of technical specialists, wage mariners, scientists, and commissioned officers of the NOAA Corps—the nation’s seventh uniformed service. The Corps is composed of scientists and engineers who provide NOAA with an important blend of operational, management and technical skills that support the agency’s environmental programs at sea, in the air and ashore. A NOAA Corps officer will command Okeanos Explorer. NOAA, an agency of the U.S. Commerce Department, is celebrating 200 years of science and service to the nation. 
From the establishment of the Survey of the Coast in 1807 by Thomas Jefferson to the formation of the Weather Bureau and the Bureau of Commercial Fisheries in the 1870s, much of America's scientific heritage is rooted in NOAA. NOAA is dedicated to enhancing economic security and national safety through the prediction and research of weather and climate-related events and information service delivery for transportation, and by providing environmental stewardship of the nation's coastal and marine resources. Through the emerging Global Earth Observation System of Systems (GEOSS), NOAA is working with its federal partners, more than 60 countries and the European Commission to develop a global monitoring network that is as integrated as the planet it observes, predicts and protects. It’s bipartisan or it’s small enough it would get tacked on with a laundry list bill like CJS Reed, Rhode Island Senator for the US Congress, 13 (Jackson, June 18th, “Reed Helps Advance Plan to Boost Job Creation, Public Safety, & Scientific Research Senate Appropriations Committee approves FY 2014 CJS Bill with bipartisan support”, http://www.reed.senate.gov/news/releases/reed-helps-advance-plan-to-boost-job-creation-publicsafety-and-scientific-research, accessed 7/13/14, LLM) “The bipartisan CJS bill is a smart investment in keeping our communities safe, boosting innovation, and growing economic opportunities. I am pleased we were able to get strong bipartisan support in committee and I hope the full Senate will work together to pass this bill and strengthen our economy,” said Reed. Highlights of the bill that Senator Reed supported include: JOBS, RESEARCH, & INNOVATION $7.4 billion for the National Science Foundation (NSF), an increase of $186 million over fiscal year 2013. The increase will provide 510 more competitive grants in fiscal year 2014. This includes $163.5 million for the Experimental Program to Stimulate Research (EPSCoR). The report also directs NSF to fund the Academic Fleet, which includes URI Graduate School of Oceanography’s research vessel Endeavor, at no less than the 2012 level $948 million for the National Institute of Standards and Technology (NIST), $141 million above the fiscal year 2013 enacted level. This funding enables a set of initiatives that will catalyze innovations, develop measurements, and provide technical resources to promote the global competitiveness of U.S. manufacturers and aspiring start-ups. This includes $153 million for the Hollings Manufacturing Extension Partnership (MEP), as well as funding for Advanced Manufacturing Technology Consortia (AMTech), which will help manufacturers accelerate development and adoption of cutting edge manufacturing technologies for making new, globally competitive products. $18 billion for the National Aeronautics and Space Administration (NASA) and their research partners at universities to continue science, aeronautics, technology, and human space flight breakthroughs. This includes $18 million in funding for the NASA EPSCoR program and $40 million of NASA space grants. $276 million for the Economic Development Assistance (EDA), $56 million above fiscal year 2013. The bill includes $100 million for public works projects, $25 million for the Regional Innovation Program to help more than 250 communities plan regional strategies for long-term growth, leverage billions in private investment, and generate thousands of jobs. 
Since 2009, EDA has invested in approximately $34 million in funding project in Rhode Island, including more than $3 million this year to repair the pier at the Port of Galilee. The measure also includes $15.8 million to in Trade Adjustment Assistance for Firms programs, which helps U.S. companies that are affected by overseas competition. $500 million for the International Trade Administration (ITA), $27 million more than the fiscal year 2013 enacted level, to help U.S. farmers, manufacturers, and service providers sell their products overseas. The bill also supports the Interagency Trade Enforcement Center to aggressively tackle unfair trade practices hurting American businesses. PUBLIC SAFETY $8.4 billion for the Federal Bureau of Investigation (FBI), $368 million above the fiscal year 2013 enacted level. This increase in funding will allow the FBI to conduct 1,500 more terrorism, cyber intrusion, and violent crime investigations. This includes a $100 million increase in funds for the FBI to double capacity of the National Instant Criminal Background Check System (NICS) to ensure the FBI has the capacity to manage existing requirements to perform necessary background checks on prospective firearms purchasers. $2.4 billion for the Drug Enforcement Administration (DEA), $68 million above the fiscal year 2013 enacted level, to target and dismantle criminal narcotics activities and regulate and combat prescription drug abuse. $1.23 billion for the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), $100 million above the fiscal year 2013 enacted level, to reduce violent crime and enforce federal firearms and explosives laws. $2.8 billion for the U.S. Marshals Service, $63 million above the fiscal year 2013 enacted level, to apprehend dangerous fugitives, protect the Federal courts and the judiciary, and transport prisoners for court proceedings. $2 billion for the U.S. Attorneys, $79 million above the fiscal year 2013 enacted level, to prosecute cases in international and domestic terrorism, mortgage fraud and financial crime, human trafficking, child exploitation, firearms and violent crime, gangs and organized crime, and complex fraud committed in health care, identity theft, public corruption, and drug enforcement. $417 million for Violence Against Women Act (VAWA) programs to help prevent domestic violence and hold offenders accountable. $394 million for COPS grants and $385 million for Byrne Justice Assistance Grants to help local law enforcement put more cops on the beat and ensure police officers have the resources they need to protect our communities. $279 million for juvenile justice and mentoring grants. $129 million for research and evaluation initiatives on the best prevention and intervention strategies, including $35 million to Delinquency Prevention Grants, of which $5 million goes to Gang and Youth Violence Education and Prevention. $50 million for states to improve the quality of criminal and mental health records so interstate background checks are more effective. $150 million through the COPS Office to allow communities to hire school safety personnel, conduct school safety assessments, and fill gaps in school safety plans. $15 million to help train local police how to respond to active shooter situations. $2 million to encourage developments in innovative gun safety technology. 
WEATHER & OCEANS: $5.6 billion for the National Oceanic and Atmospheric Administration. $150 million for fisheries disaster funding could provide millions of dollars to support Rhode Island’s groundfishing fleet. $29.1 million for Ocean Exploration, including the Okeanos Explorer home-ported at Quonset Point and the Ocean Exploration office at URI. It’s bipartisan; new reforms prove it and have shielded it from controversy Jones, Government Relations Division at the American Institute of Physics, 14 (Richard M., 4/4/14, “House Passes Bipartisan Bill to Reorganize NOAA’s Weather Resources” http://aip.org/fyi/2014/house-passes-bipartisan-bill-reorganize-noaa%E2%80%99s-weather-resources, accessed 7/13/14, LLM) On Tuesday the House of Representatives passed the Weather Forecasting Improvement Act. Approved by voice vote, this bipartisan bill aligns the R&D activities of the National Oceanic and Atmospheric Administration’s Office of Oceanic and Atmospheric Research (OAR) with the National Weather Service. “Saving lives and protecting property should be the National Oceanic and Atmospheric Administration’s top priority. This bill codifies that priority,” said Rep. Jim Bridenstine (R-OK) as he explained the objectives of this legislation to his House colleagues. Bridenstine introduced this bill, H.R. 2413, in June 2013. The month before, Moore, Oklahoma was hit by a massive tornado that killed 24 people and injured 377. Joining Bridenstine in cosponsoring this bill and reflecting its bipartisan nature were twenty other representatives, including the chairman and ranking minority member of the House Science, Space, and Technology Committee. The Science Committee voted to send this legislation to the full House after a quick markup session in early December. Then, as was true during Tuesday’s floor action, the bipartisan nature of the bill was emphasized. The bill has now been sent to the Senate Committee on Commerce, Science, and Transportation. Selections from the House floor debate, in the order of presentation, follow: House Science Committee Chairman Lamar Smith (R-TX): “Severe weather routinely affects large portions of the United States. This past year has been no different. The United States needs a world-class weather prediction system that helps protect American lives and property. “Our leadership has slipped in severe weather forecasting. European weather models routinely predict America’s weather better than we can. We need to make up for lost ground. H.R. 2413 improves weather observation systems and advances computing and next generation modeling capabilities. The enhanced prediction of major storms is of great importance to protecting the public from injury and loss of property.” Rep. Suzanne Bonamici (D-OR): “Members on both sides of the aisle can be assured that this bill represents a truly bipartisan effort and is built on extensive discussions with and advice from the weather community.” “We drew on expert advice from the weather enterprise and from extensive reports from the National Academy of Sciences and the National Academy of Public Administration. Experts told us that, to improve weather forecasting, the research at the Office of Oceans and Atmospheric Research, or OAR, and the forecasting at the National Weather Service had to be better coordinated.
This legislation contains important provisions to improve that coordination. This bill encourages NOAA to integrate research and operations in a way that models the successful innovation structure used by the Department of Defense. “The bill we are considering today also creates numerous opportunities for the broader weather community to provide input to NOAA, and their insights as well. At every opportunity, we charge the agency to consult with the American weather industry and researchers as they develop research plans and undertake new initiatives. We also press NOAA to get serious about exploring private sector solutions to their data needs. “The bill makes clear that we expect the historical support for extramural research to continue. The engine of weather forecasting innovation has not always been found within NOAA, but is often found in the external research community and labs that work with NOAA. That collaboration must continue and will continue under this legislation. “I can assure Members on both sides of the aisle that weather research is strengthened in this bill but not at the expense of other important work at NOAA. During the committee process, we heard from witness after witness who stressed that weather forecasting involves many different scientific disciplines. This integrated multidisciplinary approach reflects an understanding that we cannot choose to strengthen one area of research at OAR without endangering the progress in the other areas because they are all interconnected. Physical and chemical laws do not respect OAR’s budgetary boundaries of climate, weather, and oceans, and this bill only addresses organizational issues in weather at NOAA.” Rep. Bridenstine: “Mr. Speaker, on May 20 of last year, a massive tornado struck Moore, Oklahoma, with very little warning. The Moore tornado killed 24 Oklahomans, injured 377, and resulted in an estimated $2 billion worth of damage. A warning was issued only 15 minutes before the tornado touched down, just 15 minutes. In fact, 15 minutes is the standard in America. Mr. Speaker, America can do better than 15 minutes. “Mr. Speaker, this bill is about priorities. When America is over $17 trillion in debt, the answer is not more spending, but to prioritize necessary spending toward its best uses. Saving lives and protecting property should be the National Oceanic and Atmospheric Administration’s top priority. This bill codifies that priority. H.R. 2413 directs NOAA to prioritize weather-related activities and rebalances NOAA’s funding priorities to bring weather-related activities to a higher amount. The bill completes this reprioritization in a fiscally responsible manner. H.R. 2413 does not increase NOAA’s overall authorization. I would like to repeat that. H.R. 2413 does not increase NOAA’s overall authorization. It doesn’t spend one more dime. “Mr. Speaker, this bill helps get weather research projects out of the lab and into the field, thereby speeding up the development and fielding of lifesaving weather forecasting technology. By requiring coordination and prioritization across the range of NOAA agencies, H.R. 2413 will help get weather prediction and forecasting technologies off the drawing board and into the field. This bill authorizes dedicated tornado and hurricane warning programs to coordinate research and development activities. It directs the Office of Oceanic and Atmospheric Research to prioritize its research and development. 
And it codifies technology transfer between OAR (the researchers) and the National Weather Service (the operators), a vital link that ensures next-generation weather technologies are implemented. “Mr. Speaker, perhaps most importantly, H.R. 2413 enhances NOAA’s collaboration with the private sector and with universities. Oklahoma is on the cutting edge of weather research, prediction, and forecasting with absolutely world-class institutions such as the National Weather Center and the National Severe Storms Laboratory at the University of Oklahoma.” “Mr. Speaker, the imbalance of NOAA’s resources is leaving America further behind our international competitors. The Science Committee received compelling testimony showing that the European Union has better capabilities in some areas of numerical weather prediction, forecasting, and risk communication, and other countries, such as Britain and Japan, are closing in fast. Misallocating resources can have terrible consequences, as my constituents and the people of Oklahoma understand all too well every tornado season. The Weather Forecasting Improvement Act is a first step toward rebalancing NOAA’s priorities, moving new technologies from the lab bench to the field, and leveraging formidable capabilities developed in the private sector and at universities.” Ocean research has popular support NAS 3 (National Academy of Sciences, http://explore.noaa.gov/sites/OER/Documents/nationalresearch-council-voyage.pdf, accessed 7-5-14, LK) There has been continued support for and success from oceanographic research in the United States, and a large-scale international exploration program could rapidly accelerate our acquisition of knowledge of the world's oceans. The current ocean-research-funding framework does not favor such exploratory proposals. Additional funding for exploration without a new framework for management and investment is unlikely to result in establishment of a successful exploration program. A new program, however, could provide the resources and establish the selection processes needed to develop ocean exploration theme areas and pursue new research in biodiversity, processes, and resources within the world's oceans. The current effort of the Office of Ocean Exploration at NOAA should not be expected to fill this role. After weighing the issues involved in oversight and funding, perhaps the most appropriate placement for an ocean exploration program is under the auspices of the interagency NOPP, provided that the problems with routing funds to NOPP-sponsored projects are solved. This solution has the best chance of leading to major involvement by NOAA, NSF, and other appropriate organizations such as the Office of Naval Research. The committee is not prepared to support an ocean exploration program within NOAA unless major shortcomings of NOAA as a lead agency can be effectively and demonstrably overcome. A majority of the committee members felt that the structural problems limiting the effectiveness of NOAA's current ocean exploration program are insurmountable. A minority of the committee members felt that the problems could be corrected. If there is no change to the status quo for NOPP or NOAA, the committee recommends that NSF be encouraged to take on an ocean exploration program.
Although a program within NSF would face the same difficulties of the existing NOAA program in attracting other federal (and nonfederal) partners, NSF has proven successful at managing international research programs as well as a highly-regarded ocean exploration program that remained true to its founding vision. AT: Info Sharing DA Information Sharing by the US is key to credible international scientific cooperation; it creates necessary goodwill to build mutual trust USGPO, Governmental Printing Office Transcript on the hearing before the SUBCOMMITTEE ON RESEARCH AND SCIENCE EDUCATION COMMITTEE ON SCIENCE AND TECHNOLOGY HOUSE OF REPRESENTATIVES, 2008 (April 2nd, “INTERNATIONAL SCIENCE AND TECHNOLOGY COOPERATION”, http://www.gpo.gov/fdsys/pkg/CHRG-110hhrg41470/html/CHRG-110hhrg41470.htm, accessed 7/13/14, LLM) The exchange of scientific information and the cooperation in international scientific research activities were identified by the first NSF Director, Alan Waterman, as two of the major responsibilities that Congress had given the agency. NSF embraced those responsibilities in its first cycle of grants, supporting international travel and the dissemination of scientific information originating overseas. NSF recognized that a two-way flow of information and individuals between nations resulted in both better science and improved international goodwill. In 1955, NSF took a comprehensive look at the role of the Federal Government in international science, and warned that it was important that “activities of the U.S. Government in the area of science not be tagged internationally as another weapon in our cold war arsenal.” NSF concluded that international scientific collaboration, based on considerations of scientific merit and the selflessness of the United States, could help ease international tensions, improve the image of the United States abroad, and help raise the standard of living among less-developed nations. NSF has long embraced multilateral projects as an essential aspect of its portfolio, beginning with the International Geophysical Year of 1957, and continuing with such activities as the International Biological and Tropical Oceans-Global Atmosphere programs, and, more recently, the International Continental Drilling Program, Gemini Observatory, Rice Genome Sequencing Project, and International Polar Year. The agency has also fostered bilateral partnerships in all parts of the world. These overarching partnerships, most of which involve extensive interagency collaboration on the U.S. side, have generated thousands of cooperative research projects on multiple scales. As you know, the Office of Science and Technology Policy (OSTP) guides and oversees the administration’s international science and technology strategies and portfolio. Through OSTP, the National Science and Technology Council (NSTC) has a pivotal role in setting priorities for and coordinating interagency collaborations, including those that are international in nature. International cooperation is integrated throughout the four committees of the NSTC, and NSF participates in this work on many levels. I currently co-chair the Committee on Science and serve as the NSF representative on the Committee on Homeland and National Security. NSF Deputy Director Kathie Olsen serves as the NSF representative on the Committee on Environment & Natural Resources and Committee on Technology. NSF is involved in most of NSTC's subcommittees and working groups, and leads many. For example, Dr.
Jim Collins, the Assistant Director of the Directorate of Biological Sciences, chairs the Biotechnology Subcommittee, and Dr. Jeannette Wing, the Assistant Director for Computer and Information Sciences and Engineering, co-chairs the Networking and Information Technology Research and Development Subcommittee. Information sharing through science solves nuclear proliferation Lowenthal, director of the Committee on International Security and Arms Control of the National Academy of Sciences, 2011 (Micah D., “Science Diplomacy for Nuclear Security”, http://www.usip.org/sites/default/files/SR_288.pdf, accessed 7/13/14, LLM) Ironically, part of the impetus for the Black Sea Experiment was a response to bad science. Professor Sagdeev explained that a Soviet institute had claimed that neutrons emitted by nuclear weapons containing plutonium would create argon-42 in the air around them and that detectors one hundred kilometers away could detect the radioactive argon carried by the wind. Although this claim is demonstrably false (the neutron flux is too small, the argon concentration is too small, and air transport over even a relatively small distance would dilute the argon to levels indistinguishable from background), the idea had captured the imagination of some people in leadership positions. The Black Sea Experiment put to the test several detectors—germanium, sodium iodide, helium-3—positioned in different locations relative to the nuclear weapon. Some were handheld and operated on the ship, some were in a helicopter, some were on a nearby boat, and some were on the dock farther away. Only the nearest detectors (on the ship and in the helicopter) detected the weapon and acquired sufficient data to verify the number of weapons on the ship. As a one-time experiment with limited measurements and no opportunity to evaluate the sensitivity of the results or to follow up with additional experiments, the Black Sea Experiment is not a model for scientific research. However, as a model of science diplomacy, it was a success. Dr. Avrorin described the Soviet scientists' concerns that gamma-detector measurements could reveal information about the design of the nuclear explosive. The chief designer of Soviet nuclear weapons personally oversaw the experiment and confirmed that the detectors could verify the presence of the weapon but could not show technical details or the configuration of the nuclear explosive. As noted, this is important because it opened the way to transparent dismantlement of nuclear weapons, demonstrating verification by nonintrusive technology. Information sharing now is critical; science diplomacy solves a laundry list of nuclear impacts Lowenthal, director of the Committee on International Security and Arms Control of the National Academy of Sciences, 2011 (Micah D., “Science Diplomacy for Nuclear Security”, http://www.usip.org/sites/default/files/SR_288.pdf, accessed 7/13/14, LLM) Some important lessons were learned from the practice of science diplomacy in difficult times between the United States and the Soviet Union/Russia over the past twenty-five years. Although the issues faced today are more complex, these lessons are still pertinent. The Cold War may be over, but the variety of threats has grown. Science diplomacy is needed now more than ever to address terrorism, the proliferation of nuclear and other potentially dangerous technologies, regional rivalries and conflicts, and a set of other critical matters.
Some of these topics are quite sensitive and officials and scientists today may wonder how the topics can be discussed in bilateral or multilateral settings, but they have to remember what has already been accomplished. Because of nuclear weapons' terrible destructive power, nations consider information about them and their potential use to be highly sensitive. But it is precisely this terrible destructive power that makes discussion including sharing of information and analyses—and that makes science diplomacy—so important. Indeed, this destructive power is what motivated those practitioners quoted in this report to succeed. This inspires practitioners of science diplomacy to continue to work together on the critical issues for nuclear security today and to find ways to reduce the threats that the world faces. All parties owe that to future generations. Science diplomacy played such a key role in helping to bridge important gaps to bring an end to the Cold War; it is time to call upon this powerful tool to address the new and vexing security challenges the world faces in the twenty-first century. 2AC AT: CPs AT: China CP – Japan DA Chinese exploration is perceived as competition with Japan – tanks solvency CCTV News, Chinese news source, 13 [8/3/13, CCTV, “China opposes Japanese suggestions on ocean gas exploration,” http://english.cntv.cn/program/china24/20130803/101942.shtml, accessed 7/13/14, TYBG] Japan’s Liberal Democratic Party on Thursday asked the Japanese government to counter China’s gas exploration in the East China Sea. China has responded by saying it will not accept Japan’s unreasonable request. China has also lodged solemn representations to the United States after the US Senate passed a resolution expressing concern over Chinese actions in the East China Sea and the South China Sea. Recent frictions between China and its neighbors in east and south-east Asia mostly originate from maritime disputes. Of these countries, Japan has been the most active when it comes to voicing its opposition to China. On Thursday, Japanese Prime Minister Shinzo Abe said his country would adopt a firm stance on the issue of China’s oil and gas exploration in the East China Sea. This was after his Liberal Democratic Party submitted to the Japanese government some tough new proposals on the matter. The LDP recommended that the Japanese Government ask the Chinese side to remove construction materials for the new facility China is building in the area. Tokyo had lodged a protest with Beijing early in July over the building of new oil and gas development facilities at the Chunxiao gas field in the vicinity of the "medium line" between the two countries. The protest was rejected by China, as it has never accepted the medium line unilaterally claimed by Japan. The LDP said in its proposals that China and Japan should start talks immediately to discuss how to develop fields not covered by a 2008 bilateral agreement. In response, Chinese Foreign Ministry spokeswoman Hua Chunying said that since Japan caused the current difficult situation in bilateral relations, it was incumbent upon it to correct its mistakes and make substantial efforts to get rid of the obstacles in the way of the development of bilateral relationship.
It’s the perception that triggers escalation – Japan won’t give concessions to China Harner, writer for Forbes, 13 [Stephen, 7/8/13, Forbes, “China's East China Sea Gas Exploration Latest Flare-Up In Japan-China Senkaku/Diaoyu Island Dispute,” http://www.forbes.com/sites/stephenharner/2013/07/08/chinas-eastchina-sea-gas-exploration-latest-flare-up-in-japan-china-senkakudiaoyu-island-dispute/, accessed 7/13/14, TYBG] The serious and increasingly dangerous rupture with China gained a new dimension–and regained the headlines–on July 5 when Abe, appearing on a Fuji Television program, expressed “deep regret” that China was moving undersea gas field exploration equipment into an area of the East China Sea “in violation of a bilateral agreement.” “I must ask China to honor our agreement,” said Abe. Abe’s criticism produced a brief flutter of comment in the Japanese media, but was quickly passed over. In Beijing, however, there was a multi-day thunderstorm. In this instance, as in so many others affecting Japan’s foreign relations–including with the U.S.–we are again witnessing from Abe a maladroitness bordering on incompetence. What is going on here? The specific issue is Chinese exploration in a section of the East China Sea close to but not over a notional mid-point line (illustrated in the graphic above) that can be drawn longitudinally (roughly north to south) through a large area of ocean and seabed that both China and Japan claim as falling within their respective 200 mile “exclusive economic zones” (EEZ). After Abe’s statement, Chinese official media published maps and diagrams documenting that activity was taking place on the Chinese side of the “mid-point line,” and that China was perfectly within its rights. A July 4 Nihon Keizai Shimbun article reported that on June 27 the second-ranking official in Japan’s foreign ministry delivered a formal diplomatic protest of China’s action to the Chinese ambassador to Japan. On July 3, Japan’s cabinet secretary, Suga Yoshihide, stressed to the press that “we will not recognize any unilateral development activities in sea areas where the two countries have overlapping claims.” Anyone with a knowledge of Chinese negotiating style could have guessed what was coming next. In response to Suga, on the same day, July 3, China’s deputy foreign ministry spokesman, Ms. Hua Chunying, announced to the press that “we are conducting exploration activities in sea area under our own administration.” Further, she continued, China has never agreed to and does not recognize any so-called “mid-point line” (my italics). Therefore, “China rejects Japan’s protest.” Ms. Hua was stating facts in denying that China had ever formally accepted the concept of a “mid-point line.” Formal acceptance would mean recognizing Japan’s EEZ claims. This will never happen, just as China will never formally recognize or accept Japan’s claims to the Senkaku/Diaoyu islands. Hence Abe was telling a highly provocative untruth when he mentioned an “agreement” with China over exploration in the East China Sea. But of course there is something more. In June 2008, Japanese and Chinese government negotiators reached tentative agreement that a Chinese gas field development project in the East China Sea called “Shirakaba” by Japan and “Chunxiao” by China should proceed based on Chinese law, but with capital provided by Japanese corporations.
Agreement in principle was also reached to establish a joint development zone in a northern area that extended across the notional “mid-point line.” In these discussions both sides set aside the issue of their respective “exclusive economic zones.” As it happened, the above tentative agreements were scheduled to be formalized in signed agreements in September 2010. However, when Japan-China political relations soured over the collision of a Chinese fishing vessel with a Japanese Coast Guard vessel, China asked for an indefinite postponement of the joint exploration agreement signing. Since then, and particularly as the Senkaku/Diaoyu island dispute has escalated, China has reverted to strict interpretation of and insistence on its EEZ rights. It is classic Chinese negotiating style to escalate rhetoric (sometimes combined with histrionic gestures) and to elaborately link otherwise seemingly unrelated issues, to bring maximum pressure on the counterparty to make concessions. Subtlety is practiced only in obfuscating sources, not in the message or desired effect. A vivid example of the style was an article penned by “scholars” in the People’s Daily a few weeks ago that called into question Japan’s sovereign claim over Okinawa. What is going on in the East China Sea is really about the Senkaku/Diaoyu island dispute and China’s determination not to de-escalate pressure for concessions from the Abe government. Abe is also under pressure from the Obama administration to show initiative in trying to resolve the issue, so that the U.S. can continue improving relations with China. That causes miscalc, military escalation and US-draw-in – tensions distort decisionmaking Smith, Senior Fellow for Japan Studies, 13 [Sheila A., April, Council on Foreign Relations, “A Sino-Japanese Clash in the East China Sea,” http://www.cfr.org/japan/sino-japanese-clash-east-china-sea/p30504, accessed 7/13/14, TYBG] Sino-Japanese tensions in the East China Sea have been building steadily since 2010, when a Chinese fishing trawler rammed two Japan Coast Guard (JCG) vessels in waters near the Senkaku/Diaoyu Islands and Japan detained the captain. Although the crisis was eventually defused, the territorial dispute came to a head again in September 2012, when Japanese prime minister Yoshihiko Noda announced his government's decision to purchase three of the five islands. The islands were privately owned, but a new wave of activism, including Chinese attempts to land on the islands and a public campaign by the Tokyo governor to purchase them himself, prompted Noda to attempt to neutralize nationalist pressures. The decision triggered widespread anti-Japanese demonstrations in China, resulting in extensive damage to Japanese companies operating there. Eventually China dampened the popular response, but it has since repeatedly stated its intent to assert its own administrative control over the disputed islands. China's Marine Surveillance agency intensified its patrols of the waters in and around the islands, and China's Bureau of Fisheries patrols followed suit. The JCG in turn increased its patrols and put them on 24/7 alert. The danger of escalation to armed conflict increased when the two militaries became directly involved. On December 13, 2012, a small Chinese reconnaissance aircraft entered undetected into Japanese airspace above the islands. The JCG alerted Japan's Air Self-Defense Force (ASDF), which scrambled fighter jets based in Naha, Okinawa; however, they were too late to intercept. 
In January, China sent its reconnaissance aircraft back toward the islands accompanied by fighter jets, but stopped short of entering Japan's airspace, and no direct aerial confrontation occurred. Japan's Maritime Self-Defense Force (MSDF) reported that a Chinese frigate locked its firing radar on the Japanese destroyer Yudachi on January 30, 2013. Chinese authorities instigated an investigation into the incident in response to Japan's protest, leading to speculation that Beijing was unaware of the ship captain's actions. Although China's Ministry of Defense later denied that the incident took place, it did acknowledge the danger such an act posed. Given current circumstances in the East China Sea, three contingencies are conceivable: first, an accidental or unintended incident in and around the disputed islands could trigger a military escalation of the crisis; second, either country could make a serious political miscalculation in an effort to demonstrate sovereign control; and third, either country could attempt to forcibly control the islands. Accidental/Unintended Military Incident Although recent incidents have sensitized China and Japan to the risk of accidental and unintended military interactions, the danger will persist while emotions run high and their forces operate in close proximity. In stressful and ambiguous times, when decision-making is compressed by the speed of modern weapons systems, the risk of human error is higher. The 2001 collision between a U.S. reconnaissance aircraft and a Chinese fighter jet near Hainan Island is a case in point, as was the intrusion of a Chinese Han submarine in Japanese territorial waters in 2004. So-called rules of engagement (ROEs), intended to guide and control the behavior of local actors, are typically general in scope and leave room for personal interpretation that may lead to actions that escalate a crisis situation. Compounding the risk of unintended escalation between Chinese and Japanese air and naval units is the unpredictable involvement of third parties such as fishermen or civilian activists who may attempt to land on the islands. Their actions could precipitate an armed response by either side. Political Miscalculation in an Effort to Demonstrate Sovereign Control Political miscalculation of either country's intent or resolve, as well as miscalculation of the U.S. position, could lead to armed conflict. First, Japan and China are already finding it difficult to read each other's actions. Past Japanese government leasing of the Senkaku/Diaoyu Islands effectively kept nationalist activists—Japanese as well as Chinese and Taiwanese—at bay. In mid-2012, however, rising nationalist sentiments during leadership transitions inflamed the dispute. This stimulated heated debate in Tokyo over how to consolidate Japanese sovereignty and was a factor in the December 2012 election of conservative prime minister Shinzo Abe, who advocated inhabiting the islands. This escalation in asserting sovereignty claims through the use of patrols, populating the islands, and perhaps even military defense of the territory could lead to heightened tensions between the two countries and whip up nationalist sentiments, potentially limiting the capacity of leaders to peacefully manage the dispute. Second, China could miscalculate U.S. interests and intentions. Since last year, U.S. policymakers have sought to lessen tensions but have also taken steps to clarify the U.S. role in deterring any coercive action by China. U.S. 
and Japanese forces have conducted regular exercises to strengthen defense of Japan's southwestern islands and maritime surveillance capabilities. Both former secretary of state Hillary Clinton and former secretary of defense Leon Panetta clearly stated that the United States will defend Japan against any aggression, and on November 29, 2012, the U.S. Senate passed a resolution accompanying the 2013 National Defense Authorization Act to demonstrate congressional support for the Obama administration's commitment to Japan's defense. As tensions escalated late last year, Washington increased its deployments in and around Japan. Early this year, as military interactions raised the potential for conflict, Clinton restated the U.S. position that it would not accept any unilateral attempt to wrest control of the islands. Still, Beijing could miscalculate Washington's commitment to defend Japan and/or seek to test that commitment. Finally, U.S. assurances could lead Tokyo to overestimate Washington's response and to act in a manner that would increase the chance for confrontation. To date, however, Tokyo has tended to err on the side of caution in planning and exercises with U.S. forces, and it is unlikely Japan would act without evidence of U.S. assistance. Deliberate Action to Forcibly Establish Control Over Islands Although this seems highly unlikely today, either party could take military action to assert sovereignty over the disputed islands. Rising domestic pressures or an unexpected opportunity for a fait accompli could lead to a decision by either government to establish military control over the territory. AT: China CP – NB Link China relies on the NSF for ocean funding – still links to the net benefit Kashyap, Senior Editor at the International Business Times, 14 [Arjun, 1-27-14, “China-Led International Ocean Exploration Mission To Look For Oil In South China Sea, Including In Disputed Regions”, http://www.ibtimes.com/china-led-international-ocean-explorationmission-look-oil-south-china-sea-including-disputed, 7-13-14, FCB] In a first-of-its-kind exercise for the world’s second-largest economy, an international scientific expedition to look for oil in the South China Sea will set sail from Hong Kong on Tuesday, according to the South China Morning Post. The trip is part of the latest edition of the decade-long International Ocean Discovery Program that will run from 2013 to 2023. The IODP was launched by the U.S. in the 1960s, and its latest effort will include 31 scientists from 10 countries drilling at three different sites for two months. "Oil and gas fields lie close to the coast, but the key is to open the treasure box buried beneath the basin," Wang Pinxian, a marine geologist and member of the Chinese Academy of Sciences, told the Post Monday. The IODP invited proposals from 26 member nations and, while a proposal to drill in the controversial South China Sea -- first proposed by China in 2008 -- was not the most popular one, it was reportedly mainly chosen because the Chinese government agreed to pick up 70 percent, or $6 million, of the mission’s tab. The NSF, which used to contribute 70 per cent of the Joides Resolution's expenses, cut its annual ocean drilling budget to $50 million last year, said David Divins, director of the IODP’s ocean drilling program. 
The expedition will sail aboard the American scientific drill ship, Joides Resolution, operated by the National Science Foundation, or NSF, the Post reported, adding that the voyage will take the team to waters claimed variously by China, the Philippines and Vietnam. So far, the ship has received permission from the Philippines and Beijing but is waiting for a response from the Vietnamese government to drill at a site in the southwest part of the South China Sea, the Post reported, citing Divins, adding that the expedition may have to opt for an alternative site. Tensions stemming from China's energy interests are a constant undercurrent to the region's geopolitics. For instance, in May 2012, China began drilling to new depths in the South China Sea, 200 miles southeast of Hong Kong, with the launch of its first deep-water oil drilling rig, triggering tensions between Manila and Beijing. In December 2012, China had asked Vietnam to stop exploring for oil in disputed areas of the South China Sea and demanded that the latter not harass Chinese fishing boats. However, findings of the IODP expedition, which includes 13 scientists from mainland China, nine from the U.S. and one from Taiwan, will reportedly be shared around the world, including with countries that are not part of the program. AT: China CP – Perm The permutation solves – US-China scientific cooperation is mutually beneficial Chu, Secretary for the DoE, 11 (Steven, January, “U.S.-China Clean Energy Cooperation”, http://www.us-chinacerc.org/pdfs/US_China_Clean_Energy_Progress_Report.pdf, accessed 7/13/14, LLM) Science is not a zero-sum game. In my experience as a scientist, collaborations with other research groups greatly accelerated our progress. Similarly, cooperation between the United States and China can greatly accelerate progress on clean energy technologies, benefiting both countries. As the world’s largest producers and consumers of energy, the United States and China share many common challenges and common interests. Our clean energy partnership with China can help boost America’s exports, creating jobs here at home, and ensure that our country remains at the forefront of technology innovation. At the U.S. Department of Energy, we are committed to working with Chinese partners to promote a sustainable energy future. Working together, we can accomplish more than acting alone. The United States and the People’s Republic of China have worked together on science and technology for more than 30 years. Under the Science and Technology Cooperation Agreement of 1979, signed soon after normalization of diplomatic relations, our two countries have cooperated in a diverse range of fields, including basic research in physics and chemistry, earth and atmospheric sciences, a variety of energy-related areas, environmental management, agriculture, fisheries, civil industrial technology, geology, health, and natural disaster planning. More recently, in the face of emerging global challenges such as energy security and climate change, the United States and China entered into a new phase of mutually beneficial cooperation. In June 2008, the U.S.-China Ten Year Framework for Cooperation on Energy and the Environment was created and today it includes action plans for cooperation on energy efficiency, electricity, transportation, air, water, wetlands, nature reserves and protected areas. In November 2009, President Barack Obama and President Hu Jintao announced seven new U.S.-China clean energy initiatives during their Beijing summit. 
In doing so, the leaders of the world’s two largest energy producers and consumers affirmed the importance of the transition to a clean and low-carbon economy—and the vast opportunities for citizens of both countries in that transition. AT: Japan CP – China DA Japanese exploration is perceived as competition with China – tanks solvency CCTV News, Chinese news source, 13 [8/3/13, CCTV, “China opposes Japanese suggestions on ocean gas exploration,” http://english.cntv.cn/program/china24/20130803/101942.shtml, accessed 7/13/14, TYBG] Japan’s Liberal Democratic Party on Thursday asked the Japanese government to counter China’s gas exploration in the East China Sea. China has responded by saying it will not accept Japan’s unreasonable request. China has also lodged solemn representations to the United States after the US Senate passed a resolution expressing concern over Chinese actions in the East China Sea and the South China Sea. Recent frictions between China and its neighbors in east and south-east Asia mostly originate from maritime disputes. Of these countries, Japan has been the most active when it comes to voicing its opposition to China. On Thursday, Japanese Prime Minister Shinzo Abe said his country would adopt a firm stance on the issue of China’s oil and gas exploration in the East China Sea. This was after his Liberal Democratic Party submitted to the Japanese government some tough new proposals on the matter. The LDP recommended that the Japanese Government ask the Chinese side to remove construction materials for the new facility China is building in the area. Tokyo had lodged a protest with Beijing early in July over the building of new oil and gas development facilities at the Chunxiao gas field in the vicinity of the "medium line" between the two countries. The protest was rejected by China, as it has never accepted the medium line unilaterally claimed by Japan. The LDP said in its proposals that China and Japan should start talks immediately to discuss how to develop fields not covered by a 2008 bilateral agreement. In response, Chinese Foreign Ministry spokeswoman Hua Chunying said that since Japan caused the current difficult situation in bilateral relations, it was incumbent upon it to correct its mistakes and make substantial efforts to get rid of the obstacles in the way of the development of bilateral relationship. It’s the perception that triggers escalation – China won’t give concessions to Japan Harner, writer for Forbes, 13 [Stephen, 7/8/13, Forbes, “China's East China Sea Gas Exploration Latest Flare-Up In Japan-China Senkaku/Diaoyu Island Dispute,” http://www.forbes.com/sites/stephenharner/2013/07/08/chinas-eastchina-sea-gas-exploration-latest-flare-up-in-japan-china-senkakudiaoyu-island-dispute/, accessed 7/13/14, TYBG] The serious and increasingly dangerous rupture with China gained a new dimension–and regained the headlines–on July 5 when Abe, appearing on a Fuji Television program, expressed “deep regret” that China was moving undersea gas field exploration equipment into an area of the East China Sea “in violation of a bilateral agreement.” “I must ask China to honor our agreement,” said Abe. Abe’s criticism produced a brief flutter of comment in the Japanese media, but was quickly passed over. In Beijing, however, there was a multi-day thunderstorm. In this instance, as in so many others affecting Japan’s foreign relations–including with the U.S.–we are again witnessing from Abe a maladroitness bordering on incompetence. What is going on here? 
The specific issue is Chinese exploration in a section of the East China Sea close to but not over a notional mid-point line (illustrated in the graphic above) that can be drawn longitudinally (roughly north to south) through a large area of ocean and seabed that both China and Japan claim as falling within their respective 200 mile “exclusive economic zones” (EEZ). After Abe’s statement, Chinese official media published maps and diagrams documenting that activity was taking place on the Chinese side of the “mid-point line,” and that China was perfectly within its rights. A July 4 Nihon Keizai Shimbun article reported that on June 27 the second ranking official in Japan’s foreign ministry delivered a formal diplomatic protest of China’s action to the Chinese ambassador to Japan. On July 3, Japan’s cabinet secretary, Suga Yoshihide, stressed to the press that “we will not recognize any unilateral development activities in sea areas where the two countries have overlapping claims.” Anyone with a knowledge of Chinese negotiating style could have guessed what was coming next. In response to Suga, on the same day, July 3, China’s deputy foreign ministry spokesman, Ms. Hua Chunying, announced to the press that “we are conducting exploration activities in sea area under our own administration.” Further, she continued, China has never agreed to and does not recognize any so-called “mid-point line” (my italics). Therefore, “China rejects Japan’s protest.” Ms. Hua was stating facts in denying that China had ever formally accepted the concept of a “mid-point line.” Formal acceptance would mean recognizing Japan’s EEZ claims. This will never happen, just as China will never formally recognize or accept Japan’s claims to the Senkaku/Diaoyu islands. Hence Abe was telling a highly provocative untruth when he mentioned an “agreement” with China over exploration in the East China Sea. But of course there is something more. In June 2008, Japanese and Chinese government negotiators reached tentative agreement that a Chinese gas field development project in the East China Sea called “Shirakaba” by Japan and “Chunxiao” by China should proceed based on Chinese law, but with capital provided by Japanese corporations. Agreement in principle was also reached to establish a joint development zone in a northern area that extended across the notional “mid-point line.” In these discussions both sides set aside the issue of their respective “exclusive economic zones.” As it happened, the above tentative agreements were scheduled to be formalized in signed agreements in September 2010. However, when Japan-China political relations soured over the collision of a Chinese fishing vessel with a Japanese Coast Guard vessel, China asked for an indefinite postponement of the joint exploration agreement signing. Since then, and particularly as the Senkaku/Diaoyu island dispute has escalated, China has reverted to strict interpretation of and insistence on its EEZ rights. It is classic Chinese negotiating style to escalate rhetoric (sometimes combined with histrionic gestures) and to elaborately link otherwise seemingly unrelated issues, to bring maximum pressure on the counterparty to make concessions. Subtlety is practiced only in obfuscating sources, not in the message or desired effect. A vivid example of the style was an article penned by “scholars” in the People’s Daily a few weeks ago that called into question Japan’s sovereign claim over Okinawa. 
What is going on in the East China Sea is really about the Senkaku/Diaoyu island dispute and China’s determination not to de-escalate pressure for concessions from the Abe government. Abe is also under pressure from the Obama administration to show initiative in trying to resolve the issue, so that the U.S. can continue improving relations with China. That causes miscalc, military escalation and US-draw-in – tensions distort decisionmaking Smith, Senior Fellow for Japan Studies, 13 [Sheila A., April, Council on Foreign Relations, “A Sino-Japanese Clash in the East China Sea,” http://www.cfr.org/japan/sino-japanese-clash-east-china-sea/p30504, accessed 7/13/14, TYBG] Sino-Japanese tensions in the East China Sea have been building steadily since 2010, when a Chinese fishing trawler rammed two Japan Coast Guard (JCG) vessels in waters near the Senkaku/Diaoyu Islands and Japan detained the captain. Although the crisis was eventually defused, the territorial dispute came to a head again in September 2012, when Japanese prime minister Yoshihiko Noda announced his government's decision to purchase three of the five islands. The islands were privately owned, but a new wave of activism, including Chinese attempts to land on the islands and a public campaign by the Tokyo governor to purchase them himself, prompted Noda to attempt to neutralize nationalist pressures. The decision triggered widespread anti-Japanese demonstrations in China, resulting in extensive damage to Japanese companies operating there. Eventually China dampened the popular response, but it has since repeatedly stated its intent to assert its own administrative control over the disputed islands. China's Marine Surveillance agency intensified its patrols of the waters in and around the islands, and China's Bureau of Fisheries patrols followed suit. The JCG in turn increased its patrols and put them on 24/7 alert. The danger of escalation to armed conflict increased when the two militaries became directly involved. On December 13, 2012, a small Chinese reconnaissance aircraft entered undetected into Japanese airspace above the islands. The JCG alerted Japan's Air Self-Defense Force (ASDF), which scrambled fighter jets based in Naha, Okinawa; however, they were too late to intercept. In January, China sent its reconnaissance aircraft back toward the islands accompanied by fighter jets, but stopped short of entering Japan's airspace, and no direct aerial confrontation occurred. Japan's Maritime Self-Defense Force (MSDF) reported that a Chinese frigate locked its firing radar on the Japanese destroyer Yudachi on January 30, 2013. Chinese authorities instigated an investigation into the incident in response to Japan's protest, leading to speculation that Beijing was unaware of the ship captain's actions. Although China's Ministry of Defense later denied that the incident took place, it did acknowledge the danger such an act posed. Given current circumstances in the East China Sea, three contingencies are conceivable: first, an accidental or unintended incident in and around the disputed islands could trigger a military escalation of the crisis; second, either country could make a serious political miscalculation in an effort to demonstrate sovereign control; and third, either country could attempt to forcibly control the islands. 
Accidental/Unintended Military Incident Although recent incidents have sensitized China and Japan to the risk of accidental and unintended military interactions, the danger will persist while emotions run high and their forces operate in close proximity. In stressful and ambiguous times, when decision-making is compressed by the speed of modern weapons systems, the risk of human error is higher. The 2001 collision between a U.S. reconnaissance aircraft and a Chinese fighter jet near Hainan Island is a case in point, as was the intrusion of a Chinese Han submarine in Japanese territorial waters in 2004. So-called rules of engagement (ROEs), intended to guide and control the behavior of local actors, are typically general in scope and leave room for personal interpretation that may lead to actions that escalate a crisis situation. Compounding the risk of unintended escalation between Chinese and Japanese air and naval units is the unpredictable involvement of third parties such as fishermen or civilian activists who may attempt to land on the islands. Their actions could precipitate an armed response by either side. Political Miscalculation in an Effort to Demonstrate Sovereign Control Political miscalculation of either country's intent or resolve, as well as miscalculation of the U.S. position, could lead to armed conflict. First, Japan and China are already finding it difficult to read each other's actions. Past Japanese government leasing of the Senkaku/Diaoyu Islands effectively kept nationalist activists—Japanese as well as Chinese and Taiwanese—at bay. In mid-2012, however, rising nationalist sentiments during leadership transitions inflamed the dispute. This stimulated heated debate in Tokyo over how to consolidate Japanese sovereignty and was a factor in the December 2012 election of conservative prime minister Shinzo Abe, who advocated inhabiting the islands. This escalation in asserting sovereignty claims through the use of patrols, populating the islands, and perhaps even military defense of the territory could lead to heightened tensions between the two countries and whip up nationalist sentiments, potentially limiting the capacity of leaders to peacefully manage the dispute. Second, China could miscalculate U.S. interests and intentions. Since last year, U.S. policymakers have sought to lessen tensions but have also taken steps to clarify the U.S. role in deterring any coercive action by China. U.S. and Japanese forces have conducted regular exercises to strengthen defense of Japan's southwestern islands and maritime surveillance capabilities. Both former secretary of state Hillary Clinton and former secretary of defense Leon Panetta clearly stated that the United States will defend Japan against any aggression, and on November 29, 2012, the U.S. Senate passed a resolution accompanying the 2013 National Defense Authorization Act to demonstrate congressional support for the Obama administration's commitment to Japan's defense. As tensions escalated late last year, Washington increased its deployments in and around Japan. Early this year, as military interactions raised the potential for conflict, Clinton restated the U.S. position that it would not accept any unilateral attempt to wrest control of the islands. Still, Beijing could miscalculate Washington's commitment to defend Japan and/or seek to test that commitment. Finally, U.S. assurances could lead Tokyo to overestimate Washington's response and to act in a manner that would increase the chance for confrontation. 
To date, however, Tokyo has tended to err on the side of caution in planning and exercises with U.S. forces, and it is unlikely Japan would act without evidence of U.S. assistance. Deliberate Action to Forcibly Establish Control Over Islands Although this seems highly unlikely today, either party could take military action to assert sovereignty over the disputed islands. Rising domestic pressures or an unexpected opportunity for a fait accompli could lead to a decision by either government to establish military control over the territory. AT: Japan CP – Perm The permutation solves – US-Japan cooperation is key to effective ocean research and data-sharing OPRF, Ocean Policy Research Foundation, 9 [4/17/9, OPRF, “United States-Japan Seapower Alliance for Stability and Prosperity on the Oceans,” http://www.sof.or.jp/en/report/pdf/200906_seapower.pdf, accessed 7/13/14, TYBG] To provide against shortages of resources, energy, and food supplies likely to occur on a global scale, the major seafaring nations of the United States and Japan should play leading roles in the development of living and non-living resources in the seabed and continental shelves, as well as in the development of ocean energy resources and seawater potential. Both countries can and should help battle the global economic crisis by demonstrating their commitment to a “Blue New Deal” policy based on these precepts and by promoting development of the oceans on the condition of sound environmental stewardship in the maritime domain as well as increasing job creation. The United States and Japan need to cooperate with each other where possible in the development of technologies and funding for the exploration and exploitation of seabed resources and marine energy development in order to bring these industries into active production. Research on the oceans, the accumulation of data, its use and sharing, and human resource exchanges are important for the effective promotion and development of technology. To facilitate this, the establishment of a joint data center and R&D center for research and development of marine resources, as well as joint construction and use of a marine scientific survey ship and platform for exploration and exploitation, are desirable. Furthermore, opportunities for the exchange and publicizing of technologies between the two countries should be created in maritime industries, which support such research and development. The perm solves – US-Japan scientific cooperation is effective and solves Ministry of Foreign Affairs of Japan, 14 [4/23/14, Ministry of Foreign Affairs of Japan, “Extension of the Agreement between Japan and the US on Cooperation in Research and Development in Science and Technology,” http://www.mofa.go.jp/press/release/press22e_000015.html, accessed 7/13/14, TYBG] On April 23, in Tokyo, “Protocol extending the Agreement between the Government of Japan and the Government of the United States of America on Cooperation in Research and Development in Science and Technology” was signed between Mr. Fumio Kishida, Minister for Foreign Affairs, on the Japanese side and Her Excellency Caroline Bouvier Kennedy, Ambassador Extraordinary and Plenipotentiary to Japan, on the U.S. side. Since this Agreement was concluded on June 20, 1988 and successively extended, the cooperation in science and technology between both sides has been progressing smoothly. 
The effective period of this Agreement, which was extended in 2004 and expires this July 19, is to be extended for ten years from this July 20 by the signing of the Protocol. At the signing ceremony, Minister Kishida stated that cooperation between Japan and the U.S., both of which being leading countries in the field of science and technology, has a significant value because it had contributed to strengthen the Japan-U.S. Alliance and further development of society, economy, and humankind in the whole world. Ambassador Kennedy stated that the both countries’ world class cooperation in the field of science and technology has the broadest scope on complicated issues, and many innovated technologies today which used to be mere sketches in labs 25 years ago when the U.S. and Japan first signed the Agreement have been developed in various fields ranging from deep ocean to space. Through these statements, they both expressed their hope for further development of science and technology cooperation between Japan and the U.S. in the future. 2AC AT: Ks 2AC – FW – Science Good Science is beneficial for everyone, and student participation is key to personal skills and real world applications - it allows us to keep existential powers in check and puts individual agency as a priority - engaging these questions is important for everyday life Colarusso, High school physics teacher and public defender in Massachusetts, 8 (David, “The Birthright of Science”, http://www.davidcolarusso.com/educator/, accessed 7/12/14, LLM) When I took a Fulbright teacher exchange to Edinburgh, Scotland, I left behind an astronomy course I had created. At the request of my replacement, I wrote an open letter to the class answering the question "Why Astronomy?" It was intended as an introduction to the course. In reality, however, it was my answer to the general question, "why should everyone study science?" The highlights, however, come down to this: (1) it can save the world; (2) it keeps us honest; and (3) it is among the most human of activities--it is our birthright. The letter started by recounting a conversation I had in college while manning an open house at the observatory. A visitor expressed her belief that investments in astronomy amounted to a waste. "Wouldn't we be better off taking care of problems here on earth before worrying about all this space stuff?" It was a fair question considering the expense of modern astronomy, and it is a question often asked of pure research in general. We no longer use the sun to tell time. The stars no longer signal the harvest, nor do we use them to navigate... I was tempted to talk about "spin-offs," perhaps mentioning the role of artificial satellites in everything from GPS to TV. However, a poster of the planets caught my eye, and there we sat between Venus and Mars--two cautionary tales that may yet save humanity. I explained how studying the atmosphere of Mars helped alert us to the danger of nuclear winter. Since the inception of nuclear weapons, a protracted nuclear war had always threatened immense horrors, but before the discovery of nuclear winter, it wasn't clear that it could mean the end of the world. This realization helped drive much of the work done towards arms control in the latter half of the twentieth century. Likewise, the Venusian atmosphere helped underline the danger of global climate change, offering an example of a runaway greenhouse effect. In this way, astronomy and pure research helped alert us to two of humanity's most pressing threats. 
It is not yet clear if these warnings will be enough to save humanity, but at least, we now know of the dangers that come with certain choices. In short, science can save the world. This is a fine argument for why someone should study science, but it falls short of why everyone should. For that we have to dig deeper, and for my students in a public high school, it had to do with their preparation for entering the electorate. Shortly after 9/11, I noted anthrax improperly cited as a virus in two national news outlets. In the same reports the use of antibiotics was mentioned as a treatment, which is odd considering antibiotics don't kill viruses. When a panicked public is actually threatened by a viral pathogen, who wants to bet they demand antibiotics? It happens all the time with the flu--a viral infection. Yet patients still demand antibiotics. Such improper use can spur the development of drug resistant bacteria, a big problem for modern medicine. However, doctors have been known to prescribe unnecessary antibiotics. What if their patients went to someone else? Maybe this doesn't seem like a big deal, but at the time of my letter, the US was funding the development of a missile defense "shield" for shooting down ballistic missiles from rogue states, despite the fact that the technology had never been proven and the obvious loophole that bombs don't have to be delivered via missile. Is this a question of antibiotics working on viruses, of politicians worried their constituents might vote for someone else? How can we tell? Who am I to say that they can't shoot down those missiles? It doesn't seem too hard. Heck we can land a man on the moon... We live in a world deeply dependent on science and technology. As members of a deliberative democracy, we have a duty to help answer the questions that such a world presents. Science is not so much a collection of knowledge as a method for gaining it. The study of any science teaches you how to think. It hones your mental tool kit, strengthening a set of skills necessary not only for the continuance of the republic but for the betterment of your self-interest. It's good to put things to the test, and the same skills you use to question nature can be put to use buying a used car or picking a president. Admittedly, the misuse of science and technology are responsible for many of the problems of the modern world, but it is because of this that science is needed to help address them, and historically it has been the scientists themselves who have alerted us to the potential problems. The methods of science work to keep us honest. They're not perfect, but then again, we're only human. So maybe you're willing to accept that on some practical level everyone should care about science. Sure, it's saved the world several times over, its methods can help us make decisions, and its tools can be used to hold those in power accountable, but it seems like going to the dentist or the gym--something that's "good for you" but not your first choice for fun. Some people even look upon science as dehumanizing. It places limits on what is and isn't possible; it reminds us that we can't always trust ourselves and that all the wishing in the world doesn't make something so. This may seem at first an odd tangent, but consider for a moment what it is to be human. We could muse over this for ages, but I'm brought back to the work of Douglas Hofstadter, who sees the existence of self-reference and model making as key. 
The great discovery of astronomy is that the heavens and earth are made out of the same stuff. The only difference between a pile of simple elements and a person is the pattern. So how do we explain what makes us alive, what makes us human? One extreme suggests that we cannot, that life is the realm of the divine and that we are ensouled with something we can never hope to understand, something different in kind from normal matter. The other extreme posits that we are nothing more than the collective interactions of innumerable atoms. Both fall short of experience, either ignoring the input of experimental evidence or the verisimilitude of the human condition. Hofstadter's concern is consciousness, and his answer is deceptively simple and a tad reminiscent of Descartes. To over simplify, a conscious entity is one who models the world in which it lives and who finds it necessary in constructing these models to postulate the existence of self. If you're really interested, you should check out his Pulitzer winning work Godel, Escher, Bach. However, what's important for us is that such a definition puts model making at the center of consciousness, and it is consciousness which separates us from other collections of atoms. What is science if not the explicit construction, evaluation, and application of descriptive and predictive models of our world? The undertaking of science is inescapably human. Along with the production of emotive models of the world (art), science is humanity made tangible. Scientific thinking may save the world, it may make modern technology and societies possible, it may provide an essential component for democratic governance, but more than anything, it is your birthright. 2AC – AT: Cap The aff resolves the underlying root causes of capitalism – endorsing a shift away from profit-driven applied science towards pure research is key Langley, PhD in Neurobiology, and Parkinson, bachelor’s degree in physics and electronic engineering, and a doctorate in climate science, 9 (Chris and Stuart, October “Science and the corporate agenda SGR Promoting ethical science, design and technology The detrimental effects of commercial influence on science and technology” http://www.statewatch.org/news/2009/oct/scientists-for-global-responsibillty-report.pdf, accessed 7/3/14, LLM) ‘Pure’ science (there is not strictly speaking ‘pure’ technology or engineering) usually appears in the R&D statistics of government (or other funders of research) as a category which reflects the open-ended pursuit of knowledge. Pure research tends to be considered as part of curiosity-driven work which is undertaken by scientists in both public and private laboratories – its aim being to provide an ‘understanding’ of a phenomenon. In contrast, ‘applied’ research aims at producing an intervention – such as a drug or new material – to address problems or develop a new approach. ‘Pure’, ‘fundamental’ or ‘basic’ research is defined officially as: “….experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view” (OECD 2002). Universities have been seen historically as institutions in which such predominantly ‘pure’ research was undertaken to discover knowledge for a broadly defined ‘public good’. Such knowledge would be a source of objective information for the public, and could inform policy-makers in areas such as public health or environmental protection. 
However these goals can be marginalised by the involvement of commercial interests wedded to short-term economic return (Ravetz 1996; Washburn 2005). A series of profound changes in the UK have altered how people perceive the role and activities of universities in society. These changes have affected what research is undertaken; for whom and why; and the proportion of research that can be described as ‘pure’. In this climate many, especially in government, have begun to regard ‘pure’ research as a luxury. ‘Applied’ research is usually defined as research that has a clear set of narrowly-defined objectives, which guide its programme of activities. There is generally little opportunity to seek data outside this defined set of end-points. ‘Applied’ research frequently has economic gain and profit as its predominant focus – but can also be related to a specific social or environmental goal such as curing a disease, reducing greenhouse gas emissions or increasing crop yields. Superficially then one of the key differences between ‘pure’ and ‘applied’ research is how the goals of the research are defined and who is likely to benefit from the products of that research. The methods and scientific activities in ‘pure’ and ‘applied’ research are essentially the same. The research activity tabled below comprises both ‘applied’ and ‘basic’ SET activities undertaken by the main sectors in the UK. Traditionally the Research Councils predominantly supported the more ‘pure’ form of research – much of which had a broadly defined set of end-points. In addition the Research Councils were expected to provide funding not coloured by the political perspectives of the government of the day – the Haldane principle1. While in the early days of the Research Councils some of the funding they distributed was for technological innovation and hence definable as ‘applied’, the proportion of their funding activities that is directed at economically defined objectives has increased in the last 20 years (see Moriarty 2008). SET has significant potential to provide tools that can be used, through technological development for instance, to contribute to social justice or to help to address issues such as resource depletion, cleaner energy, pollution and environmental degradation (Ravetz 1996). However, there is a large body of research literature which shows that the ability of SET to fulfil that potential – its ultimate role in society – depends upon the social structure and power relationships existing within that society. Profit-driven activities and mechanisms such as intellectual property rights2, patents and funding can often act against the public interest and bring benefit to a very few without increasing the public benefit. SET has a number of mechanisms in place – with associated reliable methods and data – designed to help reduce the influence of special interests with the potential to introduce bias, for example those of the funder. Strict adherence to these mechanisms – which include peer review, free exchange of data and transparency – has traditionally been a prerequisite for practising SET. However, such processes must be observed by all involved in publishing and experimental protocols, for example, so as to permit data to be assessed for its reliability. 
Pure science comes first, especially in the context of profit-driven research Oates, PhD in biology and biotechnology; currently deputy director of undergraduate education at the National Science Foundation, 13 [Karen, 3-7-14, Huffington Post, “The Importance of Basic Research”, http://www.huffingtonpost.com/karen-kashmanian-oates-phd/science-role-models_b_2821942.html, accessed 7-5-14, TYBG] This is science's newest Golden Age. Young people today are inspired by generational heroes like Steve Jobs and Mark Zuckerberg that were filled in the relative recent past by the likes of Michael Jordan and Mick Jagger. The fact that today's students can dream of emulating role models who achieved their status using their minds and curiosity is a good thing. However, there is one significant drawback. The rock star status of today's scientific celebrities encourages aspiring scientists to focus on the retail possibilities that can result in fast fame and wealth. While understandable, this unwittingly neglects a crucial part of the scientific equation -- basic research. For example, let's look at the way the music industry has changed over the last decade or so. Instead of going to a record store, most people now get their music electronically via MP3 files through an online store like iTunes, and download it to portable MP3 players like iPods. Each of these products -- MP3s, iTunes and iPods -- was created to fill a specific commercial void. Scientists identified a need and developed a product. That is applied research. But these would not exist if not for the anonymous scientists at the Swiss laboratory CERN whose research led to the development of the internet, or the no-name physicists in the 1920s whose abstract discoveries in electronics and sub-particles paved the way for today's computers. These unheralded breakthroughs are products of basic research. Basic research is the foundation on which applied research is built, and feeds the pipeline for the products and services we consume. But too few of today's and tomorrow's scientists are showing interest in laboring unknown in the back labs of basic research. The money and the notoriety, it seems, comes from advancements championed through applied research. Compounding the problem are the funders. America's top companies used to provide significant dollars to basic research, recognizing it is a prerequisite for innovation that led to viable commercial products, among them the transistor, nylon and Teflon. But basic research is expensive, time consuming and there are no guarantees of a billion-dollar breakthrough. Without the robust support of private companies like The Bell Labs and Dupont, the home grown pipeline begins to run dry. The financial pressure then falls squarely on government funding and university research. When public dollars are being used, there is frequent pressure to focus on applied research, rather than appropriate revenues for experimentation with no known conclusion. Earlier this week, an advisory panel recommended to federal agencies shutting down the Brookhaven National Laboratory in New York, home of the last remaining particle collider in the U.S., because of tight budgets. The collider smashes gold ions and protons together, which enables scientists to study the formation of the universe. Research like this is too important to be penny foolish. On a recent trip to Israel, I met with the head of the Weizmann Institute of Science, the country's leading research institution. 
Their students and fellows focus almost exclusively on basic research. Weizmann is Israel's smallest university, yet it is one of the top five highest earning institutions in the world because of its patents and their subsequent commercialization. The United States, and its stable of excellent colleges and universities, needs to learn from the Weizmann model. We know basic research is valuable. Weizmann shows us it can be profitable, too. One of my role models is Mary-Claire King. A researcher who spent nearly 20 years studying breast cancer, she faced a barrage of criticism for wasting time and money. Eventually she discovered the breast cancer gene, which has helped tens of millions of people survive breast cancer. Her stubbornness and perseverance in basic research saved lives and resulted in billions of dollars in direct and indirect economic impact. We need more scientists like Mary-Claire King. Yet it is doubtful many students who are planning on careers in science have heard of her or are planning to emulate her. But she, and countless anonymous basic researchers, unquestionably had as great an impact on their future careers as Jobs and Zuckerberg and the other rock stars they one day hope to follow. No link – pure science endorses a separation of knowledge and capital – only applied science operates within the sphere of capitalism Lucier, Department of History, Brown University, 12 [Paul, September, 2012, “The Origins of Pure and Applied Science in Gilded Age America,” Isis, Vol. 103, No. 3, pg. 527-536, TYBG] "Pure science" and "applied science" were both products of Gilded Age America and thus they were often conjoined—"pure" and "applied." But they were also distinct concepts whose respective proponents held very different visions for the future. An appeal to "pure science" bespoke a pessimism about the corrupting influence of money and materialism. Rowland and his ilk feared for American science, and his "Plea" was a passionate proposal for reform. To create a science of physics—or, more generally, to create any science—Americans had first to fund and equip "first class" universities, with well-paid and light-teaching-load professorships for the very best researchers. Such scientists would be judged (and, ideally, admired) for the quality of their research as much as for the content of their character. The public would benefit by the advancement of knowledge and, in time, by the "applications of science." Nonetheless, "pure science" envisioned an opposition of interests, a moral economy in which knowledge and commerce should not mix. "Applied science" bespoke an optimism about the ability of individuals to manage money and its allure. Bell and his cohort believed that research could be genuine and useful; patents were emblems of good science and material goods. A combination of interests was possible and even encouraged. This kind of moral economy was also evident in government agencies, where "applied science" meant a selflessness or duty to others, before a singular and selfish pursuit of one's own interests. "Pure" and "applied" thus represented an essential tension in the relations between the search for knowledge and the pursuit of profit in a capitalist society. Applied science is heavily dominated by capitalism in academia – the aff endorses pure science as an alternative Mazzolini, Assistant Professor, Department of English at Virginia Tech, 3 [Elizabeth, "REVIEW OF ACADEMIC CAPITALISM: POLITICS, POLICIES AND THE ENTREPRENEURIAL UNIVERSITY," Workplace, No. 10, pg. 
196-198, TYBG] 2. Academic capitalism is as sweeping as the globalization to which it has been a compulsory response. The term describes the phenomenon of universities' and faculty's increasing attention to market potential as research impetus. According to Slaughter and Leslie, globalization has efficiently linked prestige to research funding to marketability. Slaughter and Leslie point out that federal research and development policies have, especially since World War II, emphasized the technological as being key for global competitiveness, so that academic capitalism is most visible in applied science and technology departments. There is a trickle-down effect for the humanities, in an increasing reliance on communication training, valuable in corporate settings. In other words, the humanities are useful only insofar as they support the most marketable research coming out of the university. 3. Academic Capitalism's geographic scope, encompassing four English-speaking countries (the United States, the United Kingdom, Canada, and Australia), supports its global-scale argument, and moreover reinforces arguments about governmental policies from a non-Americo-centric point of view. This combined with its broad temporal scope makes the book's argument about globalization dauntingly convincing. Oddly, much of the support for its case comes in the extremely local form of faculty interviews at Australian universities. Using these individual perceptions as evidence perhaps costs Slaughter and Leslie something in terms of prescriptive foothold, and relegates them to a rage-or-resignation logic that belies what they gain in documenting a policy trend. 4. The history of the present that Slaughter and Leslie illustrate in Academic Capitalism is a compelling picture of a less-than-ideal form for higher education. Without detracting from the force of that illustration, Slaughter and Leslie rely heavily on the idealistic model of research assumed by their study before it even began. If the global economy brought a flood of new and more intense kinds of investments in competition between nations, Slaughter and Leslie's antediluvian university was one characterized by scholars whose work was animated by pure love for knowledge, unfettered by the cynicizing bonds of market application. On this view, the university was a bastion of pure inquiry, independent and protected from the nasty outside world where people do things in order to make money. Certainly, the book seems to assume, there would be no such self-interest in an academy if left to its own devices, supported by the plenitude of unconditional public support, without the dynamic introduced by the encroaching global economy and its governmental responses. In order to compete in the global marketplace, Academic Capitalism points out that governments must ensure that their countries develop applicable and marketable goods. Universities have become the less expensive surrogates of corporate R & D departments for those goods. FW Cards Ocean Exploration Good The scientific epistemology of ocean exploration is essential to proper social understandings of ocean phenomena. The sea is not a metaphor. Steinberg, Professor of Geography at Florida State, 13 (Philip E. Steinberg is Professor of Geography at Florida State University and Marie Curie International Incoming Fellow at Royal Holloway, University of London, “Of other seas: metaphors and materialities in maritime regions”, Atlantic Studies, 2013, Vol. 10, No. 
2, http://dx.doi.org/10.1080/14788810.2013.785192) The sea is not a metaphor. So asserts Hester Blum in the first sentence of her agenda-setting article, "The Prospect of Oceanic Studies."1 Blum goes on to identify a fundamental flaw in the bulk of ocean-themed literature, maritime history, analytical work on cultural attitudes toward the ocean, and a raft of scholarship in cultural studies in which the fluvial nature of the ocean is used to signal a world of mobilities, betweeness, instabilities, and becomings. While all of these perspectives on the sea serve a purpose in that they suggest ways for theorizing an alternative ontology of connection, Blum cautions that they fail to incorporate the sea as a real, experienced social arena. Instead, she argues for a perspective that "draws from the epistemological structures provided by the lives and writings of those for whom the sea was simultaneously workplace, home, passage, penitentiary, and promise" and that is thereby "attentive to the material conditions and praxis of the maritime world."2 I applaud Blum's aversion to those who would reduce the ocean to a metaphorical space of connection; indeed, in the first part of this article I amplify her comments in this regard. At the same time, however, I find her alternative the study of works that emerge from the actual, material encounters of humans with the sea somewhat wanting. While the sea is a social (or human) space a "social construction" it is not just a social construction.3 Indeed, human encounters with the sea are, of necessity, distanced and partial. The encounter from the shore, from the ship, from the surface, or even from the depths, while laden with affective feelings, captures only a fraction of the sea's complex, four-dimensional materiality.4 To be certain, the combination of emotional intensity with material distance that characterizes our understanding of the sea has made for some excellent literature.5 Art, after all, thrives on the distance between affective and cognitive understandings.6 This tension also happens to have led to some relatively enlightened environmental management practices.7 But the partial nature of our encounter with the ocean necessarily creates gaps, as the unrepresentable becomes the unacknowledged and the unacknowledged becomes the unthinkable. To that end, following a discussion of some of the problems with the way that the maritime is often considered in literary, historical, cultural, and geographical studies, I suggest three, related alternative perspectives that directly engage the ocean's fluid mobility and its tactile materiality. To be clear, my aim is not to deny the importance of either the human history of the ocean or the suggestive power of the maritime metaphor. Rather, I am asserting that in order to fully appreciate the ocean as a uniquely fluid and dynamic space we need to develop an epistemology that views the ocean as continually being reconstituted by a variety of elements: the non-human and the human, the biological and the geophysical, the historic and the contemporary. Only then, can we think with the ocean in order to enhance our understanding of and visions for the world at large. The K overtheorizes the ocean, and leads to a misunderstanding of human interactions with the ocean. The plan is necessary to balance social theory with oceanographic practice to best understand the ocean as a spatial phenomenon. Steinberg, Professor of Geography at Florida State, 13 (Philip E. 
Steinberg is Professor of Geography at Florida State University and Marie Curie International Incoming Fellow at Royal Holloway, University of London, “Of other seas: metaphors and materialities in maritime regions”, Atlantic Studies, 2013, Vol. 10, No. 2, http://dx.doi.org/10.1080/14788810.2013.785192) The late twentieth century saw the ocean rise to the forefront of the humanities from two different perspectives. Since Fernand Braudel's classic work on the Mediterranean, scholars have sought to replace the terrestrial bias in historical and literary studies with one that focuses on ocean regions.8 Land-based regionalizations, whether centered on the community, the nation-state, or the continent, typically privilege settlements, place-based identities, and the development of stable social institutions, most notably those associated with state power. By contrast, advocates of an ocean basin-based (or maritime) regionalization contend that their alternate perspective gives greater prominence to the cultural and economic interchange between societies that is the hallmark of historical and modern political economy.9 The trend toward ocean basin-based regionalizations has accelerated in recent decades, with numerous interdisciplinary conferences, working groups, books, and journals (including, of course, Atlantic Studies) focusing on the study of one or another maritime region. Typically, the geographic scope of each region is defined by a central sea and depending on the disciplinary focus of the conference, working group, book, or journal its limits are those of that sea's historical, cultural, economic, or geopolitical watershed.10 While this is a welcome trend, it is also problematic. All too often, the ocean that binds the societies of the ocean region is undertheorized: reduced in the scholarly literature to a surface, a space of connection that merely unifies the societies on its borders. Thus, when Arif Dirlik asks, "What is in a rim?" with reference to the Pacific basin, his response inadvertently reinterprets the question as "What is on a rim?" or "What passes through the space in the middle of the rim?" He states: "The material basis [of the Pacific rim] is defined best not by physical geography but by relationships (economic, social, political, military, and cultural) that are concretely historical, . . . [by] motions of people, commodities, and capital."11 The ocean region thus comes to be seen as a series of (terrestrial) points linked by connections, not the actual (oceanic) space of connections. The material space in the middle what is actually in the rim drops off the map. If this turn toward ocean region studies which broadly can be associated with historically informed political economy undertheorizes the ocean, the second foundation for the rise of ocean region studies which can be associated with poststructuralist critical theory overtheorizes the ocean. For scholars in this second group, the ocean is an ideal medium for rethinking modernist notions of identity and subjectivity and the ways in which these are reproduced through land-centered divisions and representations of space. 
Thus, for Deleuze and Guattari the ocean is the “smooth space par excellence,” a space that lies apparently, if provisionally, apart from the striations that make difference calculable and amenable to hierarchy.12 Similarly, in his unpublished but oft-cited essay “Of Other Spaces,” Michel Foucault calls the ship at sea the “heterotopia par excellence,” a space of alternate social ordering.13 These assertions, in turn, are frequently reproduced by scholars who pay little attention to the actual lives of individuals who experience and interact with the sea on a regular, or even occasional, basis. The disconnect between the idealized sea of poststructuralist theorists and the actual sea encountered by those who engage it is captured in David Harvey's response to Foucault's declaration that “in civilizations without boats, dreams dry up, espionage takes the place of adventure and police take the place of pirates.” “I keep expecting these words to appear on commercials for a Caribbean Cruise,” writes Harvey. “. . . And what is the critical, liberatory and emancipatory point of that? . . . I am not surprised that [Foucault] left the essay unpublished.”14 For scholars in this second, poststructuralist, group, the ocean is not so much ignored as it is reduced to a metaphor: a spatial (and thereby seemingly tangible) signifier for a world of shifting, fragmented identities, mobilities, and connections. While metaphors provide powerful tools for thought, spatial metaphors can be pernicious when they detract attention from the actual work of construction (labor, exertions of social power, reproduction of institutions, etc.) that transpires to make a space what it is.15 Thus, the overtheorization of ocean space by poststructuralist scholars of maritime regions is as problematic as its undertheorization by political economy-inspired scholars. In this light, it is interesting to compare Dirlik's Pacific Rim with Paul Gilroy's The Black Atlantic.16 At first glance, Gilroy seems to cover the material (and the space) ignored by Dirlik. Whereas the distance and materiality of the ocean inside Dirlik's Pacific Rim are seamlessly transcended by the circuits of multinational capital, the space in the middle (the Atlantic) and the frictions encountered in its crossing are central for Gilroy. The Black Atlantic is primarily a book about the connections that persist among members of the African diaspora and the ungrounded, unbounded, and multifaceted identities that result, and the trope of the Middle Passage is deployed throughout the book to reference the travel of African-inspired ideas and cultural products, as well as bodies, that continues to this day. Nonetheless, even as Gilroy appears to reference the ocean, the ultimate target of these references is far removed from the liquid space across which ships carrying Africans historically traveled. In fact, the geographic space of the ocean is twice removed from the phenomenon that captures Gilroy's attention: it is used to reference the Middle Passage which in turn is used to reference contemporary flows, and by the time one connects this chain of references the materiality of the Atlantic is long forgotten. Venturing into Gilroy's Black Atlantic, one never gets wet. The problem, then, is not that studies that reference an oceanic center lack empirical depth.
Rather, the problem is that the experiences referenced through these studies typically are partial, mediated, and distinct from the various non-human elements that combine in maritime space to make the ocean what it is. This then leads us back to Blum's call for a turn to actual experiences of the sea, as have been chronicled by anthropologists, labor historians, and historical geographers, as well as in maritime or coastal-based fiction. Unfortunately, a scholar of (Western) literature or history who pursues this agenda soon runs into methodological limits. As John Mack notes, Western accounts of “life at sea,” whether fictional or historical, are typically about “life on ship,” as they fail to attend to the surface on which the ship floats, let alone what transpires beneath that surface.17 And yet, contrary to Dirlik's dismissal, the physical geography of the ocean does matter. How we interact with, utilize the resources of, and regulate the oceans that bind our ocean regions is intimately connected with how we understand those oceans as physical entities: as wet, mobile, dynamic, deep, dark spaces that are characterized by complex movements and interdependencies of water molecules, minerals, and nonhuman biota as well as humans and their ships. The oceans that unify our ocean regions are much more than surfaces for the movement of ships (or for the movement of ideas, commodities, money, or people) and they are much more than spaces in which we hunt for resources. Although these are the perspectives typically deployed in human-centered sea stories (i.e. the ones advocated by Blum), such perspectives only begin to address the reality of the sea that makes these encounters possible. Rather, the oceans that anchor ocean regions need to be understood as “more-than-human” assemblages,18 reproduced by scientists,19 sailors,20 fishers,21 surfers,22 divers,23 passengers,24 and even pirate broadcasters25 as they interact with and are co-constituted by the universe of mobile non-human elements that also inhabit its depths, including ships, fish, and water molecules.26 Although the actions and interests of humans around the ocean's edges and on its surface certainly matter, a story that begins and ends with human “crossings” or “uses” of the sea will always be incomplete. The physical boundaries of a maritime region are indeed human-defined, as Dirlik asserts, but the underlying, and specifically liquid nature of the ocean at its center needs to be understood as emergent with, and not merely as an underlying context for, human activities. Ocean exploration is essential to transforming our understanding of the oceans. The plan enables a fundamental ontological shift in our understanding of space. Mathematical and scientific oceanographic techniques are a precondition to the alternative. Steinberg, Professor of Geography at Florida State, 13 (Philip E. Steinberg is Professor of Geography at Florida State University and Marie Curie International Incoming Fellow at Royal Holloway, University of London, “Of other seas: metaphors and materialities in maritime regions”, Atlantic Studies, 2013, Vol. 10, No. 2, http://dx.doi.org/10.1080/14788810.2013.785192) Of course, land is also, in a geological sense, mobile.
Doreen Massey points this out as she uses the geological mobility of land to undermine modernist notions of place as static and amenable to development along a single trajectory.29 However, I would assert that the mobility of water is qualitatively different because its fluidity is inevitably experienced by anyone who actually encounters its physicality (as opposed to observing its representation on a map). It is readily apparent to the untrained observer that water is constituted by moving molecules and by forces that push these molecules through space and time. By contrast, the invisibility of plate tectonic movement endows terrestrial space with an aura of stability that is expressed in an idealization of place that transcends the vicissitudes of time and movement; indeed, it is the power of this image on land that prompts Massey to destabilize place by turning to the hidden mobilities of plate tectonics. To develop ways for understanding the ocean as a uniquely mobile and dynamic space, as well as one with depth, it is useful to turn to the tools of oceanography, a discipline rarely engaged by humanities-oriented scholars (or, for that matter, social scientists) who adopt a regional seas perspective. In particular, I turn here to the distinction that oceanographers make between Eulerian and Lagrangian modeling techniques.30 Oceanographers who work from a Eulerian perspective measure and model fluid dynamics by recording the forces that act on stable buoys. Eulerian researchers compare the presence and characteristics of these forces at different points in an effort to identify general patterns across space and time. Eulerian research remains dominant in oceanography, perhaps because it mimics the terrestrial spatial ontology wherein points are fixed in space and mobile forces are external to and act on those points, or perhaps because the alternative is both costlier and mathematically more complex.31 From the Eulerian perspective, as in the modernist ontology that tends to inform our understanding of regions (whether they are defined by a central continent or by a central ocean), matter exists logically prior to movement. The fixed points of geography, represented in the world of Eulerian oceanography by buoys, would persist even in the absence of the forces of movement that cross the space between and beyond these points. Likewise, from this perspective, London and New York would exist as points on a map and, if they were settled, they would have social dynamics and institutions, even if they did not have centuries of linkages as nodes in a trans-Atlantic economy. The alternative is to adopt a Lagrangian perspective wherein movement, instead of being subsequent to geography, is geography. Oceanographers working from this perspective trace the paths of “floaters” that travel in three-dimensional space, with each floater representing a particle, the fundamental unit in Lagrangian fluid dynamics. Movement is defined by the displacement across space of material characteristics within mobile packages, not abstract forces, and these characteristics are known only through their mobility.32 In other words, objects come into being as they move (or unfold) through space and time. Conversely, space ceases to be a stable background and becomes part of the unfolding. The world is constituted by mobility without reference to any stable grid of places or coordinates.
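[Editor's illustrative aside, not part of the Steinberg evidence: the Eulerian/Lagrangian distinction the card describes can be made concrete with a minimal numerical sketch. The velocity field, buoy coordinates, and step sizes below are invented for illustration only; the point is that a Eulerian record samples the flow at fixed coordinates, while a Lagrangian record is the trajectory of a floater, so movement itself is the datum.]

import math

def velocity(x, y, t):
    # Toy, time-varying 2D velocity field (purely hypothetical).
    return math.sin(y + 0.5 * t), math.cos(x - 0.3 * t)

# Eulerian view: sample the flow at fixed "buoy" coordinates over time.
buoys = [(0.0, 0.0), (1.0, 2.0)]
eulerian_record = {b: [velocity(b[0], b[1], step * 0.1) for step in range(100)] for b in buoys}

# Lagrangian view: follow a "floater" by integrating dX/dt = u(X, t)
# with simple forward-Euler steps; the path itself is the measurement.
x, y, dt = 0.0, 0.0, 0.1
path = [(x, y)]
for step in range(100):
    u, v = velocity(x, y, step * dt)
    x, y = x + u * dt, y + v * dt
    path.append((x, y))

# 100 samples taken at a fixed point vs. 101 positions along a trajectory.
print(len(eulerian_record[(0.0, 0.0)]), len(path))

[The same toy field yields both records; only the Lagrangian one treats movement as constitutive of the object rather than as a force acting on a fixed point.]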
From this perspective, movement is the foundation of geography.33 To return to the previous example, London and New York exist as they are only in their continual reconstruction through flows of connectivity. These connections (and the space central to these connections, the ocean) can be seen only as constitutive parts/processes of the cities, not as manifestations of their external functions. Although not specifically referencing oceanographic research, Manuel DeLanda elaborates on the conceptual links between, on the one hand, Deleuzian philosophy and, on the other hand, the Riemannian differential geometry that forms the mathematical basis for Lagrangian fluid dynamics.34 In both cases, there is an “absence of a supplementary (higher) dimension imposing an extrinsic coordinatization, and hence, an extrinsically defined unity.”35 Space, from this perspective, is less a thing or a stationary framework than a medium that is constantly being made by its dynamic, constitutive elements. My point in introducing this strand of fluid dynamics is not to suggest that the world of ocean-basin regions can be “modeled” in Lagrangian fashion. Rather, I discuss it to suggest an alternate route for developing decentered ontologies of connection. This is, after all, the explicit goal of the poststructuralist cultural studies wing of ocean region studies and it is even implicit among political economists who seek to denaturalize the assumed primacy of the (re)production-oriented terrestrial region (e.g. the territorial nation-state). However, as I noted in the previous section, all too often this agenda is pursued by scholars who reduce to a metaphor the ocean that lies at the center of the ocean region or, worse yet, who simply ignore it. Following, but also going beyond, Blum's provocation, I propose that, as part of the process of incorporating actual, lived experiences of the ocean into the studies of maritime regions, we need also to bring the ocean itself into the picture, not just as an experienced space but as a dynamic field that, through its movement, through our encounters with its movement, and through our efforts to interpret its movement, produces difference even as it unifies. A Lagrangian-inspired ontology may well provide a means for doing this. The K leads to an incomplete understanding of human interactions with the ocean. The focus on the ocean as solely a site of transportation of people or commodities prevents us from recognizing the ocean's fluid ontology. The permutation solves best. Steinberg, Professor of Geography at Florida State, 13 (Philip E. Steinberg is Professor of Geography at Florida State University and Marie Curie International Incoming Fellow at Royal Holloway, University of London, “Of other seas: metaphors and materialities in maritime regions”, Atlantic Studies, 2013, Vol. 10, No. 2, http://dx.doi.org/10.1080/14788810.2013.785192) In her review of recent ocean-related scholarship in social and cultural geography, Kimberley Peters asks, “Oceans and seas are three-dimensional, fluid and liquid, yet they are also undulating surfaces; how does the texture, the currents and the substance of the water impact contemporary social and cultural uses of that space?”46 Others have raised similar points.
For instance, Elizabeth DeLoughrey asserts, “Unlike terrestrial space, the perpetual circulation of ocean currents means that as a space, [the sea] necessarily dissolves local phenomenology and defracts the accumulation of narrative.”47 In a similar vein, Lambert, Martins, and Ogborn write, “Clearly, climatic, geophysical, and ecological processes belong in work on the sea . . . . Overemphasis on human agency, especially in accounts of the Atlantic, makes for a curiously static and empty conception of the sea, in which it serves merely as a framework for historical investigations, rather than being something with a lively and energetic materiality of its own.”48 Yet even those who advocate a “more-than-human” approach have difficulty incorporating the ocean's geophysicality, not just as a force that impacts humans but as part of a marine assemblage in which humans are just one component. Thus, Lambert, Martins, and Ogborn discuss narratives of the White Atlantic (European migration), Black Atlantic (postcolonial connections), and Red Atlantic (the Atlantic as a space of labor) but curiously leave out a Blue Atlantic (a geophysical space of dynamic liquidity), and their example of the North Atlantic circular system supporting the “triangular trade” culminates in a distinctly human set of patterns and interrelations in which, as with all maritime trade, the underlying water is idealized as absent.49 Despite their best intentions, the ocean environment, although recognized as being more complex than a mere surface, is still treated as “a framework for historical investigations.” A more systematic attempt to integrate geophysicality into our understanding of human activities in the sea can be seen in recently published works by Kimberley Peters and by Jon Anderson. Peters focuses on pirate radio broadcasters who are continually thwarted in their attempts to idealize the ocean as an abstract, extra-legal, extra-national space. Reflecting on the affective interaction between the maritime broadcaster and the sea, she conceptualizes a “hydromateriality” that incorporates mobile biota (both human and non-human) as well as technologies and objects.50 The geophysical properties of the ocean take on an even more profound role in Anderson's research on surfing. He uses the relationship between the surfer and the wave to explore how the assemblage perspective can be expanded (or modified) to interpret fleeting moments of socio-biological-geophysical convergence. This ontology of convergence may well characterize all moments in time, but its applicability is particularly profound in the ocean because of the ocean's underlying dynamism.51 Peters and Anderson propose just two of the many ways in which we can take the ocean seriously as a complex space of circulations. These circulations are comprised not just of the people, ideas, commodities, and ships that move across its surface or the fish who swim in its water. Rather, in a more fundamental way, the ocean is a space of circulation because it is constituted through its very geophysical mobility. As in Lagrangian fluid dynamics, movement is not something that happens between places, connecting discrete points on a “rim.” Rather, movement emerges as the very essence of the ocean region, including the aqueous mass at its center.
From this perspective, the ocean becomes the object of our focus not because it is a space that facilitates movement, the space across which things move, but because it is a space that is constituted by and constitutive of movement. This perspective not only enables us to understand the ocean in its entirety; it disassembles accepted understandings of relations between space and time, between stasis and mobility, and between human and non-human actants like ships, navigational aids, and water molecules. This perspective suggests an ambitious agenda, and one that goes well beyond more established goals in the ocean-region studies community, such as highlighting exchange over production or emphasizing the hybrid nature of cultural identities. And yet, it is only through engaging with the ocean in all its material complexity that we can develop the fluid perspective that allows us to use the sea to look beyond the sea. Enviro Reps Good Our representations are good - framing society as both the perpetrator of environmental degradation and the advocate for sustainability is key to effective and sustainable environmental policy Christie, School of Marine Affairs and Jackson School of International Studies, University of Washington, 11 [Patrick, 4/6/11, University of Washington, “Creating space for interdisciplinary marine and coastal research: five dilemmas and suggested resolutions,” https://depts.washington.edu/smea/sites/default/files/u43/Christie%20Multi%20Disc%20Researc.2011.Env%20Conserv.pdf, accessed 7/6/14, TYBG] Providing a more pluralistic form of research to guide coastal and marine policy will require reconceptualization of environmental problems and solutions. A fully social ecological conceptualization of problem and solution implies equal attention is paid to both social and ecological aspects. Environmental frameworks and policies, which are social constructs of which ecological conditions are only one of many considerations, are most effective when grounded in reliable and detailed understandings. For ethical, theoretical and practical reasons, the human dimension should not be reduced to mainly economic calculations of, albeit important, ecosystem services or quantified general principles (Campbell et al. 2009). Just as robust ecological research must span the breadth of natural history, population dynamics and genetics, social research should include, at a minimum, attempts to understand the social context over time, the environmental management process, institutional design principles, human adaptation and social impacts of policy with a consideration for justice (Campbell et al. 2009; Jones 2009). Recasting the position of society within social ecological research will create opportunities for balanced IR. In the predominant narrative of ocean decline and global policy response, society is generally reduced to the role of perpetrator of environmental degradation, with humans located outside of nature (Campbell et al. 2009). The conclusion that ocean resources are in a state of decline in many places is important and accurate, and has generated considerable impetus to alter ocean policy. Relatively little is known about the conditions and mechanisms through which society either prevents environmental degradation or actively restores environment.
For example, until the seminal work of political scientist Ostrom (1990) and others, the seemingly inevitable ‘tragedy of the commons’ was a predominant explanation for why much of the non-private environment was in decline (Hardin 1968). Casting society as both perpetrator of environmental degradation and advocate of environmental sustainability will allow for more meaningful research, theory and policy. Historicization Good Historicizing the ocean is important to understanding how human regulations and interactions affect marine life Bolster, Chair of Humanities at the University of New Hampshire, associate professor of history, 6 (W. Jeffrey, “Opportunities in Marine Environmental History,” Environmental History 11 http://fishhistory.org/wp-content/uploads/2010/05/BolsterEH2006.pdf, accessed 7/6/14, LLM) The ocean may be the next frontier for environmental historians. People have depended on the ocean for centuries and quietly reshaped it. Recently the tragic impact of overfishing, habitat destruction, and biological invasions has become apparent. Yet the history of human interactions with marine environments remains largely uninvestigated, in part because of the enduring assumption that the ocean exists (or existed) outside of history. Historians should take seriously the challenge to historicize the ocean. That will include investigating its changing nature and peoples’ historically specific assumptions about using and regulating it. Arguing that marine environmental history can complement on-going research in historical marine ecology, this essay invokes recent scientific work while staking out distinct terrain for historians. Historicizing the oceans is the key to understanding core issue revolving around oceans themselves Bolster, Chair of Humanities at the University of New Hampshire, associate professor of history, 6 (W. Jeffrey, “Opportunities in Marine Environmental History,” Environmental History 11 http://fishhistory.org/wp-content/uploads/2010/05/BolsterEH2006.pdf, accessed 7/6/14, LLM) This essay makes a case for the support and development of marine environmental history. We need to better understand many things: how different groups of people made themselves in the context of marine environments, how race, class, fashion, and geo-politics influenced the exploitation and conservation of marine resources, how individual and community identities (and economies) changed as a function of the availability of marine resources, how technological innovation frequently masked declining catches, how fishermen’s knowledge of localized depletions accumulated in the past, how public policy debates revealed historically specific values associated with the ocean, how collaboration between (and then antagonism among) fishermen and scientists affected marine environments, how faith in the certainty of marine science waxed and waned, how different cultures perceived the ocean at specific times, and—when possible— how past marine environments looked in terms of abundance and distribution of important species.18 These are the constituent parts that get to a deeper historical question: the nature of the greatest sea change in human history. Only good marine environmental history can get to the heart of the ecological and cultural transformations that have cast the twenty-first-century ocean as vulnerable rather than eternal. Despite obstacles and problems, preliminary work in this field makes it look immediately relevant, professionally challenging, and intellectually rewarding. 
Ocean Policymaking Ocean policies and a discussion of the human effect on oceans are key to resolving issues of environmental degradation Bolster, Chair of Humanities at the University of New Hampshire, associate professor of history, 6 (W. Jeffrey, “Opportunities in Marine Environmental History,” Environmental History 11 http://fishhistory.org/wp-content/uploads/2010/05/BolsterEH2006.pdf, accessed 7/6/14, LLM) No matter the sources they use, would-be marine environmental historians will need to address headon the quandary of disciplinary boundaries. Deeply rooted assumptions concerning the typology of knowledge, specifically what is of interest to whom in scholarly or scientific circles, has circumscribed the development of marine environmental history. Environmental historians (terrestrial ones, mind you) faced an uphill challenge convincing colleagues in history departments that aquifers, earthworms, forest succession, and bioregionalism were germane to history, even though every village and city throughout time relied on biological and geophysical resources, and affected its non-human natural surroundings. It goes without saying that humans’ reliance on, affection for, and intimacy with the ocean has been but a fraction of that of the land. Moreover, the results of humans’ environmental impact on the ocean have essentially remained invisible, hidden below an inscrutable surface. To be accepted, much less to flourish, marine environmental historians will need to constantly reiterate how abalone, oyster reefs, Bluefin Tuna (formerly referred to derisively as “horse mackerel”), invasive jellyfish, and marine foodwebs are the stuff of history; how, in other words, humans and the living ocean share a common destiny. The problems posed by the overstressed ocean today are not yet insurmountable according to some optimistic marine scientists, even though depletion of the oceans’ living resources is clearly worsening.64 If policies and enforcement don’t encourage conservation soon, however, the species composition of the oceans will change forever, impoverishing marine ecosystems, human economies, and cultural traditions. Questions are already begging for answers: “how long have people been making an impact on the ocean,” “when did warning signs first appear,” “what constellation of assumptions and policies led to a virtually unrestrained plunder of oceanic resources and the cascading effects that followed”? Those concerns, along with a desire to better understand the sociology of past maritime communities and a passion to tell a new generation of sea stories, provide a template for marine environmental history. Done well, it can add materially to our understanding of the interactions between human culture and non-human nature in the early modern and modern world. Lord Byron was wrong when he wrote in the early nineteenth century, “Man marks the earth with ruin, his control Stops with the shore.” It is up to historians in the early twenty-first century to explain what happened. 
Collaboration on ocean exploration policies is necessary to understand oceanic habitats and solve ecological destruction Costello et al 10 (Mark, Marta Coll, Institute of Marine Science (ICM-CSIC), Barcelona, Roberto Danovaro, Department of Marine Sciences, Polytechnic University of Marche, Ancona, Italy, Pat Halpin, Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America, Henn Ojaveer, Estonian Marine Institute, University of Tartu, Pärnu, Estonia, Patricia Miloslavich, Departamento de Estudios Ambientales and Centro de Biodiversidad Marina, Universidad Simón Bolívar, Caracas, Venezuela, “A Census of Marine Biodiversity Knowledge, Resources, and Future Challenges” Published: August 02, 2010, http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0012110&representation=PDF, accessed 7/6/14, LLM) The lack of a clear species-area relationship across the regions was indicative of the lack of sampling in major areas and habitats of the oceans, and insufficient species identification guides and taxonomic expertise. The more developed countries had more marine research laboratories and ships. However, they also suffered from insufficient knowledge for many taxonomic groups and declining taxonomic expertise [5,23,25]. That the number of experts did not correlate with any metrics of diversity, resources, or knowledge (except the number of endemic species) may indicate the variable distribution of expertise globally and even within a region, but may also have been influenced by the difficulty of defining who is an expert. Most undiscovered species are likely to be found in the tropics, deep seas, and seas of the Southern Hemisphere, including many developing countries. It is unlikely that every country needs expertise in every taxonomic group or large research facilities, so collaboration between countries, as already occurs informally, is critical to developing knowledge on all species. There is potential for further benefits, cost-efficiencies, and quality control in taxonomy, ecology, and resource management through collaboration between countries and international organisations. There appear to be roles here for organisations such as the Intergovernmental Oceanographic Commission of UNESCO and the Global Biodiversity Information Facility (GBIF) to coordinate cooperation between countries (reflecting their national memberships); the International Association for Biological Oceanography as part of the International Union of Biological Sciences and thus the International Council of Scientific Unions, which represent the national academies; and grass-roots taxonomic societies involved in networking through conferences and online databases (e.g., the Society for the Management of Electronic Biodiversity Databases, Crustacean Society). Ocean Knowledge Despite some understanding of the world's oceans, there are still many untapped pools of knowledge to be attained Webb, Department of Animal and Plant Sciences, University of Sheffield, et al 10 Thomas J. and Edward Vanden Berghe, Ocean Biogeographic Information System, Institute of Marine and Coastal Sciences, Rutgers University, New Brunswick, New Jersey, United States of America, and Ron O'Dor, Census of Marine Life, Consortium for Ocean Leadership, Washington, D.
C., United States of America, “Biodiversity's Big Wet Secret: The Global Distribution of Marine Biological Records Reveals Chronic Under-Exploration of the Deep Pelagic Ocean”, Published: August 02, 2010DOI: 10.1371/journal.pone.0010223, http://web.b.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=2&sid=2a1e8d35-45cf-4f1e-ac7693f5bbc10b4b%40sessionmgr110&hid=125, accessed 7/6/14, LLM) The tragedy of studying biodiversity during an extinction crisis is that we are losing our subject matter faster than we are able to describe it [1]. This is especially true in the marine environment, where the need to value and conserve taxa and habitats that we know little about has been termed a paradox of marine conservation [2]. Recent efforts by international networks such as the Marine Biodiversity and Ecosystem Functioning EU Network of Excellence (www.marbef.org) and the Census of Marine Life (www.coml.org) have substantially advanced our knowledge of the marine diversity of specific regions [3,4] and habitats [5], in large part by harnessing the power of integrated databases [6]. As well as highlighting what we know about marine biodiversity, however, such databases also allow us to quantify what we do not know. For instance, global synthetic analyses have revealed that even for the best known marine taxa, regional inventories remain worryingly incomplete [7]. Spatial biases are also apparent. In particular, the deep pelagic ocean is revealed as biodiversity’s big wet secret. The marine pelagic environment is the open oceans and seas, away from the coasts and above the sea bed; and the deep pelagic ocean is typically defined as that part of the water column deeper than 200m. It constitutes a vast biovolume of space in which organisms can exist – by far the largest on Earth at over a billion km3 [8-11]. We know that this vast realm and the organisms living in it provide globally important ecosystem services [11], including the support of fisheries, the provision of a range of natural products of potential use in medicine and other applications, as well as the regulation of climate and ocean chemistry through the capture and storage of atmospheric carbon and the production of marine carbonate. But, the limits of our knowledge of this system are continually exposed by the regular discovery of new clades of often large, active and conspicuous organisms [12] whenever surveys are undertaken. Even a charismatic, widely distributed and very large species, the megamouth shark Megachasma pelagios, was not discovered until 1976, and has since been recorded so rarely that each individual specimen has become well known [13]. Our understanding and framing of the oceans incorrectly assumes the ocean exists outside of what we know as history Bolster, Chair of Humanities at the University of New Hampshire, associate professor of history, 6 (W. Jeffrey, “Opportunities in Marine Environmental History,” Environmental History 11 http://fishhistory.org/wp-content/uploads/2010/05/BolsterEH2006.pdf, accessed 7/6/14, LLM) This literary and spiritual sense of the ocean’s immortality, the idea that it rolled on before human life existed, and that it will roll on changelessly thereafter, long contributed to the fundamentally flawed assumption that the ocean, unlike forests, plains, and deserts, has always existed outside of history. The myth of the timeless ocean has been so seductive that even professional historians have succumbed. 
“To stand on a sea-washed promontory looking westwards at sunset over the Atlantic is to share a timeless human experience.” So begins Barry Cunliffe's Facing the Ocean: The Atlantic and Its Peoples, 8000 BC-AD 1500. “We are in awe of the unchanging and unchangeable as all have been before us and all will be.” This is a rather ahistorical opening to a history book. It suspends attention to the viewers' cultural frames of reference and to changes in the sea. But it is remarkably similar in its mythic content to the first two lines of medievalist Vincent H. Cassidy's The Sea Around Them: The Atlantic Ocean, A.D. 1250. “No gesture is equal in futility to scratching the surface of the sea. Although many a momentary wake left by some frail ocean-borne craft has been of permanent significance to mankind, the ocean has made more of an impression upon men than they have made upon the ocean.”24 This conceptual stumbling block has impeded the development of marine environmental history. A new generation of historians can make their mark by delineating how cultural assumptions about the oceans (and the oceans themselves) have changed through time, sometimes dramatically within a short span of years. People today neither use nor imagine the oceans in the same ways as their ancestors.25 Until very recently it has been difficult for historians to imagine the unsustainability of industrial fisheries, much less pre-industrial ones. The idea of the eternal sea, after all, had intellectual legitimacy for centuries. In the first half of the eighteenth century Baron du Montesquieu asserted that oceanic fish were limitless. J. B. Lamarck concurred. “But animals living in the waters, especially the sea waters,” he wrote in 1809 in his Zoological Philosophy, “are protected from the destruction of their species by man. Their multiplication is so rapid and their means of evading pursuits or traps are so great, that there is no likelihood of his being able to destroy the entire species of any of these animals.” Four-fifths of the species that inhabit the Earth are unknown to science, many of them in the oceans, and we've defaulted to trivial taxonomy practices Costello et al 10 (Mark, Marta Coll, Institute of Marine Science (ICM-CSIC), Barcelona, Roberto Danovaro, Department of Marine Sciences, Polytechnic University of Marche, Ancona, Italy, Pat Halpin, Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America, Henn Ojaveer, Estonian Marine Institute, University of Tartu, Pärnu, Estonia, Patricia Miloslavich, Departamento de Estudios Ambientales and Centro de Biodiversidad Marina, Universidad Simón Bolívar, Caracas, Venezuela, “A Census of Marine Biodiversity Knowledge, Resources, and Future Challenges” Published: August 02, 2010, http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0012110&representation=PDF, accessed 7/6/14, LLM) The resources available for research are always limited. When setting priorities for research funding, governments, industry, and funding agencies must balance the demands of human health, food supply, and standards of living, against the less tangible benefits of discovering more about the planet's biodiversity. Scientists have discovered almost 2 million species indicating that we have made great gains in our knowledge of biodiversity. However, this knowledge may distract attention from the estimated four-fifths of species on Earth that remain unknown to science, many of them inhabiting our oceans [1,2].
The world’s media still find it newsworthy when new species are discovered [1]. However, the extent of this taxonomic challenge no longer appears to be a priority in many funding agencies, perhaps because society and many scientists believe we have discovered most species, or that doing so is out of fashion except when new technologies are employed. Another symptom of this trend may be that the increased attention to novel methods available in molecular sciences is resulting in a loss of expertise and know-how in the traditional descriptive taxonomy of species [3]. The use of molecular techniques complements traditional methods of describing species but has not significantly increased the rate of discovery of new species (at least of fish), although it may help classify them [4]. At least in Europe, there was a mismatch between the number of species in a taxon and the number of people with expertise in it [5]. Unfortunately, because most species remain to be discovered in the most species-rich taxa [2,5,6,7], there are then few experts to appreciate that this work needs to be done. Evidently, a global review of gaps in marine biodiversity knowledge and resources is overdue. Ocean Policy Debate Key Critiques of ocean policy fail to engage productive reform – only a balanced debate can provide solutions to problems Campbell et. Al., Nicholas School of Environment, Duke University, 9 [Lisa M., Noella J. Gray, Elliott L. Hazen, Janna M. Shackeroff, “Beyond Baselines: Rethinking Priorities for Ocean Conservation,” Ecology and Society, Vol. 14, No. 1, TYBG] Although Bolster poses these questions as ones of historical interest, they can be recast in the present. We need to better understand many of these things now, and how they will play out as we move from describing the past to charting the future. To provide just one example, Aswani and Hamilton (2004) illustrate how an understanding of local marine tenure regimes and attitudes toward management in the Western Solomon Islands can (and should) be used to guide conservation interventions and predict their success. Broader interdisciplinary collaboration will enhance the analysis of both problems and potential solutions, and may also help avoid the divide that has arisen in terrestrial conservation, where interactions between social and natural scientists have been characterized as a “dialog of the deaf” (Agrawal and Ostrom 2006). The impasse is attributed to both groups: natural scientists who are hostile and resistant to critiques of their efforts, and social scientists who, having delivered such critiques, fail to engage in constructive policy reform (Redford et al. 2006). Although there are hints that such divides could emerge in the marine realm [e.g., Pitcher (2005) dismisses as uniformed the critiques of SBS by unnamed social scientists], we hope that wider, earlier engagement of social and natural scientists can put marine research and conservation on a more productive trajectory. Science Good Science is key to problem solving and informed policymaking Pena, Chair of the International Council for Science, 4 (J. A. De La, December 2004, “The Value of Basic Scientific Research,” http://www.icsu.org/publications/icsu-position-statements/value-scientific-research, accessed 7-92014, LK) Major innovation is rarely possible without prior generation of new knowledge founded on basic research. Strong scientific disciplines and strong collaboration between them are necessary both for the generation of new knowledge and its application. 
Retard basic research and inevitably innovation and application will be stifled. New scientific knowledge is essential not only for fostering innovation and promoting economic development, but also for informing good policy development, and as a sound foundation for education and training. Notwithstanding, it is sometimes argued at a national level that investment in research should focus primarily, or even exclusively, on the use of existing information to develop applied solutions. Superficially at least, such an approach appears to be facilitated by the emergence of a global society, linked by internet and a continuous flow of information that anyone is able to access and use. Whilst an exclusive focus on application may have some merit in the short-term, there are several reasons why neglecting basic research is seriously flawed in the longer-term: Basic and applied science are a continuum. They are inter-dependent. The integration of basic and applied research is crucial to problem-solving, innovation and product development. Negative Inherency Funding The status quo solves - $23 million in federal funding has been allocated, and other research vessels exist Providence Business News, 11 [9/12/11, “New NOAA research vessel Okeanos to call Quonset home,” Providence Business News, TYBG] A deep-sea exploration ship, Okeanos will be for the next decade at Pier One in the Port of Davisville. According to a news release, the facility includes 8,280 square feet of energy-efficient space for NOAA's Office of Marine and Aviation Operations, which will contain office space for the ship's support staff and warehouse space. Okeanos Explorer is 224 feet in length with a beam of 43 feet and a draft of 15 feet. The ship can embark 46, including crew members and those assigned to mission support. According to the release, Reed secured more than $23 million in federal funding to make Okeanos Explorer the first U.S. government ship dedicated solely to ocean exploration and to bring it to Rhode Island. The Okeanos will join the Endeavor, a smaller research vessel owned by the National Science Foundation and home-ported in Narragansett at the University of Rhode Island Bay Campus and NOAA Partnerships The aff is indistinguishable from the status quo – NOAA exploration and tech company partnerships solve extensive ocean exploration Schectman, reporter for the Wall Street Journal, 13 [Joel, 7/19/13, Wall Street Journal, “Government and Tech Companies Plan Exploration of Oceans,” http://blogs.wsj.com/cio/2013/07/19/government-and-tech-companies-plan-exploration-of-oceans/, accessed 7/11/14, TYBG] The U.S. National Oceanic and Atmospheric Administration is meeting today with scientists and technology companies like Google Inc., in Long Beach, Calif., to create a national plan to explore and map the 3.3 million miles of ocean that fall under America's sovereignty — a size nearly equal to the continental U.S. That understanding could help the U.S. discover new fuel sources, better regulate the nation's fishing resources and preserve endangered species. The complexity and remoteness of ocean floors have stalled these efforts for decades, says Stephen Hammond, a senior scientist at NOAA. “Ocean scientists would be hard pressed to tell you what the sea floor looks like and what are the animals that live there. That's very surprising to most people,” Mr. Hammond said. But new data systems, says Mr.
Hammond, that allow for more collaboration and access by the world's scientists are beginning to "blow the doors open" on the world's ocean systems. The complexity of ocean systems, with their interplay of tidal forces, animal species, and underwater geography, has frustrated previous efforts at understanding the ecology below 75% of the world's surface. The sciences involved in ocean exploration have been "stove-piped," with researchers specializing in the migration of whales or ocean currents and not working together towards "an interconnected understanding of the system," said Larry Mayer, a University of New Hampshire oceanographer, who is participating in the planning session. For example, scientists were unable to fully understand the effect of the 2010 Deepwater Horizon spill on the Gulf's sea creatures, Mr. Mayer said. "We're not yet at the point where we have this overall view of the complete ecosystem model for a system as complex as Gulf of Mexico," said Mr. Mayer, who sat on a National Academy of Sciences committee that advised government on the issue. "What we have are little models of subcomponents. But we don't have comprehensive models of how the ecosystems interact — particularly in the deep sea." Advances in data tools, which allow scientists to layer maps with thousands of separate information sources, now make that three-dimensional understanding possible, Mr. Mayer said. "We are just now at the point where we can use these tools to look at the system in its entirety," Mr. Mayer said. NOAA says the exploration will involve dozens of public and private partnerships, with technology companies like Esri Inc., the mapping software firm, and Google, which are both participating in the planning forum. For example, Esri Chief Scientist Dawn Wright says sensors placed on whales, and data sent from vessels, will help policy makers and companies use Esri software to get real time information on whether a shipping lane is affecting an animal population. China Solves China solves – it just renewed funding for its ocean exploration program Fan, writer for Chinese news source ECNS, 14 [Wang, 7/3/14, ECNS, “China takes lead in underwater exploration,” http://www.ecns.cn/2014/0703/122159.shtml, accessed 7/11/14, TYBG] The Jiaolong submersible won the 2014 Hans Hass Fifty Fathoms Award in Sanya, Hainan province, in June. The award is jointly given by the Historical Diving Society Hans Hass Award Committee and Swiss watchmaker Blancpain. The submersible, independently developed in China, reached as deep as 7,062 meters in the Mariana Trench in the western Pacific Ocean in 2012, setting a new record among Chinese divers. The committee initiated a double prize for Cui Weicheng, deputy chief designer of Jiaolong, for his individual achievements, and the State Oceanic Administration for its support in building the submersible. The award has been honoring individuals for excellence in underwater science and technology since 2003. Previous recipients include renowned film director and diving pioneer James Cameron and Stan Waterman, pioneering underwater film producer and photographer. This is the first time a Chinese project has won the award. "Today, it is China that is leading the world in its commitment to manned deep ocean exploration," says Krov Menuhin, chairman of the award committee and advisory board member at the Historical Diving Society, an international non-profit organization that studies man's underwater activities and promotes public awareness of the ocean.
"And the far-sighted vision, the courage and the immense engagement to implement this program is in keeping with the pioneering spirit of Hans Hass. He entered the ocean with the same vision, courage and commitment," he says. The winners received a framed cast bronze plaque, with an image of Hans Hass, designed by ocean artist Wyland. And Blancpain presented them Fifty Fathoms Bathyscaphe diving watches with specially engraved cases. The brand will serve as the official time keeper for Jiaolong's future underwater expeditions. It also announced a collaboration with the State Oceanic Administration to launch projects to raise public consciousness of the ocean in China in the coming years. The details are still being discussed. "We are very impressed with Jiaolong with its ability to constantly dive into new depths, especially its crew, whose courage, focus and action enabled them to reach new frontiers all the time," says Marc Junod, vice-president and head of sales at Blancpain. The research and development of Jiaolong basically started from zero in 2002. None of the crew members had seen, let alone been in, a virtual submersible before. Fu Wentao, one of the oceanauts of Jiaolong, shared his experience underwater, including encounters with curious creatures. "Unlike the terrestrial creatures, those under the water are not cautious at all. They are actually very curious and will even swim toward us," Fu says. Cui is planning to launch a project to develop a submersible that will be able to dive as deep as 11,000 meters with financial support from both the government and the private sector. "The combination will fuel faster development in underwater science," Cui says. "The sea is vast and rich, but we have a lot of research to do before we can exploit it." While funds for the financing of manned deep-ocean explorations in the West are drying up, China has just committed to a long-term project that will change the way everyone thinks about the sea, says Menuhin. Solvency NOAA Bad – Fund Siphoning NOAA will redirect the aff’s funds – tanks solvency Rein, writer for the Washington Post, 12 [Lisa, 6/20/12, Washington Post, “Congress to allow National Weather Service to reconfigure budget,” http://www.washingtonpost.com/politics/congress-to-allow-national-weather-service-to-reconfigurebudget/2012/06/20/gJQAG8jVrV_story.html, accessed 7/12/14, TYBG] Congress will allow the National Weather Service to reallocate $36 million to stave off furloughs of 5,000 employees this summer, lawmakers said Wednesday. But they said they are no closer than they were a month ago to an explanation for why the weather service moved millions of dollars a year that Congress approved for other projects to pay employees, without asking permission. The decision to allow the practice “does not conclude the committee’s examination into the [National Weather Service’s] long standing budget formulation and execution problems,” Sen. Barbara A. Mikulski (D-Md.) and Kay Bailey Hutchison (R-Tex.) wrote Wednesday in a letter to Acting Commerce Secretary Rebecca Blank. The senators are chairman and ranking Republican, respectively, of the Appropriations subcommittee on commerce, justice, science and related agencies, which approved the request, called a “reprogramming.” The money will come largely from funding for long-term capital projects, including a planned system to provide weather data using advanced technology. A similar House panel headed by Rep. Frank Wolf (R-Va.) 
is scheduled to hold a hearing Thursday on the practice of reallocating money from one department budget to another without asking Congress. “We just want to get to the bottom of it,” Wolf said Wednesday. “If they had asked for the reprogramming, we would have approved it. Why didn’t they just come up and ask?” Wolf said his panel is likely to approve the reprogramming request. Officials with the National Oceanic and Atmospheric Administration, which oversees the weather service, did not respond to a request for comment on Congress’ action. Since they revealed the budget problems in May following an internal investigation, NOAA leaders have said little except that they were not aware that the practice was going on. The investigation prompted the abrupt retirement of the weather service’s director, John L. “Jack” Hayes, after the agency’s chief financial officer was replaced. Union leaders have said NOAA was long aware that the weather service could not pay for critical forecast operations without reallocating money from other projects, but did not address the problem. “We thank them for looking out for the weather service and going the extra mile,” Dan Sobien, president of the National Weather Service Employees Organization, said Wednesday. “The real question is what happens next year.” The Senate and House have included millions of dollars in additional funding for the agency in next year’s budget. After the weather service made a formal request to reallocate the $36 million, Mikulski and Hutchison said they would not agree until they knew why the agency had “manipulated” its budget. But negotiations sped up in recent weeks after NOAA officials notified lawmakers and the union that furloughs, while a last resort, were possible. With labor costs of $2 million a day, the weather service said it could not pay forecasters and other employees through September, the end of the fiscal year. NOAA Bad – Wasteful Spending NOAA management fails and wastes taxpayer dollars Travis, writer for the American Association for the Advancement of Science, 94 [John, 7/8/94, “NOAA's "Arks" Sail Into a Storm,” Science, New Series, Vol. 265, No. 5169, pg. 176-178, TYBG] Rough seas are pounding against the National Oceanic and Atmospheric Administration (NOAA) these days, as waves of criticism have battered plans to rejuvenate its aging oceanographic fleet. Over the past 2 years, NOAA and its $1.9-billion blueprint for fleet renewal have taken hits by groups ranging from an advisory committee for the Commerce Department--NOAA's parent department--to Vice President Al Gore's National Performance Review. The latest blast came in April, when a committee of the Marine Board of the National Research Council--in a report written at NOAA's own behest--condemned the plan as unrealistic, misleading, and a potential waste of taxpayer money. "We told the truth. It's just a flawed plan," says oceanographer Donald Walsh of International Maritime Inc., who headed the review panel. The object of all this scorn is NOAA's Fleet Replacement and Modernization plan (FRAM), potentially the largest shipbuilding program in the history of oceanography. Without replacing aged vessels and updating research equipment, agency officials say, NOAA will soon lose the ability to carry out many of its scientific missions, such as annual studies of U.S. fisheries, numerous ocean and atmospheric circulation investigations of global warming and other climate concerns, and the production of accurate charts for maritime commerce. 
Even critics like Walsh note these are important tasks. "They do a lot of marine scientific research not done by others," he says. But critics also say that in its rush to build a new fleet, the agency has ignored other, more cost-effective data-gathering options such as chartering private ships, contracting out research tasks, or using airplane-borne technology. There are growing signs that Congress, which until now has strongly supported NOAA's shipbuilding aspirations, may take heed of these rebukes--and as a result, the agency may be forced to rethink its ambitious plans. In light of current budget realities, "NOAA is going to reassess the number and types of platforms we need," says Admiral William Stubblefield. director of the agency's FRAM office. NOAA Bad - Cost Overruns NOAA is especially prone to cost overruns – there’s no accountability and oversight fails Morello, writer for the Joint Ocean Committee, 6 [Lauren, 7/14/6, Joint Ocean Committee, “APPROPRIATIONS: Senate panel boosts ocean, fisheries research at NOAA,” http://www.jointoceancommission.org/news-room/in-the-news/2006-0714_Senate_panel_boosts_ocean_&_fisheries_research_at_NOAA@E&E_Daily.pdf, accessed 7/12/14, TYBG] "If not for DOD, this committee wonders at what point NOAA would have acted on its own to report the cost overruns and conduct its own recertification," the CJS committee report reads, echoing similar harsh report language approved by House appropriators (E&E Daily, June 21). The Senate report goes on to cite "NOAA's history of passive oversight" as justification for the spending cuts, noting that the Senate will withhold $100 million in NPOESS funds from NOAA until the agency contracts a "nonprofit research organization," presumably the National Academy of Sciences, to conduct a cost and operational effectiveness analysis of NPOESS. NOAA Bad – Oversight Failure NOAA oversight fails – insufficient planning Cantwell, Senator from Washington, 10 [Maria, 6/29/10, Maria Cantwell Senator, “Cantwell: Top-to-Bottom Mismanagement of NOAA Home Porting Decision, Millions of Taxpayer Dollars Could be Wasted,” http://www.cantwell.senate.gov/public/index.cfm/2010/6/post-38ade549-e8c2-4dc8-86cab0d80faf58e4, accessed 7/12/14, TYBG] The IG report found flaws in the way NOAA handled the competition for the home port site selection, though it said elimination of those flaws would not have changed the outcome. The most serious problems, according to the IG, occurred before the beginning of the competition. “In our view, the more fundamental problems pertain to NOAA’s process prior to the competitive lease process,” the IG wrote in the letter. “A primary cause of these problems is grounded in the fact that NOAA did not subject the MOC-P project to a rigorous capital investment planning and oversight process….While the Department has a clear real property policy, NOAA did not follow it. NOAA thus proceeded with requirements for its desired option of a consolidated MOC-P facility and an operating lease, based on justification and consideration of alternatives that on their face and without additional documentation were significantly lacking.” Applied Science Good Applied Research Good Applied research is good – it’s key to US competitiveness, innovation in technology and medicine Holden, president and CEO of Research Triangle Park-based RTI International, 12 [E. 
Wayne, 8/23/12, News Observer, “The worth of applied research,” http://www.newsobserver.com/2012/08/23/2285016/the-worth-of-applied-research.html, accessed 7/11/14, TYBG] With the upcoming election and looming “fiscal cliff” due to sequestration of funds combined with projected tax increases, our choices have become fewer and are now much tougher. These difficult choices threaten many areas of the federal discretionary budget, including research and development funding, which represents a small but vital investment in the future of our nation and economy. Most of the commentary in support of federal R&D spending has thus far centered on the economic payoffs in biotechnology, medical breakthroughs, new technologies and the resulting job creation from these investments. Experts also have contrasted stagnant U.S. government spending to support R&D with substantial increases in government spending in other parts of the world, most notably Asia, where China’s year-over-year R&D spending increases are accelerating. Sustaining our investments in R&D is critical to maintaining innovation and ensuring competitiveness in a global economy. As president and CEO of an independent nonprofit institute dedicated to conducting research that improves the human condition, I share these concerns. Decreases in federal R&D funding will have a detrimental impact on many of the organizations in the Research Triangle, including universities, notfor-profit organizations and commercial entities. North Carolina currently ranks seventh overall as a recipient of federal research funding, according to data compiled by Research! America. To maintain our reputation for creativity and innovation as one of the world’s leading R&D centers, it is vital that we continue to receive federal research funding. But as a social scientist, I also take a broader view toward funding for federal research and closely associated program evaluation studies. I believe that applied research – including program evaluation and ongoing population-based surveillance – provides policy makers with the information they need to make smart choices, not just tough ones. At RTI, we have spent more than 50 years conducting economic and social policy research to help inform and positively benefit public policy. We have conducted numerous studies to help federal officials determine which programs work, which are most cost-effective and which are ineffective. We have evaluated the cost and effectiveness of federal programs ranging from food stamps and K-12 education, to Medicare and Medicaid and public health programs. In just the past 18 months, RTI researchers completed studies that identified ways to improve dental care among rural populations in Alaska and reduce health care costs by allowing nurse anesthetists to provide care in appropriate settings. Other studies have found that the now widespread pay-forperformance programs cannot guarantee improvements in the quality or value of health care, nor do they necessarily result in net health care savings. Additionally, RTI research showed that federal government investment in prison-based drug treatment programs can help reduce overall costs across the criminal justice system, because prisoners who receive treatment are less likely to commit future crimes than those who don’t. So when we consider the need to reduce federal spending, it’s important to have critical information generated by studies like these. 
Rather than facing the binary logic of making or not making tough choices – cutting federally funded programs or not – applied research allows us to decide which programs work and which do not and to allocate federal funding for maximum positive impact on the lives of people across the country. There is a saying that it’s easy to be hard, but it’s hard to be smart. As our elected officials struggle with difficult choices regarding the federal budget, I hope that they choose to continue investing in the applied research that provides the information to enhance their decision-making. All scientific innovation has been driven by practical questions of applied research – pure science is useless and expensive – gendered language not endorsed Mulder, writer for Science and You, 2K [Henry, Science and You, “Pure Science and Applied Science,” http://www.scienceandyou.org/articles/ess_09.shtml, accessed 7/13/14, TYBG] This brings us to an interesting chicken-and-egg question. Are scientific insights in essence the product of invention or to what degree is the inventor a scientist. Is there a fundamental difference between pure and applied science? The current wisdom suggests that there is. In this model, the pure scientist pursues knowledge strictly for its own sake. The applied scientist uses known principles to solve practical problems1. With that we have opened up a whole can of worms or a hornets' nest, which may be a better metaphor. As usual, the controversy has to do with power and money. There is a large and powerful lobby in the scientific community that wants to maintain the present state of affairs. That's what it's about you see, affairs of State or more specifically the role of the State in providing a steady stream of public money to feed the golden calf of "research". Aside from the fact that this money ultimately comes out of your pocket and mine, there is the additional problem that the results of all this research—pure science if you will—may not be all that useful or even productive. Who says so? Terence Kealey, that's who in his controversial book The Economic Laws of Scientific Research. In this book which is neatly summarized here, Professor Kealey makes the case that pure research, funded and inspired by practical needs is historically much more fruitful than the State-funded variety. One of the points raised by Kealy is the role of technology, what we would call engineering or applied science, in leading to pure research instead of the other way around. He argues that in the past both in the United States and Britain the whole scientific enterprise was inspired by hobbyists who neither sought nor received Government funding. According to Kealey, "The loss of the hobby scientists has been unfortunate because the hobby scientists tended to be spectacularly good." He continues, "They were good because they tended to do original science. Professional scientists tend to play it safe; they need to succeed, which tempts them into doing experiments that are certain to produce results. Similarly, grant-giving bodies which are accountable to government try only to give money for experiments that are likely to work...They represent the development of established science rather than the creation of the new. But the hobby scientist is unaccountable. He can follow the will-o'the-wisp...Neither (Henry) Cavendish nor (Charles) Darwin would have survived in a modern university any better than did (1978 Nobelist Peter) Mitchell, yet they were scientific giants..." 
Even Albert Einstein was essentially a "hobby" scientist. When he arrived at his insights on relativity was he struggling with what for him were real practical problems? In a sense he was trying to invent something—a theory that would account for the anomalies in the Newtonian physics of his day. It is said that his revelations on relativity occurred because he had a dream. In this dream he was riding a beam of light. With the kind of logic only an Einstein could have mustered, this led him directly to the conclusion that the speed of light was a constant. Eureka? How did man discover that a polished lens could magnify things? Perhaps someone noticed that a large drop of water appeared to enlarge what was directly underneath. Is this how discoveries happen? Pay attention now! It gets tricky. In our hypothetical(?) case, what was the significant factor? Was it the fact that someone saw objects magnified by a water drop or was it the next step? For discovery to happen someone has to make the connection, the connection between the observed phenomenon and its logical implications. One implication would be to arrive at some way to apply the insight, i.e., make a lens. However, without some practical use—magnifying things—why take the process any further? Without a need, real or perceived, discovery may not take place. Equally, without someone's having noticed the effect in the first place there would not have been a phenomenon to exploit. With this in mind let us visit some high points of discovery. We will see that man's curiosity is often influenced by practical considerations. Copernicus, Galileo, and Phlogiston Discovery occurs because someone asked a question. As likely as not that question is in response to a practical need. For example, shaping a piece of glass to duplicate the effect of that drop of water I mentioned earlier would probably not have occurred without the practical application of enhanced vision. In another example, what was Copernicus' mission? He wasn't looking to create a new cosmology. No, his goal was to simplify, if only on paper, the awkward description of the motion of the planets devised by Ptolemy. Ptolemy's system, as you may know, required the addition of ever more epicycles to the motion of the planets in order to "save the appearances". Copernicus developed the convenient "fiction" of having the Earth and the planets revolve around the Sun, convenient because it addressed a need. Although it wasn't a perfect solution it reduced the number of "orbits" considerably. Was it a whim that led Galileo to point his telescope to the heavens and discover the moons of Jupiter and the phases of Venus? We know that he was aware of Copernicus' "fiction" and that probably caused him to check it out. Most science starts with simple questions such as how, why or even why not. Even bad questions can lead to a positive outcome. I am reminded of the birth of modern chemistry. As in the case of much of science, there were many detours along the way. Very likely man's early attempts at metallurgy led him to ask what made it all happen. He came up with some very strange answers. Creating metals from the earth, it was thought, involved a process of birth. In fact there is evidence to suggest actual embryos were tossed into the fire along with the ore to facilitate the process. The notion arose that gold, being the noblest of all metals, was produced from lesser metals through some form of death and rebirth deep within the earth.
This the alchemists tried to exploit. As many attempts were made to invent just the right combination of spiritual and secular events to produce the magic results, a number of processes were developed. In time these processes turned out to be useful as the experience of working with Mercury, Sulphur and other materials was applied to the issues of analysis in Chemistry. Boyle, Priestley and Lavoisier For nearly 2000 years, scholars believed that everything was made up of combinations of just four elements: air, earth, fire and water. This belief predated Aristotle and survived until the 17th century. In Chemistry, one of the first to challenge this notion was Robert Boyle. In his book, The Skeptical Chymist, he defined for the first time the modern idea of an element as a substance which cannot be broken down into simpler ones. He likely borrowed this concept from René Descartes, which is a bit ironic. Ironic, because his famous law on gases was the result of his experiments to prove that a vacuum can exist, something Descartes himself had roundly rejected. The law, which states "that at a constant temperature the volume of a gas varies inversely with its pressure", was in fact tossed in as an afterthought to the main thrust of his research. Anyway let's get back to Boyle's 'The Skeptical Chymist'. We now have a universe which chemically is composed of many different elements. This was ultimately put into a neat periodic table by Dmitri Mendeleev in 1869. But I digress. Let us talk instead about beer and a clergyman who lived next to a brewery. His name was Joseph Priestley and in 1772 he invented soda water. Beer, unless it's flat, contains a lot of carbon dioxide. In fact the brewing process tends to produce too much of the stuff. Having access to an abundance, Priestley hit on the idea of saturating ordinary water with this gas. Being the godly man he was, he meant to create a cure for scurvy. His concoction, carbonated water, never actually cured scurvy but it proved to be a delightful drink. Aside from the fact that the world owes him a debt of gratitude for this little gem, Priestley's experiments with gases led him to a much more significant discovery. In 1774 the Reverend discovered oxygen. Trouble is he didn't really believe he had. Being a staunch believer in the phlogiston theory, he was convinced his new gas was dephlogisticated air. As soon as something was burned in its presence, the air would have its phlogiston restored. It would no longer be "de-phlogisticated" and all would be well. We of course know the real story. The former was oxygen and the latter carbon dioxide. In a little twist of history, unknown to the reverend, a German-born, Swedish scientist, Carl Wilhelm Scheele, is thought to have isolated oxygen two years before Priestley did. Scheele called it combustion-supporting gas and let it go at that. For the next step in our adventure we turn to a French scientist, Antoine Lavoisier. It wasn't until the work of Lavoisier in 1770 that the true nature of oxygen and its role in combustion was understood. After conducting many experiments he learned that when oxygen was consumed during combustion, it increased the amount of fixed air, which was none other than carbon dioxide, the stuff Priestley used to make his soda pop. This insight was the final piece in the puzzle. It was now known that when things burned in the presence of oxygen, the oxygen was diminished and the burned substance also lost mass.
The lost mass of both was converted to carbon dioxide and the equation was left intact. Some of these things were known and the rest discovered through experiment until the theory fit the facts. Antoine Lavoisier is often called the father of modern chemistry. This is not so much because he correctly identified oxygen and its role, but more because of his quantitative approach to the subject of analysis. Rather than argue about what the nature of this or that substance might be, he realized that by measuring the quantity of things he could draw valid conclusions. Because of his methods he discovered many of the fundamental concepts still used in chemistry today. It is most unfortunate that at age 51 he became the victim of the French Revolution and was executed on the guillotine. As we trace the events that led to the development of modern science we see that the quest for knowledge was driven by practical needs. Archimedes was trying to expose a fraud. Ptolemy was trying to create a chart of the movement of the stars and planets. Copernicus was trying to simplify these charts. Galileo wanted to confirm his suspicions that Copernicus' charts were probably the way things really were. The alchemists were trying to manufacture gold. Boyle's law grew out of his desire to prove that a vacuum could exist and that the ether did not. Priestley was trying to find a cure for scurvy. Lavoisier was trying to prove that phlogiston did not exist. Most of these people were engaged in "pure" science. Were they driven simply by idle curiosity? Doesn't look like it. Applied science is key to technological advancement but government funding is key Miller, Chair of the House of Commons Science and Technology Select Committee, 12 [Andrew, 7/24/12, The Economist, “Research funding: Should public money finance applied research?,” http://www.economist.com/debate/days/view/867, accessed 7/13/14, TYBG] I have concerns about the funding gap that already exists between pure and applied science. In my view this would be exacerbated if the public purse suddenly removed support for applied research. Would the private sector rush to fill the gap? I think not. I worry that artificial labels might generate greater divisions between pure and applied science and that the gap would steadily grow, resulting in a disconnect between innovative new science and market pull. There is a need for business to react to new fundamental research discoveries. There is also a need for academia to understand where there is an economic imperative to solve a fundamental problem. If an artificial gap is created between pure and applied science, what would bridge the gap to allow this kind of efficient co-ordination of research effort? Publicly funded research should recognise and address problems that contribute to the public good, whether those issues are based in fundamental or in applied science. Private funders of research will rarely be persuaded to put the necessary money into the long-term, lowreturn applied research that was crucial to the early development of space technology or future energy potential such as advanced battery technology. There needs to be clever, consistent and insightful provision of public funds to ensure that vital technologies are progressed and developed in addition to those from which private funders can see a quick return. 
Science is richer when funding is fluid—that is, when public money occasionally helps to fund research very close to the market and when private money occasionally is drawn into research that has no immediate applied use. Science benefits when artificial labels do not get in the way of what a scientist can and cannot investigate. Applied Research 1st Applied research is good and is a pre-condition to basic research – it’s key to test the legitimacy of basic research – that turns the case Herrmann et. Al., Ph.D, Indiana State University, 98 [Douglas; Douglas Raybeck, Ph.D. Hamilton College; Michael Gruneberg, Ph.D. University of Wales; Robert Grant, Ph.D.; Carol Yoder, Ph.D. Indiana State University, “The Importance of Applied Research to Demonstrating the Utility of Basic Findings and Theories: Commentary on Buschke, Sliwinski, and Luddy,” Cognitive Technology, Vol: 3, Issue 2, TYBG] Applied research has been said to be crucial to demonstrating the utility of basic findings and theories. If applied research fails to demonstrate that basic findings and theories can be applied, such a failure would suggest that the basic findings and theories are not valid and/or that the application efforts are invalid. Presently, the basic research community pays little or no attention to applied research failures. We propose that the credibility that basic research attributes to well designed studies which do not support a theory should be extended to properly executed applications. Like basic research, applied research may fail to test basic findings and theories adequately due to weaknesses in the research. Nevertheless, there are criteria that permit evaluation of the adequacy of applied research. Our point is that basic researchers have failed to realize that the vast field of applied research provides an abundance of data that could help accelerate the development and refinement of basic research and theories. Buschke, Sliwinski, and Luddy (1998) argue that the failure to support a prediction derived from a basic theory cannot be used to reject the theory and accept the null hypothesis because the null hypothesis cannot be proved. However, sometimes null findings are credible. For example, a null finding obtained repeatedly under different conditions by different investigators contrary to theoretically-based predictions raise serious challenge to any theory. For basic research to refuse to consider null findings slows the advance of science and makes irresponsible use of public funds for research. Basic Research Bad Basic research fails – costs are too high and the process is risky – that kills investment Remedios, Member of Council, IUPAB, and Director, Institute for Biomedical Research, The University of Sydney, 6 [Cris dos Remedios, The International Union for Pure and Applied Biophysics, “The Value Of Fundamental Research,” http://iupab.org/publications/value-of-fundamental-research/, accessed 7/11/14, TYBG] HIGH COSTS The actual costs of basic research are high. Salaries comprise the major cost of research projects but by community standards, scientists’ salaries are not high. Part of the problem is that basic research often requires a critical mass of scientists and this generates a multiplier effect (see below). Modern research requires increasingly sophisticated equipment and it is well known that the cost of scientific equipment (e.g. a glassware drying oven) can be inflated compared to equivalent commercial equipment (e.g. a food-warming oven). 
INHERENT RISK Fundamental research is inherently a high-risk process and yet there is built into the peer-review system of scientific evaluation an oddly contradictory philosophy. Research councils which review scientific projects feel that it is their responsibility to minimise the risk to limited funds. At the same time, many scientists realise that if occasional failures do not eventuate, then the funding agencies might justifiably be criticised for having been too cautious. Thus the question becomes, how much risk is acceptable? Perhaps only one experiment out of seven will succeed but that is not to say the other six experiments are of no value or that the time was wasted. Indeed, failed experiments often form the basis of important new research programmes. On the other hand, this philosophy of inherent risk can be a major impediment to investment by private enterprise, which normally expects a worthwhile return on investment within a short time-scale. We are not stuck with the minimal risk model. In the US, the DARPA grants deliberately seek to provide funding for wildly radical ideas (e.g. they fund a project which uses bees to sniff out land mines). Here researchers must provide clear milestones which must be met before funding continues. Pure Science Bad Pure science is ethically corrupt and distorted – advocating pure research represents a shirking of scientific responsibility Douglas, Department of Philosophy, University of Waterloo, 14 [Heather, 4/8/14, Department of Philosophy, University of Waterloo, “Pure Science and the Problem of Progress,” http://www.academia.edu/4547054/Pure_Science_and_the_Problem_of_Progress, accessed 7/12/14, TYBG] John Dewey, who also sought to make philosophy more scientific, disagreed with this characterization of science. Rather than shielding pure science from responsibility for the problematic impacts of applied science, Dewey argued instead that the artificial distinction between pure and applied science was in fact the reason why so many harmful outcomes were being seen. He saw the undesirable "materialism and dominance of commercialism of modern life" as due not to "undue devotion to physical science," but rather to the artificial divisions such as the "separation between pure and applied science." (Dewey 1927.1954, pp. 173-174) The separation brings with it "honor of what is 'pure' and contempt for what is 'applied'," which leads to "a science which is remote and technical, communicable only to specialists, and a conduct of human affairs which is haphazard, biased, unfair." (ibid., p. 174) Because pure science is pursued without regard to the import for society, it cannot effectively inform the needed discussions over public affairs, leading to poor social decisions. Dewey was emphatic that this way to pursue science is problematic forboth society and science: "Science is converted into knowledge in its honorable and emphatic sense only in application. Otherwise it is truncated, blind, distorted." (ibid., emphasis his) The implications for practicing scientists are stark: "The glorification of 'pure' science... is a rationalization of an escape; it marks a construction of an asylum of refuge, a shirking of responsibility." (ibid., p. 175) Scientists should not, could not, hide behind claims that they were only doing pure science and so the impact of science on society was not part of their burden. Indeed, it was this kind of thinking that had led to there being such harmful impacts of science as were found in World War I. 
Pure science fails – it’s too costly, risky, and unpredictable – their evidence is hype Sarewitz, co-director of the Consortium for Science and Policy and Outcomes, 13 [Daniel, 5/22/13, Nature, “Pure hype of pure research helps no one,” http://www.nature.com/news/pure-hype-of-pure-research-helps-no-one-1.13031?WT.ec_id=NEWS20130528, accessed 7/12/14, TYBG] The core argument of the scientific community and its leaders has always been that they are perfectly capable of ensuring accountability themselves, thank you. After all, the outcomes of basic research are unpredictable and therefore politicians need only pour in the money and stand aside as the scientists make the world a better place. This simplistic and self-serving myth was trotted out yet again in an 8 May letter from former NSF officials to Smith, which explained that the “history of science and technology has shown that truly basic research often yields breakthroughs … but that it is impossible to predict which projects (and which fields) will do that … Over the years, federal funding of basic research, using peer review evaluation, has led to vast improvements in health care, national security, and economic development.” Smith seems to believe all of this. Conservative politicians are typically loyal supporters of basic science because they recognize that it is one domain that does not provide sufficient incentives for private sector investment, so government must play a part. What Smith is doing is reminding the scientific community about Congress’s authority to establish broad research spending priorities in the context of the ongoing budget gridlock, and he is reminding the NSF about its accountability to his committee: “The draft bill maintains the current peer review process and improves on it by adding a layer of accountability.” So the problem here isn’t that Smith doesn’t understand what the scientific community is saying, it’s that after more than 60 years of hype about unpredictability and the inevitable benefits of pure science, he and other conservatives seem to understand and believe it all too deeply. Thus it’s no surprise that when budgets are tight and progress towards achieving many goals — from curing cancer to revitalizing the nation’s manufacturing base — is a lot slower than promised, a new conservative chairman would seek to make his mark by trying to make things run better. The grave danger here is not that he is going to interfere with peer review but that he will discover that the real world of science — in which progress is often halting and incremental, a lot of research isn’t particularly innovative or valuable, and institutional arrangements are often more important than peer review or serendipity for determining the social value of science — doesn’t match very well to the world on which he has been sold. As of now, Smith’s bill has not been formally introduced for congressional consideration, and perhaps it is best understood as a shot across the science community’s bow. But with years of budgetary stress ahead, the science community needs to be much more assertive in articulating a vision for science that doesn’t depend on continually rising budgets and isn’t defended by resort to some mythical ideal of pure research. Hype is fine until people start to believe in it. 
Pure Focus Bad Focusing solely on pure science blurs scientific progress and is divorced from reality – that turns the case Douglas, Department of Philosophy, University of Waterloo, 14 [Heather, 4/8/14, Department of Philosophy, University of Waterloo, "Pure Science and the Problem of Progress," http://www.academia.edu/4547054/Pure_Science_and_the_Problem_of_Progress, accessed 7/12/14, TYBG] Rather than pursue these technical arguments, I want to try a different tack, to try to get at the underlying sense of progress that seems undeniable for science. This sense, that across the broad sweep of history, that across deep ontological and conceptual change, science has been getting better at something, undergirds much public appraisal of science, even as the public might contest particular scientific claims. I think that this problem of characterizing the progress of science arises for Kuhn, and indeed for philosophers of science generally, primarily because Kuhn (and the current philosophical community) is focused on pure science, quite divorced from applied science. It is an interest in theory, in the theoretical development of science, and theory alone, that generates the puzzle of progress. As such, it is somewhat an artificial problem. If we relinquish the idea that science is only or primarily about theory, the problem of progress disappears. If instead we see science as both a theoretical and a practical activity, progress becomes easier to track and assess. Science Bad Bias Scientific objectivity is a joke even to scientists – societal pressures and theoretical prejudice contribute to data falsification even among the likes of Newton Sheldrake, Ph.D. in biochemistry from the University of Cambridge, 1995 (Rupert, Ph.D. in biochemistry from U of Cambridge, Research Fellow of the Royal Society, [www.rense.com/general93/crit.htm] AD: 6-25-11, jam) The illusion of objectivity is most powerful when its victims believe themselves to be free of it. Along with a laudable sense of honor, a tendency to self-righteousness has been present in experimental science right from the outset. With Galileo, the desire to make his ideas prevail apparently led him to report experiments that could not have been performed exactly as described. Thus an ambiguous attitude toward data was present from the very beginning of Western experimental science. On the one hand, experimental data was upheld as the ultimate arbiter of truth; on the other hand, fact was subordinated to theory when necessary and even, if it didn't fit, distorted. A similar vice afflicted other giants in the history of science, not least Sir Isaac Newton. He overwhelmed his critics with an exactness of results that left no room for dispute. His biographer Richard Westfall has documented how he adjusted his calculations on the velocity of sound and the precession of the equinoxes, and altered the correlation of a variable in his theory of gravitation to give a seeming accuracy of better than 1 part in 1,000. Not the least part of the Principia's persuasiveness was its deliberate pretense to a degree of precision quite beyond its legitimate claim. If the Principia established the quantitative pattern of modern science, it equally suggested a less sublime truth -- that no one can manipulate the fudge factor so effectively as the master mathematician himself. Probably the commonest kind of deception -- and of self-deception -- depends on the selective use of data.
For example, from 1910 to 1913, the American physicist Robert Millikan was engaged in a dispute with an Austrian rival, Felix Ehrenhaft, about the charge on the electron. Both Millikan's and Ehrenhaft's early data were rather variable. They depended on introducing oil drops into an electric field and measuring the strength of the field needed to keep them suspended. Ehrenhaft claimed that the data showed the existence of subelectrons with fractions of a unit electron charge. Millikan maintained there was a single charge. To rebut his rival, in 1913 he published a paper full of new, precise results supporting his own view, emphasizing in italics that "this is not a selected group of drops but represents all of the drops experimented upon during sixty consecutive days." A historian of science has recently examined Millikan's laboratory notebooks, which reveal a very different picture. The raw data were individually annotated with comments such as "very low, something wrong" and "beauty, publish this." The 58 observations published in his article were selected from 140. Ehrenhaft meanwhile went on publishing all his observations, which continued to show a far greater variability than Millikan's selected data. Ehrenhaft was disregarded while Millikan won the Nobel Prize. Millikan was no doubt convinced that he was right, and did not want his theoretical convictions to be disturbed by messy data. Probably the same was true of Gregor Mendel, the results of whose famous pea-breeding experiments were, according to modern statistical analysis, too good to be true. The tendency to publish only the "best" results and to tidy up data is certainly not confined to famous figures in the history of science. In most if not all areas of science, good results are likely to advance the career of the person who produces them. And in a highly competitive and hierarchical professional environment, various forms of improving the results are widely practiced, if only by omitting unfavorable data. This practice is indeed normal. Apart from anything else, journals are disinclined to publish the results of problematical or negative experiments. Little professional credit results from unclear data or seemingly meaningless results. I know of no formal study on the percentage of research data that are actually published. In the fields I know best from personal experience -- biochemistry, developmental biology, plant physiology, and agriculture -- I estimate that only about 5-20 percent of the empirical data are selected for publication. I have asked colleagues in other fields of inquiry, such as experimental psychology, chemistry, radioastronomy, and medicine, and come up with similar results. When the great majority of the data are discarded in private processes of selection -- often 90 percent or more -- there is obviously plenty of scope for personal bias and theoretical prejudice to operate both consciously and unconsciously. The selective publication of data creates a context in which deception and self-deception become a matter of degree. Moreover, scientists usually regard their research notebooks and data files as private, and tend to resist any attempts by critics and rivals to go through them. True, it is usually assumed that a researcher will, within reason, make his or her data available to any colleague who might express a desire to see them. But in my own experience, this ideal is far from the reality. On the several occasions I have asked researchers if I may see their raw data, I have been refused.
Maybe this says more about me than about prevailing scientific norms. But one of the very few systematic studies of this cherished principle of openness gives little ground for confidence. The procedure was simple. The person conducting it, a psychologist at Iowa State University, wrote to thirty-seven authors of papers published in psychology journals requesting the raw data on which the papers were based. Five did not reply. Twenty-one claimed that their data had unfortunately been misplaced or inadvertently destroyed. Two offered access only on very restrictive conditions. Only nine sent their raw data; and when their studies were analyzed, more than half had gross errors in the statistics alone. Objectivity assumes science happens in a vacuum – social, economic, and political bias inevitably seeps in Sheldrake, Ph.D. in biochemistry from the University of Cambridge, 1995 (Rupert, Ph.D. in biochemistry from U of Cambridge, Research Fellow of the Royal Society, [www.rense.com/general93/crit.htm] AD: 6-25-11, jam) Many non-scientists are awed by the power and seeming certainty of scientific knowledge. So are most students of science. Textbooks are full of apparently hard facts and quantitative data. Science seems supremely objective. Moreover, a belief in the objectivity of science is a matter of faith for many modern people. It is fundamental to the worldview of materialists, rationalists, secular humanists, and all others who uphold the superiority of science over religion, traditional wisdom, and the arts. This image of science is rarely discussed explicitly by scientists themselves. It tends to be absorbed implicitly and taken for granted. Few scientists show much interest in the philosophy, history, or sociology of science, and there is little room for these subjects in the crowded curriculum of science courses. Most simply assume that by means of "the scientific method," theories can be tested objectively by experiment in a way that is uncontaminated by the scientists' own hopes, ideas, and beliefs. Scientists like to think of themselves as engaged in a bold and fearless search for truth. Such a view now excites much cynicism. But I think it is important to recognize the nobility of this ideal. Insofar as the scientific endeavor is illuminated by this heroic spirit, there is much to commend it. Nevertheless, in reality most scientists are now the servants of military and commercial interests. Almost all are pursuing careers within institutions and professional organizations. The fear of career setbacks, rejection of papers by learned journals, loss of funding, and the ultimate sanction of dismissal are powerful disincentives to venture too far from current orthodoxy, at least in public. Many do not feel secure enough to voice their real opinions until they have retired, or won a Nobel Prize, or both. Popular doubts about the objectivity of scientists are widely shared, for more sophisticated reasons, by philosophers, historians, and sociologists of science. Scientists are part of larger social, economic, and political systems; they constitute professional groups with their own initiation procedures, peer pressures, power structures, and systems of rewards. They generally work in the context of established paradigms or models of reality. And even within the limits set by the prevailing scientific belief system, they do not seek after pure facts for their own sake: they make guesses or hypotheses about the way things are, and then test them by experiment. 
Usually these experiments are motivated by a desire to support a favorite hypothesis, or to refute a rival one. What people do research on, and even what they find, is influenced by their conscious and unconscious expectations. In addition, feminist critics detect a strong and often unconscious male bias in the theory and practice of science. Many practicing scientists, like doctors, psychologists, anthropologists, sociologists, historians, and academics in general, are well aware that detached objectivity is more an ideal than a reflection of actual practice. In private, most are prepared to acknowledge that some of their colleagues, if not they themselves, are influenced in their researches by personal ambition, preconceptions, prejudices, and other sources of bias. The tendency to find what is being looked for is deep-seated. It has a basis in the very nature of attention. The ability to focus the senses in accordance with intentions is a fundamental aspect of animal nature. Finding what is looked for is an essential feature of everyday human life. Most people are well aware that other people's attitudes affect the way they interact with the world around them. We are not surprised by such biases in politicians, nor by the differences in the way people see things within different cultures. We are not surprised to find many everyday examples of self-deception in members of our families and among friends and colleagues. But the "scientific method" is generally supposed to rise above cultural and personal biases, dealing only in the currency of objective facts and universal principles. Biases in science are easiest to recognize when they reflect political prejudices, because people of opposing political views have a strong motive to dispute the claims of their opponents. For example, conservatives like to find a biological basis for the superiority of dominant classes and races, explaining their differences as largely innate. By contrast, liberals and socialists prefer to see environmental influences as predominant, explaining existing inequalities in terms of social and economic systems. In the nineteenth century, this "nature-nurture" debate focused on measurements of brain size; in the twentieth, on measurements of IQ. Eminent scientists who were convinced of the innate superiority of men over women and of whites over other races, were able to find what they wanted to find. Paul Broca, for example, the anatomist after whom the speech area of the brain is named, concluded that: "In general, the brain is larger in mature adults than in the elderly, in men than in women, in eminent men than in men of mediocre talent, in superior races than in inferior races."3 He had to overcome many factual obstacles to maintain this belief. For example, five eminent professors at Gottingen gave their consent to have their brains weighed after they died; when these cerebral weights turned out to be embarrassingly close to average, Broca concluded that the professors hadn't been so eminent after all! Critics of a more egalitarian political persuasion have been able to show that generalizations based on different brain sizes or IQ scores rested on the systematic distortion and selection of data. Sometimes the data themselves "were actually fraudulent, as in the case of some of the publications of Sir Cyril Burt, a leading defender of the view that intelligence is largely innate. 
In his book The Mismeasure of Man, Stephen Jay Gould traces the sorry history of these purportedly objective studies of human intelligence, showing how persistently prejudice has been clothed in scientific garb. "If -- as I believe I have shown -- quantitative data are as subject to cultural constraint as any other aspect of science, then they have no special claim on final truth."4 Not Objective Thought and reality are entirely disconnected – no basis for objectivity, the "knowledge" of science is all defined by the discourse within science – it's historically created Hekman, Professor of Political Science at the University of Texas, 1983 (Susan, Professor of Political Science @ U Texas, The Western Political Quarterly, Vol. 36, No. 1, Mar., p. 98-115, JSTOR, http://www.jstor.org/stable/447847, JMB, accessed 6-26-11) Althusser's theory of the production of scientific concepts, like that of his deconstruction of the concept of "man," is rooted in his understanding of Marx's theoretical approach. His theory can be reduced to two theses which he derives from Marx's analysis in Capital: first, the radical separation of the realms of thought and reality, and second, the analogy between the production of scientific concepts and the production of objects in the material world. The first thesis stems from the position that science has no object outside its own activity but, rather, produces its own norms and the criterion of its own existence. Althusser opposes this theory to what he labels the "empiricist" conception of knowledge, a position roughly equivalent to what was referred to above under the heading of positivism. On Althusser's definition the empiricist sees knowledge as the extraction of the essence from the real, concrete object. This extraction, which retains the reality of the object sought, is accomplished through the use of the scientist's "abstract" concepts. In opposition to this conception of knowledge Althusser proposes a radical separation of the realms of thought and reality that entails a rejection of the empiricist notion that knowledge is a part of the real world. Marx's analysis in Capital, Althusser claims, is informed by this position. For Marx the production process of real, material objects takes place entirely in the real world, while the production of thought objects takes place entirely in the realm of thought (1970: 41). The goal of Marx's analysis, then, is not to understand the relationship between the real and the thought, but, rather, to analyze the process of production of thought objects (1970: 54). As Althusser understands it, then, what is accomplished in the acquisition of scientific knowledge is not, as in the empiricist account, the appropriation of the real world by the world of thought. This is the case because "the sphere of the real is separate in all its aspects from the sphere of thought" (1970: 87). The goal of Althusser's theory, rather, is to present an analysis of how the scientist produces and manipulates concepts within the realm of thought. His point of departure is the assertion that, in the separate worlds of the real and the theoretical, an analogous form of production occurs. Like production in the material world, the production of scientific concepts begins with raw materials. But these raw materials are not, as the empiricists claim, "objective" or "given" facts about the real world. They are, rather, the body of concepts operative in the scientific community at a particular time.
This body of concepts will necessarily differ from one historical period to another and with the developmental level of a particular science. But they are at any given point a product of the norms and values of scientific discourse and the particular problematic motivating that discourse. This understanding of the process of the production of scientific concepts provides Althusser with answers to a number of questions central to the definition of knowledge in the social sciences. One of these questions is the definition of what constitutes a scientific concept. Scientific concepts, on Althusser's account, have no connection with the real world. They are formulated with only one end in view: the production of knowledge. Another question concerns the means of guaranteeing the scientificity of the knowledge produced by the scientific community. Althusser's answer to this is very straightforward: the guarantee of scientificity is given by the operating norms and rules wholly internal to scientific discourse (1970: 67). Two important results follow from this position. First, Althusser clearly rejects the empiricist notion that the scientificity of results is guaranteed through reference to the "facts." Since there are no "facts" in the sense of real-world data in Althusser's theory, there can be no "checking" of the facts to guarantee the accuracy of the results. In short, the whole question of the "objectivity" of scientific facts is dissolved. The second result is equally significant. Since Althusser claims that the criterion of scientificity is given by the norms of scientific discourse and that these norms change with the development of the particular science, it follows that those things that are recognized as "knowledges" are historically conditioned. Since the norms of the scientific community are historically produced, there is no general criterion of scientificity, but only the particular criteria developed by particular sciences (1970: 62-7). It can be concluded, then, that despite the differences between the two theories, Althusser, like Gadamer, rejects the Enlightenment conception of scientific methodology not by claiming, as the humanists do, that it is inapplicable to the social sciences, but, rather, by attacking its central epistemological tenets. He rejects the possibility or even desirability of "objective knowledge" provided by this model not by claiming that the social sciences are inherently subjective but by denying any connection between the real and theoretical worlds. He also establishes the unavoidable historicity of knowledge by defining the production of scientific discourse in historical terms. In sum, he grounds his conception of knowledge entirely within the confines of scientific discourse and grounds that discourse firmly in history. AT: Progress Establishing science as the only way of knowing reverses progress Watson, Graduate in Philosophy from Macquarie University 2003 (Brett, Graduate Diploma in Philosophy, Macquarie University, Jun 16, [www.nutters.org/docs/feyerabend] AD: 6-26-11, jam) Although Feyerabend does not cite any particular source when he mentions the application of Darwinism to "the battle of ideas", I think the relevance of the issue is fairly obvious. If the competitive forces of evolution can give us the very brains by which we are able to conduct this discussion, then competitive forces between opposing views must surely work to the advantage of those views in terms of weeding out weak arguments, fallacies, anomalies, and so on.
Mill's suggestion that we should manufacture dissent is the intellectual equivalent of a fighter seeking a competent sparring partner. We typically acknowledge "competition" as an agent of improvement in biology, economics, and sports; why not, then, in the field of science? A defender of science might object, at this point, that we do have such competition in the field of science. There are, for example, competing theories with regards to the mechanism of evolution; and scientific revolutions could not occur at all unless there were at least two competing theories at the time. These points are true, but the diversity exists despite the prevailing culture, not because of it. Diversity is not encouraged; the existence of competing theories is viewed as a problem, since at least one of them must be false. Feyerabend addresses this when he compares Mill's approach to that of Karl Popper. Finally, Popper's standards eliminate competitors once and for all: theories that are either not falsifiable or falsifiable and falsified have no place in science. Popper's criteria are clear, unambiguous, precisely formulated; Mill's criteria are not. This would be an advantage if science itself were clear, unambiguous, and precisely formulated. Fortunately, it is not. — Paul Feyerabend, How to Defend Society Against Science Arguments for scientific progress conflate results with method Watson, Graduate in Philosophy from Macquarie University 2003 (Brett, Graduate Diploma in Philosophy, Macquarie University, Jun 16, [www.nutters.org/docs/feyerabend] AD: 6-26-11, jam) Our first straw-man believes we should ignore Feyerabend because science works. If the progress of science weren't self-evident enough, then the accompanying march of technology must surely be proof. When did any religion offer us such technological progress? Surely, therefore, one cannot construct a meaningful case against science as knowledge; the only kind of argument in which one can reasonably engage is that of why science does work. This straw-man is largely playing definitional games; equating "science" with "progress". Under this definition, something is "science" to the extent that it effects "progress", for values of that term principally relating to technology. But note that this formulates science in terms of its results rather than its methods, and Feyerabend's main thrust is against method, not against results. Thus, this argument would appear to be a non sequitur; it fails to address Feyerabend's actual point. Even if we were to grant the very generous assumption that the results in question can be best obtained by a particular "scientific method", the selection of results is value-laden. On what rational grounds could we say that the person who prefers "spiritual progress" over "technological progress" is wrong? To this, our straw-man may retort that technological progress is objectively measurable, whereas anything that might be called "spiritual progress" (whatever it is) is subjective at best, and purely imaginary at worst. This is, of course, absolutely true, so long as we accept the metrics used in making the judgment. But the preference for one system of metrics over another is also a value judgment, and therein lies a strong indicator that science could be something against which society ought to be defended. No doubt our straw-man considers it bad for religions to impose their value-judgments on society as a whole; why should it be different for science? 
AT: Innovation Science restricts innovation and is biased towards status quo knowledge Stevenson 99 (Ian, U of Virginia School of Medicine, Journal of Scientific Exploration, Vol. 13, No. 2, pp. 266-267 JF) I believe it is more difficult now than formerly to introduce new ideas and concepts and have them accepted by scientists. I attribute this to the larger number of practicing scientists compared with former times. This larger number of scientists increases the likelihood that one or more of them will make important discoveries. Unfortunately, it also has the disadvantage of presenting a larger mass of scientists resistant to change. "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it" (Planck, 1950, pp. 33–34). To this we may add that death is often not a sufficient facilitator of the acceptance of new ideas. How did science arrive at this condition? It sometimes seems that little has changed since Francis Bacon, who, surveying the world of learning in his time, remarked that "the last thing anyone would be likely to entertain is an unfamiliar thought" (1607/1964, p. 79). Scientists who think for themselves have few defenses against the thought-collective. In principle, peer review of research grant applications and articles submitted to scientific journals should boost their chances of escaping the vigilance of thwarting conservatives. That it does not do so is not news. In 1793 the Royal Society rejected Jenner's paper on vaccination (Magner, 1992); he published it privately five years later (Jenner, 1798). In the next century anonymous reviewers for the Annalen der Physik refused to publish Helmholtz's paper on the conservation of energy (Graneau & Graneau, 1993). Readers who wish experimental evidence of the imperfections of peer review can find it in Mahoney's (1977) study of the influence of personal bias on referees' judgments of the quality (and hence suitability for publication) of manuscripts submitted to a journal. Horrobin (1990) has gone so far as to stigmatize peer review as a suppressor of innovation. State Guts Solvency The state frames the debate on questions of science, which inevitably destroys objectivity Borders, Fellow at Phillips Foundation, 9 (Max, Robert Novak, Dec 4, [washingtonexaminer.com/blogs/examiner-opinion-zone/separationscience-and-state] AD: 6-27-11, jam) In America, religion flourishes. It gets no subsidies from the government except for various tax exemptions. There is religious diversity and religious dynamism. In Europe, governments have subsidized Roman Catholicism and Protestantism over the years. In these countries, religion either languishes or tends to be monolithic. But don't take my word for it. Read the work of Laurence Iannaccone, a top expert on the economics of religion. The more the government gets mixed up in religion, science – or anything for that matter – the more bias, corner-cutting and groupthink is likely to result. Climate science is starting to look like a really good example of this effect. Indeed, if you are one that thinks the $23 million Exxon-Mobil has thrown at climate change skepticism has led to "bias," consider the $79 billion since 1989 in government largess that has gone to "consensus" climate science. Wait. You don't think government-funded scientists face perverse incentives? Think again.
One of the principal players in the Climategate scandal has, himself, received over $3 million for his contributions to the IPCC's body of research. When you consider that climate skepticism has gotten 1/1000th of that from Big Oil, accusations that the oil industry has corrupted the debate start to look a little silly. "But the government's charge is only to find the truth!" they'll cry. "They don't have a stake in the outcome." (Now look in the mirror and say that three times with a straight face.) For politicians, $79 billion is an investment in the trillions in ROI they can expect from cap-and-trade revenues—not to mention all the green energy special interest groups that will jockey to fill their campaign coffers. I know, I know. Many bureaucrats are honest folks. But the idea that government scientists and their funders are immune to incentives because they get our tax dollars is, well, laughable. Of course, none of this is to argue that scientific truth doesn't stand on its own. Arguments should be judged on their merits and on accurate observable data, not whether they were funded by oil money or Barack Obama's federal credit card. So here's a radical idea: how about the separation of Science and State modeled after the 1st Amendment? I can hear the outcries: "Heavens! What will be the fate of science if government funding dries up? It will disappear! We won't get pure research!" Again, there are plenty of analogs in American religion. But more importantly, no one ever stops to ask what kinds of science never emerge because central bureaucrats decide to pick and choose what's important and what's not--using our scarce resources to pick those winners and losers. With a decentralized system of science funded via private patronage and university-based philanthropy, we may not get capital-T Truth to rise up and glow above the people like a beacon. But we will get more diversity and less politicization. Then we'll be more likely to get a natural coalescence of the scientific community around a view – one subject to the forces of refutation, rather than politics and activism. History proves that, when the government uses scientific findings or initiates studies, autonomy is lost and the science becomes useless – Lysenko incident Resnik 8 (David, Episteme, vol 5, p. 220-238, lido) There is historical evidence that government interference in scientific decisionmaking can stunt or retard the growth of science (Sheehan 1993). The example I will discuss in this article is the negative effect of government control of science in the former Soviet Union, where biology suffered the effects of Marxist ideology from the 1930s to the 1960s. Following the Russian revolution of 1917, the All Union Communist Party (a.k.a. the Bolsheviks) demanded that all social institutions, including science, conform to Marxist political theory. Members of the Party opposed scientific ideas they regarded as the product of Bourgeoisie thought, such as free market economics and Mendelian genetics. They also favored scientific ideas that supported the idea of re-engineering human society along Marxist lines. In the 1920s, Trofim D. Lysenko (1898–1976) developed a theory of inheritance that found favor among powerful members of the Party. Lysenko developed a process, known as vernalization, that involved soaking and chilling seeds from summer crops for winter planting. Lysenko claimed that vernalization could improve agricultural productivity, when, in fact, it could not.
Scientists and politicians accepted Lysenko’s ideas, even though he had little evidence to support his ideas, he did not keep good research records, and he manipulated the data by not reporting negative results (Sheehan 1993). In 1930, the Ukrainian Commissioner of Agriculture created a vernalization department at a genetics institute in Odessa (Sheehan 1993). Lysenko proposed a theory to explain vernalization phenomena: one can alter the development of a plant by changing its environment because plants have different needs at different stages of development. Lysenko and I. I. Prezent, a member of the Communist Party, proposed a new environmental theory of heredity that stood in sharp contrast to Mendel’s theory of inheritance. The theory found favor with other members of the Communist Party, because it implied that human behavior can be changed through environmental manipulation, making it possible to overcome greed, selfishness, and possessiveness to create a communist state. Proponents of Mendelian genetics objected to the environmental theory as unscientific and unsound, but their criticisms could not overcome the theory’s political appeal (Sheehan 1993). Lysenko soon won the support of Joseph Stalin (1878–1953), the General Secretary of the Communist Party. In 1938, Lysenko was appointed President of the Lenin Academy for Agricultural Sciences, and in 1940 he became Director of the Department of Genetics at the Soviet Academy of Science (Hossfeld and Olsson 2002). By 1948, Lysenkoism became the official view of the Communist Party, and the Soviet government began to repress Mendelian genetics. Soviet scientists who attacked Lysenkoism or endorsed Mendelianism were denounced, declared mentally ill, imprisoned, exiled, or even murdered. Soviet scientists were not allowed to teach Mendelian ideas or conduct research in Mendelian genetics until the 1960s, when the period of official repression ended (Joravsky 1986). The suppression of ideas that contradicted Marxist ideology had a devastating effect on Soviet genetics, but many other disciplines also suffered, including zoology, botany, evolutionary biology, agronomy, and economics (Sheehan 1993). Before the 1940s, some of the world’s leading geneticists, such as Theodosius Dobzhansky (1900–75), lived in the Soviet Union, but by the 1960s, genetics and many other scientific disciplines in the Soviet Union were decades behind Western science (Joravsky 1986). Lysenkoism is an extreme example of what can happen when the government restricts the autonomy of individuals, groups, and organizations; yet it still offers us some important lessons that apply to situations where science is not as politicized. The Soviet Union’s repression of views that contradicted Lysenkoism undermined the progress of science in several different ways. First, the Soviet government’s actions interfered with objectivity of science. Theories of inheritance were accepted or rejected based on political reasons, not epistemological ones. Scientists were forced to ignore the evidence against Lysenkoism and the evidence in favor of Mendelianism. Second, the actions of the government interfered with communication among scientists and the sharing of ideas. Honest, open communication is vital to scientific inquiry, criticism, and debate (Burke 1995); yet the Soviet government stifled the exchange of information concerning some topics. Scientists were rightfully afraid to criticize Lysenkoism in public or to discuss or teach Mendelian theory. 
Third, the repression of Mendelian genetics nearly extinguished creativity in many areas of science. Creativity flourishes only when scientists are free to explore new ideas, theories, and methods and to challenge existing ones (Kantorovich 1993). The Soviet government violated the freedom of many citizens, and scientists were no exception. The government dictated the areas of science and the scientific ideas that would and would not be studied. It established a rigid research program geared toward promoting Marxist ideology. The government interfered with the freedom of scientists, scientific groups, and scientific institutions. Fourth, the Soviet government’s restrictions had a widespread impact. Many different research disciplines were directly or indirectly affected by the government’s repressive policies. The plague of Lysenkoism spread throughout the research community and affected many different scientists, scientific groups, and scientific institutions. Even people working in fields of research far removed from human genetics were apprehensive about potential intimidation, harassment, or repression (Joravsky 1986). The moral of Lysenkoism is that governments should be very wary of interfering with scientific decisions. Scientists (and scientific groups and organizations) should be granted autonomy within their domains of practice and expertise. The progress of science depends on its independence from government control and authority. Info Sharing DA The US is winning the space race now, but Russia and China are main competitors - information sharing like the aff makes them competitive with the US and causes conflict Gruss, Military Analyst, 2-28-14 (Mike, February 28th, “U.S. Space Assets Face Growing Threat From Adversaries, Stratcom Chief Warns”, http://www.spacenews.com/article/military-space/39669us-space-assets-face-growing-threat-fromadversaries-stratcom-chief, accessed 7/13/14, LLM) “The U.S. still retains a strategic advantage in space as other nations are investing significant resources — including developing counterspace capabilities — to counter that advantage,” Haney testified. “These threats will continue to grow in the next decade.” Haney, who as Stratcom’s commander is responsible for space surveillance and protecting U.S. space systems from hostile actions, did not identify these nations by name. Nor did the commander of Air Force Space Command, Gen. William Shelton, when he warned in a Feb. 7 speech at the Air Force Association here that the threat to U.S. space assets is moving at a quick pace. “I’ll tell you the considered wisdom of the intelligence community has produced some really good seminal work on the space threats that are out there,” Shelton said. “And what we are finding is they were maybe a little too conservative. Things are moving much faster than we certainly would like and certainly they had predicted.” Shelton’s comments came a little more than a week after U.S. Director of National Intelligence James Clapper told the Senate Intelligence Committee that the United States will face increased threats to its national security space assets in 2014, specifically mentioning China and Russia. “Threats to U.S. space services will increase during 2014 and beyond as potential adversaries pursue disruptive and destructive counterspace capabilities,” Clapper said in written testimony. “Chinese and Russian military leaders understand the unique information advantages afforded by space systems and are developing capabilities to disrupt U.S.
use of space in conflict.” Shelton has also identified China by name, telling SpaceNews in a Jan. 27 interview that defense leaders saw a mismatch between Chinese space activities and rhetoric. “If you listen to their rhetoric it is peaceful purposes, regional power, not global hegemony, but the kind of capability we see them demonstrating don’t match that same rhetoric,” he said. At the start of the Senate Armed Services Committee’s wide-ranging Feb. 27 hearing on U.S. Strategic Command and U.S. Cyber Command matters, Committee Chairman Carl Levin (D-Mich.) asked Haney to address “steps that may be needed to ensure that we can protect or reconstitute our space assets in any future conflict.” Haney did not provide many specifics but he did testify that disaggregation — the concept of distributing space capabilities among a greater number of platforms to better protect them against attack and other hazards — needs more analysis before it is accepted as a cost-cutting measure. “We are exploring options such as disaggregation as a method to achieve affordable resilience but additional analysis is necessary in this area,” he said, according to his written testimony. Air Force Space Command, which is expected to complete a series of studies on disaggregation later this year, has embraced the space architecture concept as a way to improve resiliency while cutting costs. In April, Shelton and officials from the U.S. Government Accountability Office said preliminary study results suggest disaggregation would in fact save the Air Force money. Haney also testified Feb. 27 that while space situational awareness (SSA) is one of the nation’s top priorities, there are concerns about sharing more data internationally because it may help competitors. “Sharing SSA information with other nations and commercial firms promotes safe and responsible space operations, reduces the potential for debris-making collisions, builds international confidence in U.S. space systems, fosters U.S. space leadership, and improves our own SSA through knowledge of other owner/operator satellite positional data,” Haney said in his written testimony. “For all its advantages, there is concern that SSA data sharing might aid potential adversaries.” Data sharing crashes the economy, creates Sino-US military competition, and spurs Chinese, Russian, and Iranian nanotech development Clayton, Staff Writer and international analyst for CS Monitor, 2011 (Mark, November 3rd, “US names names – China and Russia – in detailing cyberespionage”, http://www.csmonitor.com/USA/Foreign-Policy/2011/1103/US-names-names-China-and-Russia-indetailing-cyberespionage, accessed 7/13/14, LLM) “The evidence has simply become overwhelming,” says Joel Brenner, head of US counterintelligence in the Office of the Director of National Intelligence from 2006 to 2009. “It was the gorilla in the room that could no longer be ignored. Not to have named these countries would have yielded a report that would have been irrelevant.” The report also identified allies like France. But China, in particular, was fingered for massive ongoing cyberespionage against American companies in an alleged effort to gather the technological insights needed to make its economy more competitive. An official spokesman for China vehemently denied any sponsorship of such attacks. But naming China was probably inevitable, intelligence experts say: A number of countries, including Britain, Germany, and South Korea, have already been placing blame.
"One of the biggest challenges America faces is how to deal with countries like China that have been so blatant in their theft of economically important information," says Scott Borg, director of the US Cyber Consequences Unit, a nonprofit think tank. The report, Mr. Borg adds, appears to be moving the issue of cyberespionage into a more formal realm where diplomats will negotiate the issue. "This is a serious threat to our economy, yet it's so new that government officials don't know what would be an appropriate response,” he says. “This report looks like another step toward putting this on the diplomatic agenda." The Office of the National Counterintelligence Executive was formed in 2001. Its purpose is better evaluation of counterintelligence threats from foreign nations and nonstate actors, as well as integration of all counterintelligence activities. Whereas in the past, individual spies might have painstakingly collected and transferred physical copies of secret corporate documents, the ease and anonymity of downloading files from the Internet or copying thousands of documents at a time onto a portable thumb drive have made cyberespionage a crucial threat to the nation, the report says. Project 863, for instance, is a clandestine initiative launched by China in 1986, the report says, "to enhance China's economic competitiveness and narrow the science and technology gap between China and the West in areas like nanotechnology, computers and biotechnology." Cyberespionage is now a big part of Project 863, it says. Against that backdrop, the report says, American companies and specifically cybersecurity companies have "reported an onslaught of computer network intrusions" originating from Internet Protocol (IP) addresses in China. Private security firms in the US have dubbed the trend the "advanced persistent threat." Examples cited by the report include: • A February 2011 report by the cybersecurity firm McAfee found that Chinese hackers had broken into the computer networks of global oil, energy, and petrochemical companies. Their alleged goal: steal data on sensitive proprietary operations and the financing of bids and operations for oil and gas fields. (The McAfee report substantially corroborated a January 2010 Monitor report that found Chinese links to cyberespionage attacks on three global oil giants – Marathon Oil, ExxonMobil, and ConocoPhillips.) • The Chinese government sponsored hackers' intrusions into Google’s networks, VeriSign iDefense reported in January 2010. Google later claimed that its source code had been stolen, a claim China denies. • Last year, computer security firm Mandiant reported secret business information stolen from the corporate network of a Fortune 500 company while that company was in negotiations to buy a Chinese company. The stolen data may have helped the Chinese company in its negotiations. In his new book "American the Vulnerable," Mr. Brenner amplifies what is contained in the report. What became known as Operation Aurora, he writes, was a "coordinated attack on the intellectual property of several thousand companies in the United States and Europe – including Morgan Stanley, Yahoo, Symantec, Adobe, Northrop Grumman, Dow Chemical and many others. Intellectual property is the stuff that makes Google and other firms tick." A spokesman for the Chinese Embassy in Washington rebuts the report. 
"China's rapid development and prosperity are attributed to its sound national development strategy and the Chinese people's hard work as well as China's ever enhanced economic and trade cooperation with other countries that benefits all," writes Wang Baodong, spokesman for the Chinese Embassy, in an e-mail. "Willfully making unwarranted accusations against China is irresponsible, and we are against such demonization efforts as firmly as our opposition to any forms of unlawful cyberspace activities." Looking ahead a few years, the study cites a technological shift toward a "proliferation of portable devices that connect to the Internet and other networks, [which] will continue to create new opportunities for malicious actors to conduct espionage." Such devices are expected to double from 12.5 billion in 2010 to 25 billion by 2015. Another trend that could make the nation more vulnerable is the massive swing toward "cloud computing," which pools information processing and storage. While cheaper than hosting computer services in-house, data sharing will provide new means for cyberspies to do their work, the report warns. According to the report, key targets of cyberspies going forward will include information and communications technology and military technologies, especially those pertaining to naval propulsion and aerospace. But the focus will also include civilian and dual-use technologies, including clean-energy technologies such as solar, wind, and other "energy-generating technologies" – expected to be the fastest-growing investment sector in many nations. China's 863 program, the report says, is trying to acquire advanced materials and manufacturing techniques, in particular to boost its industrial competitiveness in aviation and high-speed rail. Meanwhile, Russia and Iran are focusing on advanced materials like nanotechnology. DA Links Politics – Science Republicans strongly oppose science funding McKelwee, writer and researcher of public policy, 13 (Sean, November 13th, “GOP is an anti-science party of nuts (sorry, Atlantic!)”, http://www.salon.com/2013/11/13/gop_is_an_anti_science_party_of_nuts_sorry_atlantic/, accessed 7/11/14, LLM) What is the false dichotomy between those who support science and those who oppose it? Scientists should actively war with any administration or politicians who opposes science. The Bush administration, for instance, happily filled up federal bureaucracies with partisans, and 60 scientists (including 20 Nobel laureates) wrote a letter criticizing him for “distorting and suppressing findings that contradict administration policies, stacking panels with like-minded and underqualified scientists with ties to industry, and eliminating some advisory committees altogether.” In contrast, the Obama administration has poured money into mapping the brain and political capital into fighting climate change (perhaps one reason 68 Nobel Prize-winning scientists signed a letter endorsing Obama). There is a real dichotomy between those who support science and those who don’t — and those who don’t are generally on the Republican side. One hundred and thirty-one members of the Republican caucus deny the science behind climate change. A disturbing 17 of those Republican members are on the House Committee on Science, Space and Technology. As to the Huxley quote, scientists need to treat themselves like any other lobby, and support candidates and policies that promote their profession and research. 
That means supporting Democrats, as most of them do (only 6 percent of scientists identify as Republicans). The false equivalence that blames both parties for the cuts to science funding, the lack of research and our inadequate response to global warming will only make it harder to shame the party responsible for its intransigence. The idea that Republicans are anti-science isn’t a caricature. It’s a sad fact. Science funding is extremely partisan Bailey, Science Correspondent for Reason Magazine, 11 (Ronald, October 4th, “Are Republicans or Democrats More Anti-Science?”, accessed 7/11/14, LLM) A fight has broken out in the blogosphere over whether Team Blue or Team Red is more “anti-science.” Microbiologist Alex Berezow, editor of RealClearScience, struck the first blow in the pages of USA Today. “For every anti-science Republican that exists,” he wrote, “there is at least one anti-science Democrat. Neither party has a monopoly on scientific illiteracy.” The battle of the blogs was joined when Chris Mooney, author of The Republican War on Science, denounced Berezow’s column as “classic false equivalence on political abuse of science,” over at the Climate Progress blog at the Center for American Progress. He accused Berezow of trying “to show that liberals do the same thing” by “finding a few relatively fringe things that some progressives cling to that might be labeled anti-scientific.” Berezow acknowledged that a lot of prominent Republican politicians—including would-be presidential candidates—deny biological evolution, are skeptical of the scientific consensus on man-made global warming, and oppose research using human embryonic stem cells. As evidence for Democratic anti-science intransigence, Berezow argued that progressives tend to be more anti-vaccine, anti-biotechnology when it comes to food, anti-biomedical research involving tests on animals, and anti-nuclear power. In support of his claims, Berezow cited some polling data from a 2009 survey done by the Pew Research Center for the People and the Press. In fact, that survey identified a number of partisan divides on scientific questions. On biological evolution, the survey reported that 97 percent of scientists agree that living things, including human beings, evolved over time and that 87 percent of them think that this was an entirely natural process not guided by a supreme being. Some 36 percent of Democrats believe that humans naturally evolved; 22 percent believe that evolution was guided by a supreme being; and 30 percent don’t believe humans have evolved over time. The corresponding figures for Republicans are 23 percent, 26 percent, and 39 percent, respectively. On climate change, the Pew survey reported that 84 percent of scientists believe that the recent warming is the result of human activity. Among Democrats, 64 percent responded that the Earth is getting warmer mostly due to human activity, whereas only 30 percent of Republicans thought so. That is truly a deep divide on this scientific issue. The Pew survey next asked about federal funding of human embryonic stem cell research. Democrats favored such funding by 71 percent compared to only 38 percent among Republicans. The Republican response is likely tied to two issues here: (1) the belief that embryos have the same moral status as adult people; and (2) less general support for spending taxpayer dollars on research.
With regard to the latter, the Pew survey reports that 48 percent of conservative Republicans believe that private investment in research is enough, whereas 44 percent believe government “investment” in research is essential. As Mooney might say, the partisan differences over stem cell research might be considered a “science-related policy disagreement” that should not be “confused with cases of science rejection.” Science funding and research sparks partisan battles Fidalgo, communications director for the Center for Inquiry, 13 (Paul, November 14th, “Are Republicans Unfairly Pegged as Anti-Science? (Mostly No)”, http://www.patheos.com/blogs/friendlyatheist/2013/11/14/are-republicans-unfairly-pegged-as-antiscience-mostly-no/, accessed 7/11/14, LLM) In a piece in The Atlantic, Fisher argues that Democrats are, more or less, just as prone to anti-scientific thinking as Republicans, but on different subjects, and that Republicans aren’t nearly as backward on science acceptance as their more extreme clown-characters like Paul Broun and Michele Bachmann would make them seem to be. At the outset, let me just say I agree with where Fisher is going with his argument, but its presentation is flawed. First, the problems. He attempts to make a wash of Republicans’ and Democrats’ beliefs on creationism, Yes, an embarrassing half of Republicans believe the earth is only 10,000 years old — but so do more than a third of Democrats. And a slightly higher percentage of Democrats believe God was the guiding factor in evolution than Republicans. That’s fine, but the number of Republicans who believe this, not mentioned by Fisher, is 52% — a majority. That’s a lot more than one-third. And even if the numbers were closer to each other, the fact remains that the elected representatives that Republican voters put in power are far more likely to resist science than the folks the Democrats put in power, even if many of those Democrats think the Bible is a biology textbook. But maybe this is also an unfair presupposition on my part? Okay, I’m willing to be wrong here. But look at his own proof of pro-science Republicanism. It’s anecdote. Of the many Republican members of Congress I know personally, the vast majority do not reject the underlying science of global warming. Now remember, Fisher was a science policy staffer for the House GOP, so of course he’s going to be exposed to a proportionally higher number of science-accepting Republicans, probably a higher percentage than actually exist among Republicans generally. So maybe he rubs elbows with realitybased GOPers, but the Republicans that are getting elected to positions of power are folks like Bobby Jindal, Rick Perry, and George W. Bush; just a sampling of chief executives who reject basic premises of science, or advocate policies that directly combat science in the name of theology. Don’t even get me started on the House Science Committee. I’m also unconvinced by Fisher’s take on which party is better at funding scientific research, but rather than get into it here, I’ll just say that it seems to me that when Democrats are lackluster on this point, it has more to do with politics in the sense of “art of the possible,” seeking to fund that which has a chance of being funded, or directing it at things that have a known (or anticipated) payoff, such as clean energy. 
Politics – NOAA NOAA funding is highly partisan and sparks heated debates Jensen, Political and Science Analyst for the Clarion Peninsula, 12 (Andrew, April 27th, “Congress takes another Ax to NOAA budget”, http://peninsulaclarion.com/news/2012-04-27/congress-takes-another-ax-to-noaa-budget, accessed 7/13/14, LLM) In a Congress defined by fierce partisanship, no federal agency has drawn as much fire from both parties as NOAA and its Administrator Jane Lubchenco. Sen. Scott Brown, R-Mass., has repeatedly demanded accountability for NOAA Office of Law Enforcement abuses uncovered by the Commerce Department Inspector General that included the use of fishermen's fines to purchase a luxury boat that was only used for joyriding around Puget Sound. There is currently another Inspector General investigation under way into the regional fishery management council rulemaking process that was requested last August by Massachusetts Reps. John Tierney and Barney Frank, both Democrats. In July 2010, both Frank and Tierney called for Lubchenco to step down, a remarkable statement for members of Obama's party to make about one of his top appointments. Frank introduced companion legislation to Kerry's in the House earlier this year, where it should sail through in a body that has repeatedly stripped out tens of millions in budget requests for catch share programs. Catch share programs are Lubchenco's favored policy for fisheries management and have been widely panned after implementation in New England in 2010 resulted in massive consolidation of the groundfish catch onto the largest fishing vessels. Another New England crisis this year with Gulf of Maine cod also drove Kerry's action after a two-year old stock assessment was revised sharply downward and threatened to close down the fishery. Unlike many fisheries in Alaska such as pollock, crab and halibut, there are not annual stock assessment surveys around the country. Without a new stock assessment for Gulf of Maine cod, the 2013 season will be in jeopardy. "I applaud Senator Kerry for his leadership on this issue and for making sure that this funding is used for its intended purpose - to help the fishing industry, not to cover NOAA's administrative overhead," Frank said in a statement. "We are at a critical juncture at which we absolutely must provide more funding for cooperative fisheries science so we can base management policies on sound data, and we should make good use of the world-class institutions in the Bay State which have special expertise in this area." Alaska's Sen. Mark Begich and Murkowski, as well as Rep. Don Young have also denounced the National Ocean Policy as particularly misguided, not only for diverting core funding in a time of tightening budgets but for creating a massive new bureaucracy that threatens to overlap existing authorities for the regional fishery management councils and local governments. The first 92 pages of the draft policy released Jan. 12 call for more than 50 actions, nine priorities, a new National Ocean Council, nine Regional Planning Bodies tasked with creating Coastal Marine Spatial Plans, several interagency committees and taskforces, pilot projects, training in ecosystem-based management for federal employees, new water quality standards and the incorporation of the policy into regulatory and permitting decisions. Some of the action items call for the involvement of as many as 27 federal agencies. 
Another requires high-quality marine waters to be identified and new or modified water quality and monitoring protocols to be established. Young hosted a field hearing of the House Natural Resources Committee in Anchorage April 3 where he blasted the administration for refusing to explain exactly how it is paying for implementing the National Ocean Policy. “This National Ocean Policy is a bad idea,” Young said. “It will create more uncertainty for businesses and will limit job growth. It will also compound the potential for litigation by groups that oppose human activities. To make matters worse, the administration refuses to tell Congress how much money it will be diverting from other uses to fund this new policy.” Natural Resources Committee Chairman Doc Hastings, R-Wash., sent a letter to House Appropriations Committee Chairman Hal Rogers asking that every appropriations bill expressly prohibit any funds to be used for implementing the National Ocean Policy. Another letter was sent April 12 to Rogers by more than 80 stakeholder groups from the Gulf of Mexico to the Bering Sea echoing the call to ban all federal funds for use in the policy implementation. “The risk of unintended economic and societal consequences remains high, due in part to the unprecedented geographic scale under which the policy is to be established,” the stakeholder letter states. “Concerns are further heightened because the policy has already been cited as justification in a federal decision restricting access to certain areas for commercial activity.” Congress refused to fund some $27 million in budget requests for NOAA in fiscal year 2012 to implement the National Ocean Policy, but the administration released its draft implementation policy in January anyway. Begich told the Journal when the draft implementation plan was released that fund diversion was a “main concern.” “At a time Congress is reining in spending, I think the administration needs to prioritize funding for existing services especially those which support jobs such as fishery stock assessments and the like, and not new and contentious initiatives,” he said. Murkowski called the administration’s implementation plan “clear as mud” at an Appropriations Committee hearing April 19. “It’s expensive; there are no dedicated funds for agencies to follow through with the commitments that have been identified in the draft implementation plan,” she said. “I have been told that the national ocean policy initiative is going to be absorbed by these existing programs, but yet the agencies haven’t been able to provide me with any indication as to what work is actually going to be set aside as part of that trade-off, so it is as clear as mud to me where the administration is really intending to take this.” NOAA and ocean policies are a mixed bag of partisan footballs Kollipara, Political Science Analyst for the American Association for the Advancement of Science, 14 (Puneet, June 3rd, “U.S. House Wants Limits on Climate, Marine Policy Programs”, http://news.sciencemag.org/climate/2014/06/u-s-house-wants-limits-climate-marine-policy-programs, accessed 7/13/14, LLM) But lawmakers adopted several amendments that targeted marine research and climate science programs. The U.S. Senate, which this week begins work on its version of the spending bill, would have to agree to the amendments in order for them to become law (and in the past has stripped similar provisions from the legislation).
For now, however, these amendments remain in the mix: Representative Bill Flores (R–TX) successfully added language barring the president from enforcing his National Ocean Policy, which has been a partisan football in recent years. The amendment, which is similar to past amendments adopted by the House but later stripped from final measures, was approved on a voice vote. In a 226 to 179 vote, the House adopted a proposal from Representative Mark Meadows (R–NC) to bar the United States from entering international trade agreements to cut climate-warming greenhouse gas emissions. An amendment from Representative Scott Perry (R–PA), adopted on a voice vote, would bar spending money on a number of government climate assessments and reports, including the U.S. Global Change Research Program’s National Climate Assessment (NCA). The president has used the most recent NCA, released last month, to bolster his Climate Action Plan to cut U.S. greenhouse gas emissions. Several other amendments offered by Democrats to bolster funding for ocean acidification and climate research failed on voice votes. Advocates for strong action on climate change are hoping the Senate will hold firm against the climaterelated funding restrictions and strip out the “poison pills,” says Michael Halpern of the Union of Concerned Scientists in Cambridge, Massachusetts. The White House has also indicated its opposition to climate research limits. One ocean advocate, meanwhile, calls the House bill a “mixed bag. … We’re not thrilled but not devastated,” says Jeff Watters, acting director of government relations at the Ocean Conservancy in Washington, D.C. “It certainly doesn’t meet our expectation of what needs to happen.” Overall, the bill would keep top-line funding numbers for the Commerce Department’s National Oceanic and Atmospheric Administration (NOAA) roughly equal to current spending. But it would cut NOAA’s climate-related research funding by $37.5 million, or 24%, from 2014. It also rejects a NOAA request to spend $15 million on a package of three space-based instruments including the Total Solar Irradiance Sensor, and a $9 million boost, to $15 million, for NOAA’s ocean acidification research and monitoring programs. K Links Cap Liberal democracies use science to prop up capital and science itself is then held hostage for capital because the system is fueled by it McCauley, Science Philosopher and Writer for the Oxford University Press, 11 (Robert N., “Why Religion is Natural and Science is Not”, pdf. p. 284, accessed 7/11/14, LLM) One counterargument to my claims about the vulnerability of science looks to its intimate connections with technology over the past century. The success of capitalist enterprises has largely depended upon technological advances, and those advances have regularly emerged from research in the basic sciences, so capitalism relies on basic science and will not permit its demise. This argument makes sense, so far as it goes. The argument’s soundness, though, depends upon at least three unstated assumptions: (1) that liberal, democratic governments will continue to be the principal source of support for scientific research, (2) that the priorities of liberal, democratic governments will not be held hostage by corporate interests, and (3) that executives will both recognize their corporations’ long-term interests and rank those interests among their top priorities. 
The first and second assumptions describe conditions that ensure that neither an entrenched political class (for example, the Communist party in China) nor particular corporations (either individually or in concert) will dominate the character and direction of that research in any scientific field. The third assumption aims to guard against the possibility that the people who run corporations do not run them into the ground. Fortunately, current circumstances satisfy these conditions fairly well in many liberal democracies around the world, but nothing guarantees any of this. The cultural and political arrangements, the legal measures, and the sheer effort necessary to sustain such conditions quickly uncover the close connections between modern science and open, democratic societies in a world where, simultaneously, the ties between science and technology have become so intimate and the pursuit of science has become so esoteric. The cornerstone for the capitalist knowledge regime is basic science and technology that furthers pervasive modes of production that are used to stimulate elitists’ power Amaral, Center for Higher Education Policy Studies, Matosinhos, Portugal, Meek, University of New England, Armidale, Australia, and Larsen, Norwegian Institute for Studies in Research in Higher Education, Oslo, Norway, 3 (Alberto, V. Lynn, Ingvild M., “THE HIGHER EDUCATION MANAGERIAL REVOLUTION?”, pdf. pg 203, accessed 7/12/14, LLM) In this chapter, we argue that, in the United States, an academic public good knowledge regime is shifting to an academic capitalist knowledge regime. The public good knowledge regime was characterised by valuing knowledge as a public good to which the citizenry has claims. Mertonian norms - such as communalism, universality, the free flow of knowledge and organised scepticism - were associated with the public good model. The public service regime paid heed to academic freedom, which honoured professors' rights to follow research where it led, and gave professors the right to dispose of discoveries as they saw fit (Merton 1942). The cornerstone of the public good knowledge regime was basic science that led to the discovery of new knowledge within the academic disciplines, serendipitously leading to public benefits. Mertonian values are often associated with the Vannevar Bush model, in which basic science that pushes back the frontiers of knowledge was necessarily performed in universities (Bush 1945). The discoveries of basic science always preceded development. Development occurred in federal laboratories and sometimes in corporations. It often involved building and testing costly prototypes. Application followed development and almost always took place in corporations. The public good model assumed a relatively strong separation between public sector and private sector. The academic capitalist knowledge regime values knowledge privatisation and profit taking, in which institutions, inventor faculty and corporations have claims that come before the public's. Public interest in science goods are subsumed in the increased growth expected from a strong knowledge economy. Rather than a single, non-exclusively licensed, widely distributed product (e.g. vitamin D irradiated milk) serving the public good, the exclusive licensing of many products to private firms contributes to economic growth which benefits the whole society.
Knowledge is construed as a private good, valued for creating streams of high technology products that generate profit as they flow through global markets. Professors are obligated to disclose their discoveries to their institutions which have the authority to determine how knowledge shall be used. The cornerstone of the academic capitalism model is basic science for use and basic technology, models that make the case that science is embedded in commercial possibility (Stokes 1997; Branscomb 1997; Branscomb et al. 1997). These models see little separation between science and commercial activity. Discovery is valued because it leads to high technology products for a knowledge economy. We look at state system and institutional intellectual property policies for three states to see if they indicate a shift from a public good to an academic capitalist knowledge regime. The questions we are particularly interested in answering are: What values are embedded or explicit in these policies? Have they changed over time? Have the organisations that frame the values changed? What is the direction of the change? What do these changes tell us about the relation between market, state and higher education, and how these are valued? Science is now used as a market actor that capitalism has taken hostage and is a pillar of the commercial system at large - only a separation from the state and market solves Amaral, Center for Higher Education Policy Studies, Matosinhos, Portugal, Meek, University of New England, Armidale, Australia, and Larsen, Norwegian Institute for Studies in Research in Higher Education, Oslo, Norway, 3 (Alberto, V. Lynn, Ingvild M., “THE HIGHER EDUCATION MANAGERIAL REVOLUTION?”, pdf. pg 225, accessed 7/12/14, LLM) As universities became more involved with the economy, the organisational structure surrounding research changed. Universities were major actors in the process, responding to the opportunity structure created by Bayh-Dole and an array of federal legislation supported by a bi-partisan competitiveness coalition in the US Congress that began in the 1980s, the end of the Cold War, and the rise of a global economy (Slaughter and Rhoades 1996; Slaughter and Leslie 1997). The NSF, in response to the competitiveness coalition, began to support university-industry partnerships, initially in engineering (Feller, Ailes and Roessner 2002). The NSF moved further away from funding basic research when it took on the job of organising the Internet for privatisation (Slaughter and Rhoades forthcoming). Periodic and severe fiscal crises in the several states encouraged state legislators to work with university administrators to pass laws that made disclosure obligatory and public-private partnerships possible (Eisinger 1988; Isserman 1994; Chew 1992). As we have seen in our review of university intellectual property policies, these policies began to reorganise research by dismantling the firewall that had separated the state (universities) from the economy. The process of patenting and copyrighting enclosed knowledge, commodified it as property to be licensed to corporations in return for royalties, making universities market actors. The royalty splits provided powerful economic incentives for institutions and faculty.
With anywhere from onethird to one-half of the royalty stream as their reward for patenting, entrepreneurial faculty became part of a compensation system that was more like that of CEOs paid 300 times that of their workers than like faculty on merit or market-based salaries, governed by the norms of their disciplines. Simultaneously, and ironically, the patent policies also made faculty more like all other workers, in that the institution, intent on generating revenue streams, over the period considered, came to claim virtually all intellectual property from all members of the university community, making faculty, staff and students less like university professionals and more like corporate professionals whose discoveries are considered workfor-hire, the property of the corporation, not the professional. The patent policies also evidence the creation of organisational capacity to engage the market: the development of technology transfer offices that handle licensing, the creation of equity procedures and agreements that allow universities to act as venture capitalists and faculty as state-subsidised entrepreneurs, and conflict of interest policies to regulate the increasingly porous boundary between state and market. In sum, the patent policies restructure the organisation of research so it is closer to a commercial system. Currently, the Mertonian/Bush and academic capitalist knowledge regimes coexist and sometimes overlap. However, the values of both systems depend in part on the organisational structures which sustain the cultures of research. The academic status and prestige system is still concerned with discovery, fundamental (broad) scientific questions, pushing back the frontiers of knowledge, and recognition as reward. However, we think that system can be sustained only if there continues to be an organisational infrastructure that supports it: a degree of separation from a (relatively autonomous) state and a degree of separation from the market. The academic capitalist knowledge system is setting up an alternative system of rewards, one in which discovery is valued because of its commercial properties and economic rewards, broad scientific questions are framed so that they are relevant to commercial possibilities (biotechnology, telecommunications, computer grids), knowledge is regarded as a commodity rather than a free good, and universities have the organisation capacity (and are permitted by law) to license and invest and profit from these commodities. Thus far, the academic capitalist knowledge regime has developed around science and engineering, which lends itself to patenting, and touches a relatively small number of research university faculty. However, the passage of the Digital Millennium Copyright Act, and the rapid development of university copyright policies in the 1990s, suggest that the academic capitalist knowledge regime may touch the lives of all faculty. Copyright policies cover software, courseware, making what is taught a commercial property. Ecofem The notion of rationality is a masculine mode of thought that has swindled its way into science and academia where the gender bias against women is exacerbated and proliferated Yurkiewicz, Biologist from Yale University, 12 (Ilana, September 23rd, “Study shows gender bias in science is real. 
Here’s why it matters.”, http://blogs.scientificamerican.com/unofficial-prognosis/2012/09/23/study-shows-gender-bias-inscience-is-real-heres-why-it-matters/, accessed 7/12/12, LLM) In a real-world setting, typically the most we can do is identify differences in outcome. A man is selected for hire over a woman; fewer women reach tenure track positions; there’s a gender gap in publications. Bias may be suspected in some cases, but the difficulty in using outcomes to prove it is that the differences could be due to many potential factors. We can speculate: perhaps women are less interested in the field. Perhaps women make lifestyle choices that lead them away from leadership positions. In a real-world setting, when any number of variables can contribute to an outcome, it’s essentially impossible to tease them apart and pinpoint what is causative. The only way to do that would be by a randomized controlled experiment. This means creating a situation where all variables other than the one of interest are held equal, so that differences in outcome can indeed be attributed to the one factor that differs. If it’s gender bias we are interested in, that would mean comparing reactions toward two identical human beings – identical in intelligence, competence, lifestyle, goals, etc. – with the one difference between them that one is a man and one is a woman. Not exactly a situation that exists in the real world. But in a groundbreaking study published in PNAS last week by Corinne Moss-Racusin and colleagues, that is exactly what was done. On Wednesday, Sean Carroll blogged about and brought to light the research from Yale that had scientists presented with application materials from a student applying for a lab manager position and who intended to go on to graduate school. Half the scientists were given the application with a male name attached, and half were given the exact same application with a female name attached. Results found that the “female” applicants were rated significantly lower than the “males” in competence, hireability, and whether the scientist would be willing to mentor the student. The scientists also offered lower starting salaries to the “female” applicants: $26,507.94 compared to $30,238.10. This is really important. This is really important. Whenever the subject of women in science comes up, there are people fiercely committed to the idea that sexism does not exist. They will point to everything and anything else to explain differences while becoming angry and condescending if you even suggest that discrimination could be a factor. But these people are wrong. This data shows they are wrong. And if you encounter them, you can now use this study to inform them they’re wrong. You can say that a study found that absolutely all other factors held equal, females are discriminated against in science. Sexism exists. It’s real. Certainly, you cannot and should not argue it’s everything. But no longer can you argue it’s nothing. We are not talking about equality of outcomes here; this result shows bias thwarts equality of opportunity. Here are three additional reasons why this study is such a big deal. 1)Both male and female scientists were equally guilty of committing the gender bias. Yes – women can behave in ways that are sexist, too. Women need to examine their attitudes and actions toward women just as much as men do. 
What this suggests is that the biases likely did not arise from overt misogyny but were rather a manifestation of subtler prejudices internalized from societal stereotypes. As the authors put it, “If faculty express gender biases, we are not suggesting that these biases are intentional or stem from a conscious desire to impede the progress of women in science. Past studies indicate that people’s behavior is shaped by implicit or unintended biases, stemming from repeated exposure to pervasive cultural stereotypes that portray women as less competent…” 2)When scientists judged the female applicants more harshly, they did not use sexist reasoning to do so. Instead, they drew upon ostensibly sound reasons to justify why they would not want to hire her: she is not competent enough. Sexism is an ugly word, so many of us are only comfortable identifying it when explicitly misogynistic language or behavior is exhibited. But this shows that you do not need to use antiwomen language or even harbor conscious anti-women beliefs to behave in ways that are effectively anti-women. Practically, this fact makes it all the more easy for women to internalize unfair criticisms as valid. If your work is rejected for an obviously bad reason, such as “it’s because you’re a woman,” you can simply dismiss the one who rejected you as biased and therefore not worth taking seriously. But if someone tells you that you are less competent, it’s easy to accept as true. And why shouldn’t you? Who wants to go through life constantly trying to sort through which critiques from superiors are based on the content of your work, and which are unduly influenced by the incidental characteristics of who you happen to be? Unfortunately, too, many women are not attuned to subtle gender biases. Making those calls is bound to be a complex and imperfect endeavor. But not recognizing it when it’s happening means accepting: “I am not competent.” It means believing: “I do not deserve this job.” Science contributes to a pervasive form of neurosexism that draws distinctions between male and female rationale and cognitive thought, when no such thing exists that very accusation is what replicates gender binaries McKie, Science and technology analyst for the Observer, 10 (Robin, August 14th, “Male and female ability differences down to socialisation, not genetics”, http://www.theguardian.com/world/2010/aug/15/girls-boys-think-same-way, accessed 7/12/14, LLM) It is the mainstay of countless magazine and newspaper features. Differences between male and female abilities – from map reading to multi-tasking and from parking to expressing emotion – can be traced to variations in the hard-wiring of their brains at birth, it is claimed. Men instinctively like the colour blue and are bad at coping with pain, we are told, while women cannot tell jokes but are innately superior at empathising with other people. Key evolutionary differences separate the intellects of men and women and it is all down to our ancient hunter-gatherer genes that program our brains. The belief has become widespread, particularly in the wake of the publication of international bestsellers such as John Gray's Men Are from Mars, Women Are from Venus that stress the innate differences between the minds of men and women. But now a growing number of scientists are challenging the pseudo-science of "neurosexism", as they call it, and are raising concerns about its implications. 
These researchers argue that by telling parents that boys have poor chances of acquiring good verbal skills and girls have little prospect of developing mathematical prowess, serious and unjustified obstacles are being placed in the paths of children's education. In fact, there are no major neurological differences between the sexes, says Cordelia Fine in her book Delusions of Gender, which will be published by Icon next month. There may be slight variations in the brains of women and men, added Fine, a researcher at Melbourne University, but the wiring is soft, not hard. "It is flexible, malleable and changeable," she said. In short, our intellects are not prisoners of our genders or our genes and those who claim otherwise are merely coating old-fashioned stereotypes with a veneer of scientific credibility. It is a case backed by Lise Eliot, an associate professor based at the Chicago Medical School. "All the mounting evidence indicates these ideas about hard-wired differences between male and female brains are wrong," she told the Observer. "Yes, there are basic behavioural differences between the sexes, but we should note that these differences increase with age because our children's intellectual biases are being exaggerated and intensified by our gendered culture. Children don't inherit intellectual differences. They learn them. They are a result of what we expect a boy or a girl to be." Thus boys develop improved spatial skills not because of an innate superiority but because they are expected and are encouraged to be strong at sport, which requires expertise at catching and throwing. Similarly, it is anticipated that girls will be more emotional and talkative, and so their verbal skills are emphasised by teachers and parents. The latter example, on the issue of verbal skills, is particularly revealing, neuroscientists argue. Girls do begin to speak earlier than boys, by about a month on average, a fact that is seized upon by supporters of the Men Are from Mars, Women Are from Venus school of intellectual differences. However, this gap is really a tiny difference compared to the vast range of linguistic abilities that differentiate people, Robert Plomin, a professor at the Institute of Psychiatry in London, pointed out. His studies have found that a mere 3% of the variation in young children's verbal development is due to their gender. "If you map the distribution of scores for verbal skills of boys and of girls you get two graphs that overlap so much you would need a very fine pencil indeed to show the difference between them. Yet people ignore this huge similarity between boys and girls and instead exaggerate wildly the tiny difference between them. It drives me wild," Plomin told the Observer. This point is backed by Eliot. "Yes, boys and girls, men and women, are different," she states in a recent paper in New Scientist. "But most of those differences are far smaller than the Men Are from Mars, Women Are from Venus stereotypes suggest. "Nor are the reasoning, speaking, computing, emphasising, navigating and other cognitive differences fixed in the genetic architecture of our brains. "All such skills are learned and neuro-plasticity – the modifications of neurons and their connections in response experience – trumps hard-wiring every time." 
Science’s reliance on the idea of rationality is systemically gendered and oppressive - it conflates nature and femaleness to justify the “right kind of male domination over her” Lloyd, Philosopher and Feminist at Somerville College, Oxford, 87 (GENEVIEVE, “The Man of Reason”, p. 11, accessed 7/12/14, LLM) In this new picture, the material world is seen as devoid of mind, although, as a product of a rational creator, it is orderly and intelligible. It conforms to laws that can be understood; but it does not, as the Greeks thought, contain mind within it. Nature is construed not by analogy with an organism, containing its intelligible principles of motion within it, but rather by analogy with a machine: as object of scientific knowledge, it is understood not in terms of intelligible principles enforming matter, but as mechanism. Bacon thus repudiated the model of knowledge as a correspondence between rational mind and intelligible forms, with its assumption that pure intellect could not distort reality. There are, he thinks, errors which 'cleave to the nature of the understanding'. 'For however men may amuse themselves, and admire, or almost adore the mind, it is certain, that like an irregular glass, it alters the rays of things, by its figure, and different intersections.'14 The sceptics, rather than mistrusting the senses, should have mistrusted the 'errors and obstinacy of the mind', which refuses to obey the nature of things.15 The mind itself should be seen as 'a magical glass, full of superstitions and apparitions'. The perceptions of the mind, no less than those of the senses, 'bear reference to man and not to the universe'.16 Nature cannot be expected to conform to the ideas the mind finds within itself when it engages in pure intellectual contemplation. Knowledge must be painstakingly pursued by attending to Nature; and this attending cannot be construed in terms of contemplation. Bacon, notoriously, used sexual metaphors to express his idea of scientific knowledge as control of a Nature in which form and matter are no longer separated. In Greek thought, femaleness was symbolically associated with the non-rational, the disorderly, the unknowable — with what must be set aside in the cultivation of knowledge. Bacon united matter and form - Nature as female and Nature as knowable. Knowable Nature is presented as female, and the task of science is the exercise of the right kind of male domination over her. 'Let us establish a chaste and lawful marriage between Mind and Nature,' he writes.17 The right kind of nuptial dominance, he insists, is not a tyranny. Nature is 'only to be commanded by obeying her'.18 But it does demand a degree of force: 'nature betrays her secrets more fully when in the grip and under the pressure of art than when in enjoyment of her natural liberty.'19 The expected outcome of the new science is also expressed in sexual metaphors. Having established the right nuptial relationship, properly expressed in a 'just and legitimate familiarity betwixt the mind and things',20 the new science can expect a fruitful issue from this furnishing of a 'nuptial couch for the mind and the universe'. From the union can be expected to spring 'assistance to man' and a 'race of discoveries, which will contribute to his wants and vanquish his miseries'.21 The most striking of these sexual metaphors are in an early, strangely strident work entitled The Masculine Birth of Time.
'I am come in very truth', says the narrator in that work, 'leading to you Nature with all her children to bind her to your service and make her your slave.'22 In science, the more masculine, the more true – every decision as to what constitutes truth is based on who has the most power Campbell, Professor of Science, 2009 (Nancy, prof of science, Frontiers: A Journal of Women's Studies, vol 30, 2009, p. 1-29, Muse, da: 6-24-2011, lido) Striking resonances and parallels between post-positivist, feminist, and reconstructivist agendas include the following. “Facts” are constructed and are thus not determinative of the forms that social interactions and negotiations take. Negotiating the conceptual practices of power or ruling relations inevitably involves conflicting and partial perspectives. Coping with disagreement is a necessary part of social and political life (including those parts of it that shape decisions about what kinds of technoscientific innovation to pursue). Science is not about closure but about interpretive flexibility in the face of the ongoing production of uncertainty. Thus coping with uncertainty will inevitably challenge those for whom science raises more questions than it answers. Reconstructivists argue that “reconstructivism starts from the premise that ‘better’ design of sociotechnical life ought to be built directly into scholarly inquiry. Notions of better and worse inevitably involve a partisan component. . . . ”22 Similarly, Haraway’s work on “situated knowledges” acknowledges the inevitably partial and partisan processes by which knowledge claims are produced and negotiated.23 Weaving together the strands of similarity between feminist and reconstructivist science and technology studies reveals a tapestry against which the knowledge production projects of each stand out more clearly. Embracing partisanship and struggle as they do, reconstructivists have taken to heart various critiques of objectivity, among which feminists figure prominently. Quoting Harding’s Science and Social Inequality (2006), Woodhouse and Sarewitz get the point that privilege is both a material advantage and an epistemological disadvantage: that “those advantaged by the status quo tend to operate in a state of denial about the maldistribution of costs and benefits of technoscience.”24 Taking “science-policy influentials” to task for failing to mention inequality except in toothless and conventional ways, Woodhouse and Sarewitz call for greater recognition of social conflict in the tussle over who gets what, when, and how that is science and technology policy and politics. They share with feminists the intention to “move equity considerations higher on science-policy agendas.” They share the suspicion that the social organization of technoscience exacerbates social inequality and consistently rewards the already affluent, while hurting the persistently poor. They call for refocusing R&D on “poor people’s problems” yet do not call upon feminist scholarship to explain precisely how welfare states and labor markets are structured to reproduce gendered and racialized poverty.25 How can it be that well intentioned and well informed scholars seeking to refocus technoscientific R&D on the needs of the poor, broaden participation in research priority-setting, and reorient technoscientific innovation toward the creation of public goods miss the feminist point that addressing inequity requires attending to how gender and power relations structure the world?
How can those who set out to “level the playing field among diverse social interests so that all are represented fairly” miss the point that forms of “fairness” inattentive to power differentials lead to unfair processes and outcomes?26 The reconstructivist agenda is too important to be dismissed by feminists as not “getting it,” and thus it seems important to understand how reconstructivists propose to reshape inquiry by encouraging scholars to adopt projects that incorporate “normative, activist, or reconstructive intentions” into their research.27 Reconstructivists suggest a more collective agenda-setting process to channel scholarly work into areas where inadequate attention has been directed to a collective need. They urge scholars not to shy from “thoughtful partisanship” despite recognizing their own fallibility and partiality, and encourage them to participate more actively in “positive efforts to shape technoscientific activities in progressive directions.” They call for revision of the academic reward system so as to recognize academic participation in public engagement and community building. Science has an intersectional bias – poor women are underrepresented Campbell, Professor of Science, 2009 (Nancy, prof of science, Frontiers: A Journal of Women's Studies, vol 30, 2009, p. 1-29, Muse, da: 6-24-2011, lido) Upper-middle-class professional women of the sort who might be disadvantaged by gendering of recruitment and advancement in science and engineering probably are getting about their fair share of study these days, but poorer women . . . surely are not. And the deeper insights of feminist theory rarely get applied concretely to science and technology policy outside the reproductive and medical fields.12 While we may disagree with the sentiment that any women get their fair share of study or decry the nonspecific language of “poorer women,” the impulse to direct inquiry toward those excluded or marginalized aligns with feminist goals. Social inequality shapes not only what science is done and how it is done, according to reconstructivists, but what science remains undone. David J. Hess defines this problem in the following terms: Because political and economic elites possess the resources to water and weed the garden of knowledge, the knowledge tends to grow (to be “selected”) in directions that are consistent with the goals of political and economic elites. When social movement leaders and industry reformers who wish to change our societies look to “Science” for answers to their research questions, they often find an empty space—a special issue of a journal that was never edited, a conference that never took place, an epidemiological study that was never funded—whereas their better funded adversaries have an arsenal of knowledge to draw on. . . . Heidegger/Managerialism The foundation of managerialism sits atop the construct of a superior social class based on scientific ability; science and its application have created the managerial and industry-reliant socioeconomic system Hsu, Business School of University of Newcastle, 3 (Shih-Wei, “Beyond Managerialism: Towards An Ethical Approach”, http://www.mngt.waikato.ac.nz/ejrot/cmsconference/2003/proceedings/managementgoodness/Hsu.pdf, accessed 7/12/14, LLM) The emergence of managerialism was first contributed by Fredrick Taylor, the founder of Scientific Management (Bendix, 1974).
At the end of the 19th century, with the increase in the size of work organisations, the concept of management had come to refer to a certain social class. However, the term manager was usually a pejorative one, because managers were seen as ‘agents’—a kind of employees acting for ‘owners’, and this attitude is a legacy of the medieval European culture3 (Mant, 1979). Yet, at the beginning of the 20th century Taylor gave management a new sense. Taylor, as a mechanical engineer, treats all work organisations as mechanical systems consisting of different mechanical components, e.g. factory equipment, and workers. His contribution here is that he linked management to science—he asserts that managers should be a specific social class whose main task is, via ‘scientific’ study and analysis throughout ‘the mechanical arts’, to get the best out of men, for benefit of all (Taylor, 1911: 25). At this point, Taylor seems to share the idea with the Enlightenment thinker Francis Bacon. While Bacon, in seventeenth-century England, believed that scientists should be treated as a special class, acting as ‘power brokers’ on behalf of the government (Garcia, 2001), Taylor attempted to promote management to a special social class, based on their superior scientific ability— or rationality, in the 20th century. The strong belief in science, or scientific methods, is central to our dominant worldview. Science is, in essence, a process that eliminates the possibility of subjective bias, hence producing purely objective knowledge. Science has always been power. In the period of enlightenment, its major function was to liberate humankind from subordination to the other-worldly force, i.e. the Judeo-Christian orthodoxy. In the modern age, science, and its application—technology, has successfully created our industry-dominated social/economic system. Modern science attempts to calculate and enframe objects by revealing them, causing man to forget that concealing belongs with revealing. This forgetting is the ultimate danger because enframing causes man to become a standing-reserve unable to encounter or realize himself Cariño, Professor of Philosophy at the University of Santo Tomas, 9 (Jovito V., “Heidegger and the Danger of Modern Technology,” Philippiniana Sacra, Vol. XLIV, No. 132, September-December 2009, pp. 491-504, Enframing and the History of Western Thought – The Danger of Modern Technology) Human subjectivity as mere calculation of objects first displays itself in the appearance of modern physics as a modern science. “Modern science’s way of representing,” according to Heidegger, “pursues and entraps nature as a calculable coherence of objects.” 27 The description “modern,” Heidegger further added, points not so much to the sciences’ application of experiments on nature but to modern science’s penchant to set up nature that it may be calculated in advance. It is in this sense that modern physics is the herald of Enframing, 28 and as the essence of modern technology, enframing “starts man upon the way of that revealing through which the real everywhere, more or less distinctly, becomes standing reserve.” Enframing is a mode of revealing that challenges forth and orders. As a mode of revealing, enframing also belongs to destining, to Geschick. Man also belongs to destining because he is the one who listens and hears, but once he opens himself to the essence of modern technology, he can be swayed to the pursuit only of what is revealed in the sense of ordering and challenging forth.
In such an event, the other possibility of belonging to what is unconcealed is also blocked. Even this, says Heidegger, is part of the destining of man and part of the destining of all coming to presence. Following the Greeks, Heidegger maintains that: “That which is earlier with regard to the arising that holds sway becomes manifest to us men only later.” 30 In another essay, Heidegger describes that which manifests itself only later as the “inaccessible and not to be gotten around.” 31 That is how what is concealed reveals itself to us – as a concealment that unconceals itself in revealing and a revealing which remains concealed in unconcealment. The two elements, concealing and revealing, always go hand in hand. Modern man, though, through the enchanting effect of modern technology, has been fixated merely with what is revealed. It is this penchant for what is revealed that deceives man into believing that he can grasp everything or, to use McWhorter’s expression, manage everything. 32 When we delude ourselves this way, we forget not only ourselves but the passing of unconcealment itself. Such forgetting is an element of what Heidegger points out as “danger”. What is such danger? I shall explore Heidegger’s answer to this question in the third part of this paper. The Danger of Modern Technology. In the destining that destines both man and Enframing, two possibilities come to the fore: first, the possibility of man pursuing nothing but what is revealed in ordering and challenging forth; and second, the possibility of the blocking of man from being admitted to what is unconcealed. Apparently, given the contemporary man’s obsession to rule and control the “mega-energies” of nature, it seems that the former possibility is the one holding sway. Since nature is seen as a storehouse of resources, man’s relation with it is reduced in terms of management. Earth’s resources are aplenty, hence the necessity of management as a strategy for domination and control, but the more he tries to manage everything, the more man distances himself from what is essential. What makes the situation doubly unfortunate is that this fact is hidden from man himself, that is, the fact of his belonging to what is concealed. McWhorter writes: The danger of a managerial approach to the world lies not, then, in what it knows – not in its penetration into the secrets of galactic emergence or nuclear fission – but in what it forgets, what it itself conceals. It forgets that any other truths are possible, and it forgets that the belonging together of revealing with concealing is forever beyond the power of human management. We can never have, or know, it all; we can never manage everything. Man’s belonging to what is concealed, although oftentimes forgotten by man himself, is within the destining of man, the same destining which introduces itself to man as danger. It is danger or, as Heidegger calls it, “danger as such,” because once the destining of revealing holds sway, man can turn away from what is unconcealed by reducing it to what is calculable and can be represented. Heidegger calls this representation of the unconcealed “correct determinations.” 36 Further, Heidegger holds that God himself is not free from this representational thinking. He is invariably called the cause or causa efficiens or whatever is convenient for those who think they exalt God by domesticating him in their own categories.
As pointed out by Heidegger, the determinations may be correct but in the midst of these correct formulations, the danger can likewise persist, that “in the midst of all that is correct the true will withdraw.” However, this is not yet the ultimate danger for Heidegger. Man encounters the ultimate danger when the destining holds sway in the manner of Enframing. Heidegger calls it the ultimate danger because in its holding sway, Enframing does not only reduce objects to standing-reserve; as the orderer of the standing-reserve, man himself is reduced to the status of standing-reserve. This leads to what Heidegger characterizes as “the ultimate delusion” which man experiences when he “stands so decisively in attendance on the challenging-forth of Enframing that he does not apprehend Enframing as a claim, that he fails to see himself as the one spoken to and hence, also fails in every way to hear in what respect he ek-sists, from out of his essence, in the realm of an exhortation or address, and thus can never encounter himself.” Science leads to the manipulation of nature such that its original essence is destroyed along with humanity’s connection to it Kidner 2k (David, faculty of humanities @ Nottingham Trent U, Environmental Ethics Vol. 22.4 Winter Pg. 345-346 JF) Science, then, may be a partial understanding which we often fatefully misconstrue as being a complete description of nature, but it is nevertheless firmly anchored in realities which are beyond the influence of language. To an increasing extent, however, even these realities are being modified by industrialism, not only through the breeding of certain species and the elimination of others, but also, and increasingly directly, through genetic manipulation. If nature, then, was not originally constructed by technology and language, it is in many ways in the process of being reconstructed by these means; and the metaphor of “construction” assumes the absence or obliteration of natural structure, so that the world is simply made up of (verbal or physical) “raw materials.” This demolition of the nature that frames and transcends human awareness, and its replacement by a “nature” which is defined and constructed by industrial and discursive activity from the fragments of the original nature, implies a corresponding redefinition of the person to fit a rational, commercial world—a redefinition which, in Arthur Kleinman’s words, has “deepened discursive layers of experience . . . while paradoxically making it more difficult to grasp and communicate poetic, moral, and spiritual layers of the felt flow of living.”32 This transformation, suggests Kleinman, “can be of a kind to cancel, nullify, or evacuate the defining human element in individuals—their moral, aesthetic, and religious experience.”33 Social constructionism, then, can be seen as rooted within a broader reconstructive project which reconfigures both humanity and the nonhuman world according to an industrialist blueprint. The physical and ideological replacement of nature, understood as the larger order out of which we grow, by a reduced order based on industrialist rationality finds its academic counterpart in the doctrine that nature is a mere part-actor in the wider drama of human life and language. Japan CP 1NC Text: The government of Japan should _________________________.
The CP solves – Japan has one of the best ocean exploration agencies in the world – JAMSTEC can do the plan Cizdziel, Chair of the American Chamber of Commerce Japan, 14 [Paul E., American Chamber of Commerce in Japan, “Deep Sea Exploration by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC),” http://www.accj.or.jp/en/events/details/21949deep-sea-exploration-by-the-japan-agency-for-marine-earth-science-and-technology-jamstec, accessed 7/13/14, TYBG] JAMSTEC is one of the largest, most active and accomplished research agencies in the world. It operates 7 research vessels, one manned deep-sea submersible, four autonomous underwater vehicles, and three remotely operated underwater vehicles. In addition, the agency is operating a scientific drilling ship "Chikyu". The fleet is so strong that sediments, rocks and organisms from water depths exceeding 7000 m or from depths more than 2000 m below the surface of the ocean bottom have been collected so far. Based on such high technologies, JAMSTEC have carried out a variety of unique sciences, such as observation of fault rock that caused the Tohoku great earthquake, and biological research of microbes living in the ultra-deep biosphere. Dr. Yoshihisa Shirayama, Executive Director for Science of JAMSTEC, will provide updates on what is happening now, and what we can look forward to next, in the field of deep-sea exploration carried out by JAMSTEC. Come join us for wine and finger foods, and an intellectually stimulating topic of importance to everyone. Understanding the vast ocean biosphere and its critical role in the cycle of life on Earth is essential for government and industry leaders to make smart choices. As a leading oceanographic research institution, JAMSTEC contributes greatly to that knowledge. 2NC – Solvency – Investment They’re already investing – they have the resources. Ryall, writer for German news source DW, 14 [Julian, 1/17/14, DW, “Japan hopes seabed will yield data and resources,” http://www.dw.de/japanhopes-seabed-will-yield-data-and-resources/a-17369799, accessed 7/13/14, TYBG] With scant energy and mineral reserves of its own, and nuclear plants mothballed since the Fukushima nuclear disaster, Japan is investing heavily in exploring beneath the oceans for resources that will power its future. On the first day of 2014, the Japanese research ship Chikyu set a new record by drilling down to a point 3,000 meters beneath the seabed off southern Japan. It was an appropriate way to ring in the new year and signals an increased commitment to learning more about the secrets that lay beneath the floor of the ocean close to Japan. The research has two distinct but connected driving forces. As Japan prepares to mark the third anniversary of the March 11 Great East Japan Earthquake, the Chikyu is undertaking the most extensive survey ever attempted of the Nankai Trough, a geological fault that extends for several hundred kilometers parallel to the southern coast of Japan and widely seen as the source of the next major earthquake that will affect this tremor-prone nation. And with all of Japan's nuclear reactors presently mothballed in the aftermath of the disaster, which destroyed the Fukushima Dai-Ichi nuclear plant, there is a new sense of urgency in the search for sources of energy and other natural resources close to Japan.
2NC – Solvency – Leadership Japan is already taking a leadership role in oceanic research Teranishi and Oda, writers for Japanese news source Asahi Shimbun, 13 [Kazuo, Makoto, 12/28/13, “Japan seeks to make up for 'lost decade' in marine development,” http://ajw.asahi.com/article/business/AJ201312280010, accessed 7/13/14, TYBG] Japan has long lagged behind other countries in oceanic development of minerals and resources, despite being one of the world's largest maritime states. Today, however, it is aggressively exploring the seabed in search of natural riches. In early October, the Hakurei, a state-of-the-art marine resources survey ship, set off from Shimonoseki Port in western Yamaguchi Prefecture for the Okinawa Trough, located in waters northwest of the main island of Okinawa. The ship’s mission was to investigate oceanic resources that lie in the waters, which are within Japan’s 370-kilometer exclusive economic zone (EEZ), where the nation is allowed to develop minerals and other resources. Having a total area of 4.47 million square kilometers, Japan’s EEZ and territorial waters are the sixth largest in the world. Experts believe a large amount of untouched natural resources rests beneath the seabed. Between January and February, Hakurei surveyed the Okinawa Trough, a potential gold mine of offshore resources, at a depth of 1,600 meters. Drilling about 40 meters down into the seafloor, the survey vessel discovered a large-scale submarine hydrothermal deposit that contains various minerals, such as zinc, lead, copper and gold. The 118-meter-long research vessel, which began operation in February 2012, is outfitted with 32-meter-high drilling equipment on its stern. The equipment can submerge to a depth of up to 2,000 meters and drill a maximum of 400 meters down into the seabed--a major improvement from the 20-meter limit of Hakurei’s predecessor, which started operation in 1980. The survey vessel is operated by Japan Oil, Gas and Metals National Corp. (JOGMEC), a government-affiliated organization, which has been playing a leadership role in Japan’s marine resources exploration. “We have many problems to solve, but hope to establish a new method in five years to mine seabed resources and raise them from the ocean,” said Nobuyuki Okamoto, the chief of JOGMEC’s abyssal floor survey section. The Hakurei is just one sign that Japan is increasing its presence in the area of oceanic development. In March, the deep-sea drilling vessel Chikyu successfully extracted natural gas from offshore methane hydrate deposits for the first time in the world off Atsumi Peninsula in Aichi Prefecture. China CP 1NC Text: The People’s Republic of China should ____________. The CP solves – China leads in ocean exploration and has extensive technological expertise Yuanqing, China Daily European Edition, 14 [Sun, 7-3-14, China Daily European Edition, “China takes lead in underwater exploration,” http://www.lexisnexis.com/lnacui2api/api/version1/getDocCui?lni=5CK2-K9P1-JB4BV2F4&csi=270944,270077,11059,8411&hl=t&hv=t&hnsd=f&hns=t&hgn=t&oc=00240&perma=true, 7-13-14, FCB] "Today, it is China that is leading the world in its commitment to manned deep ocean exploration," says Krov Menuhin, chairman of the award committee and advisory board member at the Historical Diving Society, an international non-profit organization that studies man's underwater activities and promotes public awareness of the ocean.
"And the far-sighted vision, the courage and the immense engagement to implement this program is in keeping with the pioneering spirit of Hans Hass. He entered the ocean with the same vision, courage and commitment," he says. The winners received a framed cast bronze plaque, with an image of Hans Hass, designed by ocean artist Wyland. And Blancpain presented them Fifty Fathoms Bathyscaphe diving watches with specially engraved cases. The brand will serve as the official time keeper for Jiaolong's future underwater expeditions. It also announced a collaboration with the State Oceanic Administration to launch projects to raise public consciousness of the ocean in China in the coming years. The details are still being discussed. "We are very impressed with Jiaolong with its ability to constantly dive into new depths, especially its crew, whose courage, focus and action enabled them to reach new frontiers all the time," says Marc Junod, vice-president and head of sales at Blancpain. The research and development of Jiaolong basically started from zero in 2002. None of the crew members had seen, let alone been in, a virtual submersible before. Fu Wentao, one of the oceanauts of Jiaolong, shared his experience underwater, including encounters with curious creatures. "Unlike the terrestrial creatures, those under the water are not cautious at all. They are actually very curious and will even swim toward us," Fu says. Cui is planning to launch a project to develop a submersible that will be able to dive as deep as 11,000 meters with financial support from both the government and the private sector. "The combination will fuel faster development in underwater science," Cui says. "The sea is vast and rich, but we have a lot of research to do before we can exploit it." While funds for the financing of manned deep-ocean explorations in the West are drying up, China has just committed to a long-term project that will change the way everyone thinks about the sea, says Menuhin. As the creator of the world's first modern diving wristwatch, Blancpain has long been a supporter of major manned deep-water explorations. "We are not just getting involved today because it is trendy to protect the Ocean. Our philosophy is to help as many people as possible to learn about, and get familiar with, the underwater world. Because we believe that people can only respect and protect what they love. And they can only love what they know," says Junod. 2NC – Solvency – General China has ocean exploration equipment – it solves the aff Kashyap, Senior Editor at the International Business Times, 14 [Arjun, 1-27-14, “China-Led International Ocean Exploration Mission To Look For Oil In South China Sea, Including In Disputed Regions”, http://www.ibtimes.com/china-led-international-ocean-explorationmission-look-oil-south-china-sea-including-disputed, Lexis, 7-13-14, FCB] In a first-of-its-kind exercise for the world’s second-largest economy, an international scientific expedition to look for oil in the South China Sea will set sail from Hong Kong on Tuesday, according to the South China Morning Post. The trip is part of the latest edition of the decade-long International Ocean Discovery Program that will run from 2013 to 2023. The IODP was launched by the U.S. in the 1960s, and its latest effort will include 31 scientists from 10 countries drilling at three different sites for two months.
"Oil and gas fields lie close to the coast, but the key is to open the treasure box buried beneath the basin," Wang Pinxian, a marine geologist and member of the Chinese Academy of Sciences, told the Post Monday. The IODP invited proposals from 26 member nations and, while a proposal to drill in the controversial South China Sea -- first proposed by China in 2008 -- was not the most popular one, it was reportedly mainly chosen because the Chinese government agreed to pick up 70 percent, or $6 million, of the mission’s tab. The NSF, which used to contribute 70 per cent of the Joides Resolution's expenses, cut its annual ocean drilling budget to $50 million last year, said David Divins, director of the IODP’s ocean drilling program. The expedition will sail aboard the American scientific drill ship, Joides Resolution, operated by the National Science Foundation, or NSF, the Post reported, adding that the voyage will take the team to waters claimed variously by China, the Philippines and Vietnam. So far, the ship has received permission from the Philippines and Beijing but is waiting for a response from the Vietnamese government to drill at a site in the southwest part of the South China Sea, the Post reported, citing Divins, adding that the expedition may have to opt for an alternative site. Tensions stemming from China's energy interests are a constant undercurrent to the region's geopolitics. For instance, in May 2012, China began drilling to new depths in the South China Sea, 200 miles southeast of Hong Kong, with the launch of its first deep-water oil drilling rig, triggering tensions between Manila and Beijing. In December 2012, China had asked Vietnam to stop exploring for oil in disputed areas of the South China Sea and demanded that the latter not harass Chinese fishing boats. However, findings of the IODP expedition, which includes 13 scientists from mainland China, nine from the U.S. and one from Taiwan, will reportedly be shared around the world, including with countries that are not part of the program. 2NC – Solvency – Mineral Expedition The CP solves – China has the equipment for exploration – mineral expeditions prove The Balochistan Times, 14 [“China speeds up Indian Ocean exploration for minerals,” http://www.lexisnexis.com/lnacui2api/api/version1/getDocCui?lni=5BMF-K871-DXH0K41K&csi=270944,270077,11059,8411&hl=t&hv=t&hnsd=f&hns=t&hgn=t&oc=00240&perma=true, Lexis, 7-13-14, FCB] The State Oceanic Administration (SOA) hailed achievements by Chinese scientists onboard an oceanic research vessel surveying polymetallic deposits in the Indian Ocean, Authin mail web site reported. The "Dayang-1" vessel arrived at the ocean's polymetallic sulfide exploration contract area on Jan 26 and left on Feb 19. Scientists onboard the vessel discovered two seafloor hydrothermal areas and four hydrothermal anomaly areas, and deepened understanding about the overall area. They also gained insight on the origins of carbonate hydrothermal areas, and made successful attempts to explore for sulfide, said the SOA. Hydrothermal sulfide is a kind of sea-bed deposit containing copper, zinc and precious metals such as gold and silver. Those metals formed sulfides after chemical reactions and came to rest in the seabed in "chimney vents." Dayang-1 gathered three carbonate pieces and a "chimney vent," the first time Chinese scientists have collected such a structure from the Ocean.
The team also secured many other samples, and two people designated by the International Seabed Authority were trained during the expedition.